I0106 12:56:29.249834 8 e2e.go:243] Starting e2e run "ce9e7ac3-6b27-499d-a41e-0068bec19cab" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1578315387 - Will randomize all specs
Will run 215 of 4412 specs

Jan 6 12:56:29.555: INFO: >>> kubeConfig: /root/.kube/config
Jan 6 12:56:29.558: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 6 12:56:29.580: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 6 12:56:29.613: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 6 12:56:29.613: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 6 12:56:29.613: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 6 12:56:29.631: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 6 12:56:29.631: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 6 12:56:29.631: INFO: e2e test version: v1.15.7
Jan 6 12:56:29.633: INFO: kube-apiserver version: v1.15.1
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 6 12:56:29.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
Jan 6 12:56:29.843: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 6 12:56:29.902: INFO: Create a RollingUpdate DaemonSet
Jan 6 12:56:29.911: INFO: Check that daemon pods launch on every node of the cluster
Jan 6 12:56:30.012: INFO: Number of nodes with available pods: 0
Jan 6 12:56:30.012: INFO: Node iruya-node is running more than one daemon pod
Jan 6 12:56:32.124: INFO: Number of nodes with available pods: 0
Jan 6 12:56:32.124: INFO: Node iruya-node is running more than one daemon pod
Jan 6 12:56:33.682: INFO: Number of nodes with available pods: 0
Jan 6 12:56:33.682: INFO: Node iruya-node is running more than one daemon pod
Jan 6 12:56:34.031: INFO: Number of nodes with available pods: 0
Jan 6 12:56:34.031: INFO: Node iruya-node is running more than one daemon pod
Jan 6 12:56:35.041: INFO: Number of nodes with available pods: 0
Jan 6 12:56:35.041: INFO: Node iruya-node is running more than one daemon pod
Jan 6 12:56:36.025: INFO: Number of nodes with available pods: 0
Jan 6 12:56:36.025: INFO: Node iruya-node is running more than one daemon pod
Jan 6 12:56:37.743: INFO: Number of nodes with available pods: 0
Jan 6 12:56:37.743: INFO: Node iruya-node is running more than one daemon pod
Jan 6 12:56:38.187: INFO: Number of nodes with available pods: 0
Jan 6 12:56:38.187: INFO: Node iruya-node is running more than one daemon pod
Jan 6 12:56:39.269: INFO: Number of nodes with available pods: 0
Jan 6 12:56:39.269: INFO: Node iruya-node is running more than one daemon pod
Jan 6 12:56:40.113: INFO: Number of nodes with available pods: 1
Jan 6 12:56:40.113: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 6 12:56:41.032: INFO: Number of nodes with available pods: 1
Jan 6 12:56:41.032: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod
Jan 6 12:56:42.024: INFO: Number of nodes with available pods: 2
Jan 6 12:56:42.024: INFO: Number of running nodes: 2, number of available pods: 2
Jan 6 12:56:42.024: INFO: Update the DaemonSet to trigger a rollout
Jan 6 12:56:42.032: INFO: Updating DaemonSet daemon-set
Jan 6 12:56:49.095: INFO: Roll back the DaemonSet before rollout is complete
Jan 6 12:56:49.107: INFO: Updating DaemonSet daemon-set
Jan 6 12:56:49.107: INFO: Make sure DaemonSet rollback is complete
Jan 6 12:56:49.341: INFO: Wrong image for pod: daemon-set-bv5lz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 6 12:56:49.342: INFO: Pod daemon-set-bv5lz is not available
Jan 6 12:56:50.364: INFO: Wrong image for pod: daemon-set-bv5lz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 6 12:56:50.364: INFO: Pod daemon-set-bv5lz is not available
Jan 6 12:56:51.359: INFO: Wrong image for pod: daemon-set-bv5lz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 6 12:56:51.359: INFO: Pod daemon-set-bv5lz is not available
Jan 6 12:56:52.677: INFO: Wrong image for pod: daemon-set-bv5lz. Expected: docker.io/library/nginx:1.14-alpine, got: foo:non-existent.
Jan 6 12:56:52.677: INFO: Pod daemon-set-bv5lz is not available
Jan 6 12:56:53.477: INFO: Pod daemon-set-mk9gd is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6377, will wait for the garbage collector to delete the pods
Jan 6 12:56:53.678: INFO: Deleting DaemonSet.extensions daemon-set took: 125.991159ms
Jan 6 12:56:54.379: INFO: Terminating DaemonSet.extensions daemon-set pods took: 700.696216ms
Jan 6 12:57:00.784: INFO: Number of nodes with available pods: 0
Jan 6 12:57:00.784: INFO: Number of running nodes: 0, number of available pods: 0
Jan 6 12:57:00.789: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6377/daemonsets","resourceVersion":"19519753"},"items":null}
Jan 6 12:57:00.792: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6377/pods","resourceVersion":"19519753"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 6 12:57:00.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6377" for this suite.
Jan 6 12:57:06.875: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 12:57:06.998: INFO: namespace daemonsets-6377 deletion completed in 6.192843233s

• [SLOW TEST:37.365 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
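The rollback the test drives can be reproduced by hand with kubectl. A minimal sketch, assuming a DaemonSet named daemon-set whose container is called app (the container name is not shown in the log; the daemon-set name and both images are the ones it reports):

$ kubectl set image daemonset/daemon-set app=foo:non-existent   # trigger a rollout that can never complete
$ kubectl rollout undo daemonset/daemon-set                     # roll back before the rollout finishes
$ kubectl rollout status daemonset/daemon-set                   # pods settle back on nginx:1.14-alpine without extra restarts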
[sig-storage] Projected secret
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 6 12:57:06.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-dff70c0e-c58c-4b21-a8a6-0550f4a8a1e9
STEP: Creating secret with name s-test-opt-upd-a899b289-66b0-4f45-aecd-35278488b9a1
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-dff70c0e-c58c-4b21-a8a6-0550f4a8a1e9
STEP: Updating secret s-test-opt-upd-a899b289-66b0-4f45-aecd-35278488b9a1
STEP: Creating secret with name s-test-opt-create-fd548947-959a-4863-bf9d-b850f1dc1a51
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 6 12:57:23.564: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4486" for this suite.
Jan 6 12:57:47.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 12:57:47.754: INFO: namespace projected-4486 deletion completed in 24.18460148s

• [SLOW TEST:40.756 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
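What this test exercises is a projected volume whose secret sources are marked optional, so sources can be deleted, updated, and created while the pod runs and the kubelet re-syncs the mounted files. A hand-run sketch (secret and pod names are illustrative; the image is one the log shows cached on the nodes):

$ kubectl create secret generic s-test-opt-del --from-literal=data-1=value-1
$ kubectl create secret generic s-test-opt-upd --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  containers:
  - name: view
    image: docker.io/library/nginx:1.14-alpine
    volumeMounts:
    - name: secrets
      mountPath: /etc/projected
  volumes:
  - name: secrets
    projected:
      sources:
      - secret:
          name: s-test-opt-del
          optional: true          # pod starts even if this source is missing
      - secret:
          name: s-test-opt-upd
          optional: true
EOF
$ kubectl delete secret s-test-opt-del              # this source's files disappear on the next kubelet sync
$ kubectl exec projected-secret-demo -- ls /etc/projected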
[sig-api-machinery] CustomResourceDefinition resources
  Simple CustomResourceDefinition
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 6 12:57:47.755: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan 6 12:57:47.872: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 6 12:57:49.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2398" for this suite.
Jan 6 12:57:55.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 12:57:55.315: INFO: namespace custom-resource-definition-2398 deletion completed in 6.233383524s

• [SLOW TEST:7.560 seconds]
[sig-api-machinery] CustomResourceDefinition resources
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:35
    creating/deleting custom resource definition objects works [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSS
------------------------------
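The round-trip being timed here is just registering a definition and deleting it again. An equivalent by hand, using the v1beta1 API that a v1.15 server serves (group and kind are illustrative):

$ kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com      # must be <plural>.<group>
spec:
  group: example.com
  version: v1
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
EOF
$ kubectl get crd testcrds.example.com
$ kubectl delete crd testcrds.example.com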
[sig-apps] ReplicationController
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 6 12:57:55.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 6 12:58:04.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4460" for this suite.
Jan 6 12:58:18.601: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 12:58:18.698: INFO: namespace replication-controller-4460 deletion completed in 14.141077626s

• [SLOW TEST:23.384 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
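Adoption means the controller takes ownership of a pre-existing pod that matches its selector instead of creating a fresh replica. A sketch (the name=pod-adoption label mirrors the STEP text; the manifest itself is illustrative):

$ kubectl run pod-adoption --image=docker.io/library/nginx:1.14-alpine --restart=Never --labels=name=pod-adoption
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption
spec:
  replicas: 1
  selector:
    name: pod-adoption        # matches the orphan pod created above
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
$ kubectl get pod pod-adoption -o jsonpath='{.metadata.ownerReferences[0].kind}'   # prints ReplicationController once adopted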
[sig-apps] StatefulSet
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 6 12:58:18.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-1076
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1076
STEP: Creating statefulset with conflicting port in namespace statefulset-1076
STEP: Waiting until pod test-pod will start running in namespace statefulset-1076
STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-1076
Jan 6 12:58:27.058: INFO: Observed stateful pod in namespace: statefulset-1076, name: ss-0, uid: d34328fc-c19c-4c0f-804d-caa24dee257f, status phase: Pending. Waiting for statefulset controller to delete.
Jan 6 13:03:27.058: INFO: Pod ss-0 expected to be re-created at least once
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 6 13:03:27.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-1076'
Jan 6 13:03:29.541: INFO: stderr: ""
Jan 6 13:03:29.541: INFO: stdout: "Name: ss-0\nNamespace: statefulset-1076\nPriority: 0\nNode: iruya-node/\nLabels: baz=blah\n controller-revision-hash=ss-6f98bdb9c4\n foo=bar\n statefulset.kubernetes.io/pod-name=ss-0\nAnnotations: <none>\nStatus: Pending\nIP: \nControlled By: StatefulSet/ss\nContainers:\n nginx:\n Image: docker.io/library/nginx:1.14-alpine\n Port: 21017/TCP\n Host Port: 21017/TCP\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-4q7ml (ro)\nVolumes:\n default-token-4q7ml:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-4q7ml\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Warning PodFitsHostPorts 5m9s kubelet, iruya-node Predicate PodFitsHostPorts failed\n"
Jan 6 13:03:29.541: INFO: Output of kubectl describe ss-0:
Name: ss-0
Namespace: statefulset-1076
Priority: 0
Node: iruya-node/
Labels: baz=blah
    controller-revision-hash=ss-6f98bdb9c4
    foo=bar
    statefulset.kubernetes.io/pod-name=ss-0
Annotations: <none>
Status: Pending
IP: 
Controlled By: StatefulSet/ss
Containers:
  nginx:
    Image: docker.io/library/nginx:1.14-alpine
    Port: 21017/TCP
    Host Port: 21017/TCP
    Environment: <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4q7ml (ro)
Volumes:
  default-token-4q7ml:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-4q7ml
    Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
    node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From                 Message
  ----     ------            ----  ----                 -------
  Warning  PodFitsHostPorts  5m9s  kubelet, iruya-node  Predicate PodFitsHostPorts failed

Jan 6 13:03:29.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-1076 --tail=100'
Jan 6 13:03:29.747: INFO: rc: 1
Jan 6 13:03:29.748: INFO: Last 100 log lines of ss-0:
Jan 6 13:03:29.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po test-pod --namespace=statefulset-1076'
Jan 6 13:03:29.903: INFO: stderr: ""
Jan 6 13:03:29.904: INFO: stdout: "Name: test-pod\nNamespace: statefulset-1076\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Mon, 06 Jan 2020 12:58:18 +0000\nLabels: <none>\nAnnotations: <none>\nStatus: Running\nIP: 10.44.0.1\nContainers:\n nginx:\n Container ID: docker://95647109e2086e916c6ae8c720f3d6249bf65674ab5b86b155268aeb78383168\n Image: docker.io/library/nginx:1.14-alpine\n Image ID: docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\n Port: 21017/TCP\n Host Port: 21017/TCP\n State: Running\n Started: Mon, 06 Jan 2020 12:58:25 +0000\n Ready: True\n Restart Count: 0\n Environment: <none>\n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-4q7ml (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-4q7ml:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-4q7ml\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: <none>\nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Pulled 5m6s kubelet, iruya-node Container image \"docker.io/library/nginx:1.14-alpine\" already present on machine\n Normal Created 5m4s kubelet, iruya-node Created container nginx\n Normal Started 5m4s kubelet, iruya-node Started container nginx\n"
Jan 6 13:03:29.904: INFO: Output of kubectl describe test-pod:
Name: test-pod
Namespace: statefulset-1076
Priority: 0
Node: iruya-node/10.96.3.65
Start Time: Mon, 06 Jan 2020 12:58:18 +0000
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.44.0.1
Containers:
  nginx:
    Container ID: docker://95647109e2086e916c6ae8c720f3d6249bf65674ab5b86b155268aeb78383168
    Image: docker.io/library/nginx:1.14-alpine
    Image ID: docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7
    Port: 21017/TCP
    Host Port: 21017/TCP
    State: Running
      Started: Mon, 06 Jan 2020 12:58:25 +0000
    Ready: True
    Restart Count: 0
    Environment: <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-4q7ml (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  default-token-4q7ml:
    Type: Secret (a volume populated by a Secret)
    SecretName: default-token-4q7ml
    Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
    node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type    Reason   Age   From                 Message
  ----    ------   ----  ----                 -------
  Normal  Pulled   5m6s  kubelet, iruya-node  Container image "docker.io/library/nginx:1.14-alpine" already present on machine
  Normal  Created  5m4s  kubelet, iruya-node  Created container nginx
  Normal  Started  5m4s  kubelet, iruya-node  Started container nginx

Jan 6 13:03:29.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs test-pod --namespace=statefulset-1076 --tail=100'
Jan 6 13:03:30.036: INFO: stderr: ""
Jan 6 13:03:30.037: INFO: stdout: ""
Jan 6 13:03:30.037: INFO: Last 100 log lines of test-pod:
Jan 6 13:03:30.037: INFO: Deleting all statefulset in ns statefulset-1076
Jan 6 13:03:30.041: INFO: Scaling statefulset ss to 0
Jan 6 13:03:40.160: INFO: Waiting for statefulset status.replicas updated to 0
Jan 6 13:03:40.170: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Collecting events from namespace "statefulset-1076".
STEP: Found 10 events.
Jan 6 13:03:40.250: INFO: At 2020-01-06 12:58:19 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.
Jan 6 13:03:40.250: INFO: At 2020-01-06 12:58:19 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Jan 6 13:03:40.250: INFO: At 2020-01-06 12:58:19 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-1076/ss is recreating failed Pod ss-0
Jan 6 13:03:40.250: INFO: At 2020-01-06 12:58:19 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Jan 6 13:03:40.250: INFO: At 2020-01-06 12:58:19 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 6 13:03:40.251: INFO: At 2020-01-06 12:58:19 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 6 13:03:40.251: INFO: At 2020-01-06 12:58:20 +0000 UTC - event for ss-0: {kubelet iruya-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Jan 6 13:03:40.251: INFO: At 2020-01-06 12:58:23 +0000 UTC - event for test-pod: {kubelet iruya-node} Pulled: Container image "docker.io/library/nginx:1.14-alpine" already present on machine
Jan 6 13:03:40.251: INFO: At 2020-01-06 12:58:25 +0000 UTC - event for test-pod: {kubelet iruya-node} Created: Created container nginx
Jan 6 13:03:40.251: INFO: At 2020-01-06 12:58:25 +0000 UTC - event for test-pod: {kubelet iruya-node} Started: Started container nginx
Jan 6 13:03:40.262: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 6 13:03:40.262: INFO: test-pod iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 12:58:18 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 12:58:26 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 12:58:26 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 12:58:18 +0000 UTC }]
Jan 6 13:03:40.262: INFO: 
Jan 6 13:03:40.277: INFO: Logging node info for node iruya-node
Jan 6 13:03:40.285: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-node,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-node,UID:b2aa273d-23ea-4c86-9e2f-72569e3392bd,ResourceVersion:19520380,Generation:0,CreationTimestamp:2019-08-04 09:01:39 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-node,kubernetes.io/os: linux,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.1.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-10-12 11:56:49 +0000 UTC 2019-10-12 11:56:49 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-01-06 13:03:00 +0000 UTC 2019-08-04 09:01:39
+0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-06 13:03:00 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-06 13:03:00 +0000 UTC 2019-08-04 09:01:39 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-01-06 13:03:00 +0000 UTC 2019-08-04 09:02:19 +0000 UTC KubeletReady kubelet is posting ready status. AppArmor enabled}],Addresses:[{InternalIP 10.96.3.65} {Hostname iruya-node}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:f573dcf04d6f4a87856a35d266a2fa7a,SystemUUID:F573DCF0-4D6F-4A87-856A-35D266A2FA7A,BootID:8baf4beb-8391-43e6-b17b-b1e184b5370a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15] 246640776} {[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0] 195659796} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10] 61365829} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[gcr.io/kubernetes-e2e-test-images/nettest@sha256:6aa91bc71993260a87513e31b672ec14ce84bc253cd5233406c6946d3a8f55a1 gcr.io/kubernetes-e2e-test-images/nettest:1.0] 27413498} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0] 11443478} {[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd 
gcr.io/kubernetes-e2e-test-images/dnsutils:1.1] 9349974} {[gcr.io/kubernetes-e2e-test-images/hostexec@sha256:90dfe59da029f9e536385037bc64e86cd3d6e55bae613ddbe69e554d79b0639d gcr.io/kubernetes-e2e-test-images/hostexec:1.1] 8490662} {[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0] 6757579} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1] 5851985} {[gcr.io/kubernetes-e2e-test-images/liveness@sha256:71c3fc838e0637df570497febafa0ee73bf47176dfd43612de5c55a71230674e gcr.io/kubernetes-e2e-test-images/liveness:1.1] 5829944} {[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest] 5496756} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0] 4732240} {[gcr.io/kubernetes-e2e-test-images/porter@sha256:d6389405e453950618ae7749d9eee388f0eb32e0328a7e6583c41433aa5f2a77 gcr.io/kubernetes-e2e-test-images/porter:1.0] 4681408} {[gcr.io/kubernetes-e2e-test-images/entrypoint-tester@sha256:ba4681b5299884a3adca70fbde40638373b437a881055ffcd0935b5f43eb15c9 gcr.io/kubernetes-e2e-test-images/entrypoint-tester:1.0] 2729534} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} {[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0] 1450451} {[busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e busybox:latest] 1219782} {[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29] 1154361} {[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jan 6 13:03:40.286: INFO: Logging kubelet events for node iruya-node
Jan 6 13:03:40.292: INFO: Logging pods the kubelet thinks is on node iruya-node
Jan 6 13:03:40.307: INFO: weave-net-rlp57 started at 2019-10-12 11:56:39 +0000 UTC (0+2 container statuses recorded)
Jan 6 13:03:40.307: INFO: Container weave ready: true, restart count 0
Jan 6 13:03:40.307: INFO: Container weave-npc ready: true, restart count 0
Jan 6 13:03:40.307: INFO: test-pod started at 2020-01-06 12:58:18 +0000 UTC (0+1 container statuses recorded)
Jan 6 13:03:40.307: INFO: Container nginx ready: true, restart count 0
Jan 6 13:03:40.307: INFO: kube-proxy-976zl started at 2019-08-04 09:01:39 +0000 UTC (0+1 container statuses recorded)
Jan 6 13:03:40.307: INFO: Container kube-proxy ready: true, restart count 0
W0106 13:03:40.316918 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 6 13:03:40.409: INFO: Latency metrics for node iruya-node
Jan 6 13:03:40.409: INFO: Logging node info for node iruya-server-sfge57q7djm7
Jan 6 13:03:40.426: INFO: Node Info: &Node{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:iruya-server-sfge57q7djm7,GenerateName:,Namespace:,SelfLink:/api/v1/nodes/iruya-server-sfge57q7djm7,UID:67f2a658-4743-4118-95e7-463a23bcd212,ResourceVersion:19520439,Generation:0,CreationTimestamp:2019-08-04 08:52:09 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{beta.kubernetes.io/arch: amd64,beta.kubernetes.io/os: linux,kubernetes.io/arch: amd64,kubernetes.io/hostname: iruya-server-sfge57q7djm7,kubernetes.io/os: linux,node-role.kubernetes.io/master: ,},Annotations:map[string]string{kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock,node.alpha.kubernetes.io/ttl: 0,volumes.kubernetes.io/controller-managed-attach-detach: true,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:NodeSpec{PodCIDR:10.96.0.0/24,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[],ConfigSource:nil,},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[{NetworkUnavailable False 2019-08-04 08:53:00 +0000 UTC 2019-08-04 08:53:00 +0000 UTC WeaveIsUp Weave pod has set this} {MemoryPressure False 2020-01-06 13:03:39 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientMemory kubelet has sufficient memory available} {DiskPressure False 2020-01-06 13:03:39 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasNoDiskPressure kubelet has no disk pressure} {PIDPressure False 2020-01-06 13:03:39 +0000 UTC 2019-08-04 08:52:04 +0000 UTC KubeletHasSufficientPID kubelet has sufficient PID available} {Ready True 2020-01-06 13:03:39 +0000 UTC 2019-08-04 08:53:09 +0000 UTC KubeletReady kubelet is posting ready status.
AppArmor enabled}],Addresses:[{InternalIP 10.96.2.216} {Hostname iruya-server-sfge57q7djm7}],DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:78bacef342604a51913cae58dd95802b,SystemUUID:78BACEF3-4260-4A51-913C-AE58DD95802B,BootID:db143d3a-01b3-4483-b23e-e72adff2b28d,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.15.1,KubeProxyVersion:v1.15.1,OperatingSystem:linux,Architecture:amd64,},Images:[{[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6] 373099368} {[k8s.gcr.io/etcd@sha256:17da501f5d2a675be46040422a27b7cc21b8a43895ac998b171db1c346f361f7 k8s.gcr.io/etcd:3.3.10] 258116302} {[k8s.gcr.io/kube-apiserver@sha256:304a1c38707834062ee87df62ef329d52a8b9a3e70459565d0a396479073f54c k8s.gcr.io/kube-apiserver:v1.15.1] 206827454} {[k8s.gcr.io/kube-controller-manager@sha256:9abae95e428e228fe8f6d1630d55e79e018037460f3731312805c0f37471e4bf k8s.gcr.io/kube-controller-manager:v1.15.1] 158722622} {[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2] 148150868} {[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine] 126894770} {[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine] 123781643} {[gcr.io/google-samples/gb-redisslave@sha256:57730a481f97b3321138161ba2c8c9ca3b32df32ce9180e4029e6940446800ec gcr.io/google-samples/gb-redisslave:v3] 98945667} {[k8s.gcr.io/kube-proxy@sha256:08186f4897488e96cb098dd8d1d931af9a6ea718bb8737bf44bb76e42075f0ce k8s.gcr.io/kube-proxy:v1.15.1] 82408284} {[k8s.gcr.io/kube-scheduler@sha256:d0ee18a9593013fbc44b1920e4930f29b664b59a3958749763cb33b57e0e8956 k8s.gcr.io/kube-scheduler:v1.15.1] 81107582} {[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6] 57345321} {[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2] 49569458} {[k8s.gcr.io/coredns@sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4 k8s.gcr.io/coredns:1.3.1] 40303560} {[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine] 29331594} {[nginx@sha256:57a226fb6ab6823027c0704a9346a890ffb0cacde06bc19bbc234c8720673555 nginx:1.15-alpine] 16087791} {[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine] 16032814} {[gcr.io/kubernetes-e2e-test-images/netexec@sha256:203f0e11dde4baf4b08e27de094890eb3447d807c8b3e990b764b799d3a9e8b7 gcr.io/kubernetes-e2e-test-images/netexec:1.1] 6705349} {[gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 gcr.io/kubernetes-e2e-test-images/redis:1.0] 5905732} {[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0] 4753501} {[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0] 4747037} {[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0] 1563521} 
{[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1] 742472} {[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest] 239840}],VolumesInUse:[],VolumesAttached:[],Config:nil,},}
Jan 6 13:03:40.427: INFO: Logging kubelet events for node iruya-server-sfge57q7djm7
Jan 6 13:03:40.523: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7
Jan 6 13:03:40.546: INFO: kube-apiserver-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:39 +0000 UTC (0+1 container statuses recorded)
Jan 6 13:03:40.546: INFO: Container kube-apiserver ready: true, restart count 0
Jan 6 13:03:40.546: INFO: kube-scheduler-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:43 +0000 UTC (0+1 container statuses recorded)
Jan 6 13:03:40.546: INFO: Container kube-scheduler ready: true, restart count 12
Jan 6 13:03:40.546: INFO: coredns-5c98db65d4-xx8w8 started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Jan 6 13:03:40.546: INFO: Container coredns ready: true, restart count 0
Jan 6 13:03:40.546: INFO: etcd-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:38 +0000 UTC (0+1 container statuses recorded)
Jan 6 13:03:40.546: INFO: Container etcd ready: true, restart count 0
Jan 6 13:03:40.546: INFO: weave-net-bzl4d started at 2019-08-04 08:52:37 +0000 UTC (0+2 container statuses recorded)
Jan 6 13:03:40.546: INFO: Container weave ready: true, restart count 0
Jan 6 13:03:40.546: INFO: Container weave-npc ready: true, restart count 0
Jan 6 13:03:40.546: INFO: coredns-5c98db65d4-bm4gs started at 2019-08-04 08:53:12 +0000 UTC (0+1 container statuses recorded)
Jan 6 13:03:40.546: INFO: Container coredns ready: true, restart count 0
Jan 6 13:03:40.546: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 started at 2019-08-04 08:51:42 +0000 UTC (0+1 container statuses recorded)
Jan 6 13:03:40.546: INFO: Container kube-controller-manager ready: true, restart count 18
Jan 6 13:03:40.546: INFO: kube-proxy-58v95 started at 2019-08-04 08:52:37 +0000 UTC (0+1 container statuses recorded)
Jan 6 13:03:40.546: INFO: Container kube-proxy ready: true, restart count 0
W0106 13:03:40.553606 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 6 13:03:40.595: INFO: Latency metrics for node iruya-server-sfge57q7djm7
Jan 6 13:03:40.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1076" for this suite.
Jan 6 13:04:02.674: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 13:04:02.785: INFO: namespace statefulset-1076 deletion completed in 22.185226474s

• Failure [344.086 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Should recreate evicted statefulset [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697

    Jan 6 13:03:27.058: Pod ss-0 expected to be re-created at least once

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769
------------------------------
SSSSSS
------------------------------
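The failure comes down to a host-port collision: test-pod already binds host port 21017 on iruya-node, and ss-0 requests the same hostPort, so the kubelet rejects it with the PodFitsHostPorts predicate; the test then times out waiting for a recreated ss-0 to come back. The collision can be inspected directly (namespace and pod names as reported above):

$ kubectl get events -n statefulset-1076 --field-selector involvedObject.name=ss-0   # shows the PodFitsHostPorts warnings
$ kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}'   # lists which pods claim host ports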
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 6 13:04:02.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-85056166-408d-4772-ae58-57e17aaeb77e
STEP: Creating a pod to test consume configMaps
Jan 6 13:04:03.079: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4c4524c2-f0cf-441b-981c-143e651846b4" in namespace "projected-6062" to be "success or failure"
Jan 6 13:04:03.117: INFO: Pod "pod-projected-configmaps-4c4524c2-f0cf-441b-981c-143e651846b4": Phase="Pending", Reason="", readiness=false. Elapsed: 37.691359ms
Jan 6 13:04:05.133: INFO: Pod "pod-projected-configmaps-4c4524c2-f0cf-441b-981c-143e651846b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054297201s
Jan 6 13:04:07.140: INFO: Pod "pod-projected-configmaps-4c4524c2-f0cf-441b-981c-143e651846b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060940933s
Jan 6 13:04:09.152: INFO: Pod "pod-projected-configmaps-4c4524c2-f0cf-441b-981c-143e651846b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073255273s
Jan 6 13:04:11.174: INFO: Pod "pod-projected-configmaps-4c4524c2-f0cf-441b-981c-143e651846b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.095078121s
STEP: Saw pod success
Jan 6 13:04:11.174: INFO: Pod "pod-projected-configmaps-4c4524c2-f0cf-441b-981c-143e651846b4" satisfied condition "success or failure"
Jan 6 13:04:11.182: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-4c4524c2-f0cf-441b-981c-143e651846b4 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 6 13:04:11.317: INFO: Waiting for pod pod-projected-configmaps-4c4524c2-f0cf-441b-981c-143e651846b4 to disappear
Jan 6 13:04:11.326: INFO: Pod pod-projected-configmaps-4c4524c2-f0cf-441b-981c-143e651846b4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 6 13:04:11.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6062" for this suite.
Jan 6 13:04:17.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 13:04:17.500: INFO: namespace projected-6062 deletion completed in 6.166760154s

• [SLOW TEST:14.714 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
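"Mappings and Item mode" refers to the items list of a configMap source inside a projected volume: each item remaps a key to a new path and can set a per-file mode. A sketch of the moving parts (all names illustrative; busybox:1.29 is among the images the log shows cached on the nodes):

$ kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-pod
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "ls -lL /etc/cm/path/to/data-1 && cat /etc/cm/path/to/data-1"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    projected:
      sources:
      - configMap:
          name: projected-cm-demo
          items:
          - key: data-1
            path: path/to/data-1   # the "mapping": key renamed to a nested path
            mode: 0400             # the "Item mode": per-file permission bits
EOF
$ kubectl logs projected-cm-pod    # the file shows up at the remapped path; ls -lL follows the symlink to show mode -r--------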
[k8s.io] Pods
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 6 13:04:17.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 6 13:04:17.574: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 6 13:04:36.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-497" for this suite.
Jan 6 13:04:42.637: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 13:04:42.720: INFO: namespace pods-497 deletion completed in 6.10777725s

• [SLOW TEST:25.219 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSS
------------------------------
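The submit/remove test sets up a watch and asserts that creation, graceful termination, and deletion are all observed as events. The same flow is visible from the CLI (pod name illustrative):

$ kubectl get pods -w &                                  # stream pod state changes as they happen
$ kubectl run pod-submit-demo --image=docker.io/library/nginx:1.14-alpine --restart=Never
$ kubectl delete pod pod-submit-demo --grace-period=30   # graceful delete, as the test does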
[sig-cli] Kubectl client [k8s.io] Kubectl run rc
  should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 6 13:04:42.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1456
[It] should create an rc from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan 6 13:04:42.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-9913'
Jan 6 13:04:43.595: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 6 13:04:43.595: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: verifying the pod controlled by rc e2e-test-nginx-rc was created
STEP: confirm that you can get logs from an rc
Jan 6 13:04:43.775: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-nginx-rc-z5dtb]
Jan 6 13:04:43.776: INFO: Waiting up to 5m0s for pod "e2e-test-nginx-rc-z5dtb" in namespace "kubectl-9913" to be "running and ready"
Jan 6 13:04:43.809: INFO: Pod "e2e-test-nginx-rc-z5dtb": Phase="Pending", Reason="", readiness=false. Elapsed: 33.166774ms
Jan 6 13:04:45.817: INFO: Pod "e2e-test-nginx-rc-z5dtb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040937406s
Jan 6 13:04:47.831: INFO: Pod "e2e-test-nginx-rc-z5dtb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055722461s
Jan 6 13:04:49.843: INFO: Pod "e2e-test-nginx-rc-z5dtb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066815109s
Jan 6 13:04:51.881: INFO: Pod "e2e-test-nginx-rc-z5dtb": Phase="Running", Reason="", readiness=true. Elapsed: 8.105413329s
Jan 6 13:04:51.881: INFO: Pod "e2e-test-nginx-rc-z5dtb" satisfied condition "running and ready"
Jan 6 13:04:51.881: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-nginx-rc-z5dtb]
Jan 6 13:04:51.882: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-nginx-rc --namespace=kubectl-9913'
Jan 6 13:04:52.130: INFO: stderr: ""
Jan 6 13:04:52.130: INFO: stdout: ""
[AfterEach] [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461
Jan 6 13:04:52.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-9913'
Jan 6 13:04:52.344: INFO: stderr: ""
Jan 6 13:04:52.344: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 6 13:04:52.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9913" for this suite.
Jan 6 13:05:14.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 13:05:14.963: INFO: namespace kubectl-9913 deletion completed in 22.613655221s

• [SLOW TEST:32.242 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create an rc from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
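Stripped of the e2e harness, the commands this test relies on can be run by hand; note the stderr above already names the replacements for the deprecated generator (--generator=run-pod/v1 for a bare pod, or an explicit manifest for a controller):

$ kubectl run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1   # deprecated RC generator, as used above
$ kubectl logs rc/e2e-test-nginx-rc   # kubectl resolves the controller to one of its pods
$ kubectl delete rc e2e-test-nginx-rc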
[k8s.io] Probing container
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 6 13:05:14.963: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod busybox-8d9faa85-a438-4374-a539-5d752eb9730a in namespace container-probe-4853
Jan 6 13:05:23.108: INFO: Started pod busybox-8d9faa85-a438-4374-a539-5d752eb9730a in namespace container-probe-4853
STEP: checking the pod's current state and verifying that restartCount is present
Jan 6 13:05:23.112: INFO: Initial restart count of pod busybox-8d9faa85-a438-4374-a539-5d752eb9730a is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 6 13:09:23.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4853" for this suite.
Jan 6 13:09:29.865: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 13:09:30.005: INFO: namespace container-probe-4853 deletion completed in 6.188194869s

• [SLOW TEST:255.042 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
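The four-minute window above is the test watching restartCount stay at 0 while the exec probe keeps passing. A sketch of the shape of such a pod (the e2e fixture's exact command lives in the test sources, not the log; this one simply keeps /tmp/health present so "cat" always succeeds):

$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]   # succeeds on every probe, so no restarts
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
$ kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'   # stays 0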
[sig-storage] ConfigMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan 6 13:09:30.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-fa647c0d-0913-4530-8a54-8112827abe96
STEP: Creating a pod to test consume configMaps
Jan 6 13:09:30.145: INFO: Waiting up to 5m0s for pod "pod-configmaps-0a3411e5-b664-4dce-ab48-4d5cde63e75d" in namespace "configmap-9608" to be "success or failure"
Jan 6 13:09:30.160: INFO: Pod "pod-configmaps-0a3411e5-b664-4dce-ab48-4d5cde63e75d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.69532ms
Jan 6 13:09:32.173: INFO: Pod "pod-configmaps-0a3411e5-b664-4dce-ab48-4d5cde63e75d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027443118s
Jan 6 13:09:34.185: INFO: Pod "pod-configmaps-0a3411e5-b664-4dce-ab48-4d5cde63e75d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03972577s
Jan 6 13:09:36.197: INFO: Pod "pod-configmaps-0a3411e5-b664-4dce-ab48-4d5cde63e75d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051873274s
Jan 6 13:09:38.207: INFO: Pod "pod-configmaps-0a3411e5-b664-4dce-ab48-4d5cde63e75d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061824063s
STEP: Saw pod success
Jan 6 13:09:38.207: INFO: Pod "pod-configmaps-0a3411e5-b664-4dce-ab48-4d5cde63e75d" satisfied condition "success or failure"
Jan 6 13:09:38.213: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0a3411e5-b664-4dce-ab48-4d5cde63e75d container configmap-volume-test: 
STEP: delete the pod
Jan 6 13:09:38.304: INFO: Waiting for pod pod-configmaps-0a3411e5-b664-4dce-ab48-4d5cde63e75d to disappear
Jan 6 13:09:38.317: INFO: Pod pod-configmaps-0a3411e5-b664-4dce-ab48-4d5cde63e75d no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 6 13:09:38.317: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9608" for this suite.
Jan 6 13:09:44.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan 6 13:09:44.517: INFO: namespace configmap-9608 deletion completed in 6.147569215s

• [SLOW TEST:14.511 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSS
------------------------------
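This is the plain configMap volume counterpart of the projected case shown earlier; the items/mode mechanics are identical, only the volume source differs. The fragment that changes, sketched inside a minimal pod (names illustrative):

$ kubectl create configmap configmap-demo --from-literal=data-1=value-1
$ kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-pod
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox:1.29
    command: ["sh", "-c", "ls -lLR /etc/cm"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:              # plain configMap source instead of projected.sources
      name: configmap-demo
      items:
      - key: data-1
        path: path/to/data-1
        mode: 0400
EOF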
Jan 6 13:09:58.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:09:59.096: INFO: namespace projected-368 deletion completed in 6.187374936s • [SLOW TEST:14.578 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:09:59.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 6 13:10:07.780: INFO: Successfully updated pod "annotationupdate6c56ea52-02a8-4b5e-8876-342372e48fdd" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:10:09.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7160" for this suite. Jan 6 13:10:31.924: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:10:32.019: INFO: namespace downward-api-7160 deletion completed in 22.124570497s • [SLOW TEST:32.923 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:10:32.019: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81 Jan 6 13:10:32.173: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 6 13:10:32.193: INFO: Waiting for terminating namespaces to be deleted... 
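The Downward API annotation test summarized above works because downward API volume files are re-rendered by the kubelet when pod metadata changes, with no container restart. A sketch of the volume that exposes metadata.annotations as a file (the volume name and file path are illustrative):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "annotations",
                    FieldRef: &corev1.ObjectFieldSelector{
                        APIVersion: "v1",
                        FieldPath:  "metadata.annotations",
                    },
                }},
            },
        },
    }
    fmt.Printf("%+v\n", vol)
    // After the pod's annotations are updated via the API (the "Successfully
    // updated pod" line above), the kubelet rewrites the projected file on its
    // next sync, which is what the test polls for.
}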
Jan 6 13:10:32.199: INFO: Logging pods the kubelet thinks is on node iruya-node before test Jan 6 13:10:32.254: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container statuses recorded) Jan 6 13:10:32.254: INFO: Container kube-proxy ready: true, restart count 0 Jan 6 13:10:32.254: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded) Jan 6 13:10:32.254: INFO: Container weave ready: true, restart count 0 Jan 6 13:10:32.254: INFO: Container weave-npc ready: true, restart count 0 Jan 6 13:10:32.254: INFO: Logging pods the kubelet thinks is on node iruya-server-sfge57q7djm7 before test Jan 6 13:10:32.269: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container statuses recorded) Jan 6 13:10:32.269: INFO: Container kube-controller-manager ready: true, restart count 18 Jan 6 13:10:32.269: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container statuses recorded) Jan 6 13:10:32.269: INFO: Container kube-proxy ready: true, restart count 0 Jan 6 13:10:32.269: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container statuses recorded) Jan 6 13:10:32.269: INFO: Container kube-apiserver ready: true, restart count 0 Jan 6 13:10:32.270: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container statuses recorded) Jan 6 13:10:32.270: INFO: Container kube-scheduler ready: true, restart count 12 Jan 6 13:10:32.270: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 6 13:10:32.270: INFO: Container coredns ready: true, restart count 0 Jan 6 13:10:32.270: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container statuses recorded) Jan 6 13:10:32.270: INFO: Container coredns ready: true, restart count 0 Jan 6 13:10:32.270: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container statuses recorded) Jan 6 13:10:32.270: INFO: Container etcd ready: true, restart count 0 Jan 6 13:10:32.270: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded) Jan 6 13:10:32.270: INFO: Container weave ready: true, restart count 0 Jan 6 13:10:32.270: INFO: Container weave-npc ready: true, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-82ae1b97-22b3-40b4-a885-90fbaa15f4b6 42 STEP: Trying to relaunch the pod, now with labels. 
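For the NodeSelector predicate being validated here: the suite first schedules an unlabeled pod to discover a usable node, applies a generated kubernetes.io/e2e-<uuid>=42 label to that node, then relaunches the pod with a matching nodeSelector. A sketch of the relaunched pod, with a fixed label key standing in for the generated one (the pause image is the same one this cluster already runs):

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
        Spec: corev1.PodSpec{
            // Schedulable only onto a node carrying exactly this key/value pair.
            NodeSelector: map[string]string{
                "kubernetes.io/e2e-example": "42",
            },
            Containers: []corev1.Container{{
                Name:  "with-labels",
                Image: "k8s.gcr.io/pause:3.1",
            }},
        },
    }
    fmt.Printf("%+v\n", pod.Spec.NodeSelector)
}

Scheduling succeeds only while some node carries the exact pair, which is why the teardown below removes the label from iruya-node and verifies it is gone.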
STEP: removing the label kubernetes.io/e2e-82ae1b97-22b3-40b4-a885-90fbaa15f4b6 off the node iruya-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-82ae1b97-22b3-40b4-a885-90fbaa15f4b6 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:10:50.667: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7322" for this suite. Jan 6 13:11:04.733: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:11:04.897: INFO: namespace sched-pred-7322 deletion completed in 14.224644311s [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72 • [SLOW TEST:32.878 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:11:04.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification Jan 6 13:11:05.013: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4572,SelfLink:/api/v1/namespaces/watch-4572/configmaps/e2e-watch-test-configmap-a,UID:03d4df29-c708-4de7-bb54-5d227c2cc7af,ResourceVersion:19521283,Generation:0,CreationTimestamp:2020-01-06 13:11:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 6 13:11:05.013: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4572,SelfLink:/api/v1/namespaces/watch-4572/configmaps/e2e-watch-test-configmap-a,UID:03d4df29-c708-4de7-bb54-5d227c2cc7af,ResourceVersion:19521283,Generation:0,CreationTimestamp:2020-01-06 13:11:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: 
multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification Jan 6 13:11:15.035: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4572,SelfLink:/api/v1/namespaces/watch-4572/configmaps/e2e-watch-test-configmap-a,UID:03d4df29-c708-4de7-bb54-5d227c2cc7af,ResourceVersion:19521297,Generation:0,CreationTimestamp:2020-01-06 13:11:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} Jan 6 13:11:15.036: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4572,SelfLink:/api/v1/namespaces/watch-4572/configmaps/e2e-watch-test-configmap-a,UID:03d4df29-c708-4de7-bb54-5d227c2cc7af,ResourceVersion:19521297,Generation:0,CreationTimestamp:2020-01-06 13:11:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification Jan 6 13:11:25.052: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4572,SelfLink:/api/v1/namespaces/watch-4572/configmaps/e2e-watch-test-configmap-a,UID:03d4df29-c708-4de7-bb54-5d227c2cc7af,ResourceVersion:19521311,Generation:0,CreationTimestamp:2020-01-06 13:11:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 6 13:11:25.052: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4572,SelfLink:/api/v1/namespaces/watch-4572/configmaps/e2e-watch-test-configmap-a,UID:03d4df29-c708-4de7-bb54-5d227c2cc7af,ResourceVersion:19521311,Generation:0,CreationTimestamp:2020-01-06 13:11:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification Jan 6 13:11:35.076: INFO: Got : DELETED 
&ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4572,SelfLink:/api/v1/namespaces/watch-4572/configmaps/e2e-watch-test-configmap-a,UID:03d4df29-c708-4de7-bb54-5d227c2cc7af,ResourceVersion:19521326,Generation:0,CreationTimestamp:2020-01-06 13:11:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} Jan 6 13:11:35.076: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-a,GenerateName:,Namespace:watch-4572,SelfLink:/api/v1/namespaces/watch-4572/configmaps/e2e-watch-test-configmap-a,UID:03d4df29-c708-4de7-bb54-5d227c2cc7af,ResourceVersion:19521326,Generation:0,CreationTimestamp:2020-01-06 13:11:05 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-A,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification Jan 6 13:11:45.095: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4572,SelfLink:/api/v1/namespaces/watch-4572/configmaps/e2e-watch-test-configmap-b,UID:238c4e6f-0fda-4265-a2eb-1b31654e2728,ResourceVersion:19521341,Generation:0,CreationTimestamp:2020-01-06 13:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 6 13:11:45.096: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4572,SelfLink:/api/v1/namespaces/watch-4572/configmaps/e2e-watch-test-configmap-b,UID:238c4e6f-0fda-4265-a2eb-1b31654e2728,ResourceVersion:19521341,Generation:0,CreationTimestamp:2020-01-06 13:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification Jan 6 13:11:55.112: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4572,SelfLink:/api/v1/namespaces/watch-4572/configmaps/e2e-watch-test-configmap-b,UID:238c4e6f-0fda-4265-a2eb-1b31654e2728,ResourceVersion:19521355,Generation:0,CreationTimestamp:2020-01-06 13:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} Jan 6 13:11:55.112: INFO: Got 
: DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-configmap-b,GenerateName:,Namespace:watch-4572,SelfLink:/api/v1/namespaces/watch-4572/configmaps/e2e-watch-test-configmap-b,UID:238c4e6f-0fda-4265-a2eb-1b31654e2728,ResourceVersion:19521355,Generation:0,CreationTimestamp:2020-01-06 13:11:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: multiple-watchers-B,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:12:05.113: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4572" for this suite. Jan 6 13:12:11.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:12:11.379: INFO: namespace watch-4572 deletion completed in 6.255850706s • [SLOW TEST:66.481 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:12:11.379: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod test-webserver-c96d1480-4de5-4b76-8d05-9f51e7cd46c0 in namespace container-probe-5854 Jan 6 13:12:19.534: INFO: Started pod test-webserver-c96d1480-4de5-4b76-8d05-9f51e7cd46c0 in namespace container-probe-5854 STEP: checking the pod's current state and verifying that restartCount is present Jan 6 13:12:19.541: INFO: Initial restart count of pod test-webserver-c96d1480-4de5-4b76-8d05-9f51e7cd46c0 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:16:21.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5854" for this suite. 
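Returning to the Watchers case summarized above: each of the three watchers is an ordinary label-selector watch on ConfigMaps. A sketch of one of them using v1.15-era client-go, reusing this run's kubeConfig path; note that the namespace here is illustrative and that current client-go threads a context.Context and options structs through these calls:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    // v1.15-era signature: Watch takes ListOptions directly.
    w, err := clientset.CoreV1().ConfigMaps("default").Watch(metav1.ListOptions{
        LabelSelector: "watch-this-configmap=multiple-watchers-A",
    })
    if err != nil {
        panic(err)
    }
    defer w.Stop()
    // Each event carries the full object state at that revision.
    for ev := range w.ResultChan() {
        if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
            fmt.Println(ev.Type, cm.Name, cm.Data)
        }
    }
}

The doubled ADDED/MODIFIED/DELETED lines in the log are expected rather than noise: the label-A watch and the A-or-B watch each deliver an independent copy of every event on e2e-watch-test-configmap-a.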
Jan 6 13:16:27.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:16:27.713: INFO: namespace container-probe-5854 deletion completed in 6.178533892s • [SLOW TEST:256.334 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:16:27.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod busybox-1f887184-86c3-4010-9cc0-6a2147bf6c4b in namespace container-probe-22 Jan 6 13:16:35.850: INFO: Started pod busybox-1f887184-86c3-4010-9cc0-6a2147bf6c4b in namespace container-probe-22 STEP: checking the pod's current state and verifying that restartCount is present Jan 6 13:16:35.859: INFO: Initial restart count of pod busybox-1f887184-86c3-4010-9cc0-6a2147bf6c4b is 0 Jan 6 13:17:32.167: INFO: Restart count of pod container-probe-22/busybox-1f887184-86c3-4010-9cc0-6a2147bf6c4b is now 1 (56.308096417s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:17:32.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-22" for this suite. 
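The two container-probe cases above (a restart expected with the exec probe, none with the /healthz HTTP probe) both reduce to a corev1.Probe on the container. A sketch of each handler, written against the v1.15 API in which the handler union is the embedded Handler field (newer releases rename it ProbeHandler); the port and threshold numbers are illustrative:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
    // Exec probe from the "cat /tmp/health" cases: it passes while the file
    // exists and starts failing once the container removes it, so the kubelet
    // restarts the container (the restart observed at ~56s above).
    execProbe := &corev1.Probe{
        Handler:             corev1.Handler{Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}}},
        InitialDelaySeconds: 15,
        FailureThreshold:    1,
    }
    // HTTP probe from the /healthz case: the test-webserver keeps answering
    // 200, so restartCount must stay 0 for the whole observation window.
    httpProbe := &corev1.Probe{
        Handler: corev1.Handler{HTTPGet: &corev1.HTTPGetAction{
            Path: "/healthz",
            Port: intstr.FromInt(8080),
        }},
        InitialDelaySeconds: 15,
    }
    fmt.Printf("%+v\n%+v\n", execProbe, httpProbe)
}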
Jan 6 13:17:38.341: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:17:38.497: INFO: namespace container-probe-22 deletion completed in 6.19072715s • [SLOW TEST:70.784 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:17:38.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap configmap-5115/configmap-test-1e1c426b-e136-4346-aa96-e0c88d97cbce STEP: Creating a pod to test consume configMaps Jan 6 13:17:38.700: INFO: Waiting up to 5m0s for pod "pod-configmaps-6f391fca-7e7e-405a-9210-87763ae2840e" in namespace "configmap-5115" to be "success or failure" Jan 6 13:17:38.706: INFO: Pod "pod-configmaps-6f391fca-7e7e-405a-9210-87763ae2840e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.570484ms Jan 6 13:17:40.746: INFO: Pod "pod-configmaps-6f391fca-7e7e-405a-9210-87763ae2840e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045788204s Jan 6 13:17:42.753: INFO: Pod "pod-configmaps-6f391fca-7e7e-405a-9210-87763ae2840e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053296818s Jan 6 13:17:44.772: INFO: Pod "pod-configmaps-6f391fca-7e7e-405a-9210-87763ae2840e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07209763s Jan 6 13:17:46.785: INFO: Pod "pod-configmaps-6f391fca-7e7e-405a-9210-87763ae2840e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.084589336s STEP: Saw pod success Jan 6 13:17:46.785: INFO: Pod "pod-configmaps-6f391fca-7e7e-405a-9210-87763ae2840e" satisfied condition "success or failure" Jan 6 13:17:46.791: INFO: Trying to get logs from node iruya-node pod pod-configmaps-6f391fca-7e7e-405a-9210-87763ae2840e container env-test: STEP: delete the pod Jan 6 13:17:46.954: INFO: Waiting for pod pod-configmaps-6f391fca-7e7e-405a-9210-87763ae2840e to disappear Jan 6 13:17:46.962: INFO: Pod pod-configmaps-6f391fca-7e7e-405a-9210-87763ae2840e no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:17:46.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5115" for this suite. 
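Consuming a ConfigMap "via the environment", as the test above does, means a ValueFrom reference per variable rather than a volume. A sketch of the env-test container; the variable name and key are illustrative, and the whole map could equally be imported with EnvFrom and a ConfigMapEnvSource:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    container := corev1.Container{
        Name:    "env-test",
        Image:   "docker.io/library/busybox:1.29",
        Command: []string{"sh", "-c", "env"},
        Env: []corev1.EnvVar{{
            Name: "CONFIG_DATA_1",
            ValueFrom: &corev1.EnvVarSource{
                ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test"},
                    Key:                  "data-1",
                },
            },
        }},
    }
    fmt.Printf("%+v\n", container.Env)
}

The container just prints its environment and exits, so the assertion is again made against the pod's log after it reaches Succeeded.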
Jan 6 13:17:52.987: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:17:53.090: INFO: namespace configmap-5115 deletion completed in 6.123596566s • [SLOW TEST:14.591 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:17:53.090: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:18:18.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8158" for this suite. Jan 6 13:18:24.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:18:24.657: INFO: namespace namespaces-8158 deletion completed in 6.250167014s STEP: Destroying namespace "nsdeletetest-8096" for this suite. Jan 6 13:18:24.660: INFO: Namespace nsdeletetest-8096 was already deleted STEP: Destroying namespace "nsdeletetest-116" for this suite. 
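The namespace-deletion test accounts for the extra nsdeletetest-* namespaces in this run: it creates a throwaway namespace holding a running pod, deletes the namespace, waits for it to disappear, and verifies the pod went with it. A sketch of the delete-then-verify step with v1.15-era client-go signatures; the namespace name is illustrative, and newer client-go adds a context.Context and value options to both calls:

package main

import (
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(config)
    if err != nil {
        panic(err)
    }
    // Deleting a namespace is asynchronous: it enters Terminating while its
    // contents are garbage-collected, hence the polling seen in the log.
    if err := clientset.CoreV1().Namespaces().Delete("nsdeletetest", &metav1.DeleteOptions{}); err != nil {
        panic(err)
    }
    pods, err := clientset.CoreV1().Pods("nsdeletetest").List(metav1.ListOptions{})
    if err != nil {
        fmt.Println("list failed (namespace may already be gone):", err)
        return
    }
    fmt.Println("pods remaining:", len(pods.Items))
}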
Jan 6 13:18:30.684: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:18:30.829: INFO: namespace nsdeletetest-116 deletion completed in 6.168985341s • [SLOW TEST:37.739 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:18:30.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 6 13:18:30.979: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d52a77c0-df8c-4676-a60d-61e823312c0e" in namespace "projected-1098" to be "success or failure" Jan 6 13:18:30.984: INFO: Pod "downwardapi-volume-d52a77c0-df8c-4676-a60d-61e823312c0e": Phase="Pending", Reason="", readiness=false. Elapsed: 5.393552ms Jan 6 13:18:32.992: INFO: Pod "downwardapi-volume-d52a77c0-df8c-4676-a60d-61e823312c0e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012670811s Jan 6 13:18:35.033: INFO: Pod "downwardapi-volume-d52a77c0-df8c-4676-a60d-61e823312c0e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05374489s Jan 6 13:18:37.041: INFO: Pod "downwardapi-volume-d52a77c0-df8c-4676-a60d-61e823312c0e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.062320816s Jan 6 13:18:39.054: INFO: Pod "downwardapi-volume-d52a77c0-df8c-4676-a60d-61e823312c0e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074993861s STEP: Saw pod success Jan 6 13:18:39.054: INFO: Pod "downwardapi-volume-d52a77c0-df8c-4676-a60d-61e823312c0e" satisfied condition "success or failure" Jan 6 13:18:39.063: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-d52a77c0-df8c-4676-a60d-61e823312c0e container client-container: STEP: delete the pod Jan 6 13:18:39.181: INFO: Waiting for pod downwardapi-volume-d52a77c0-df8c-4676-a60d-61e823312c0e to disappear Jan 6 13:18:39.192: INFO: Pod downwardapi-volume-d52a77c0-df8c-4676-a60d-61e823312c0e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:18:39.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1098" for this suite. 
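The projected downward API memory-limit case above relies on a documented fallback: when the container declares no memory limit, limits.memory resolves to the node's allocatable memory, which is the value the test asserts against. A sketch of the projected volume involved; the file path is illustrative, while client-container is the container name shown in the log:

package main

import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    DownwardAPI: &corev1.DownwardAPIProjection{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "memory_limit",
                            // ResourceFieldRef in a volume must name the container;
                            // with no limit set, this reports node allocatable memory.
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "limits.memory",
                            },
                        }},
                    },
                }},
            },
        },
    }
    fmt.Printf("%+v\n", vol)
}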
Jan 6 13:18:45.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:18:45.321: INFO: namespace projected-1098 deletion completed in 6.120331613s • [SLOW TEST:14.491 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:18:45.321: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating the pod Jan 6 13:18:45.374: INFO: PodSpec: initContainers in spec.initContainers Jan 6 13:19:48.389: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-a895fd76-d9cf-4010-8c25-523929238149", GenerateName:"", Namespace:"init-container-8504", SelfLink:"/api/v1/namespaces/init-container-8504/pods/pod-init-a895fd76-d9cf-4010-8c25-523929238149", UID:"449a56fe-f122-449a-ae53-381439380ac6", ResourceVersion:"19522167", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63713913525, loc:(*time.Location)(0x7ea48a0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"374401160"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-bh4mn", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc00197e000), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), 
Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bh4mn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bh4mn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-bh4mn", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", 
SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0026d0288), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"iruya-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002f88000), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0026d0380)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0026d0490)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0026d0498), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0026d049c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913525, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913525, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913525, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913525, loc:(*time.Location)(0x7ea48a0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.3.65", PodIP:"10.44.0.1", StartTime:(*v1.Time)(0xc00153a180), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0024a6070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0024a60e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://ab271da0b8cc3276b69e71bad4e46dd94c9c863cb05916db6ea53eca861de4f7"}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00153a220), Running:(*v1.ContainerStateRunning)(nil), 
Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:""}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00153a1e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:""}}, QOSClass:"Guaranteed"}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:19:48.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8504" for this suite. Jan 6 13:20:10.480: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:20:10.644: INFO: namespace init-container-8504 deletion completed in 22.195468952s • [SLOW TEST:85.323 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:20:10.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating secret secrets-4580/secret-test-502604cc-606c-4f61-b6d4-c1d19ea20c25 STEP: Creating a pod to test consume secrets Jan 6 13:20:10.738: INFO: Waiting up to 5m0s for pod "pod-configmaps-1f904d80-82bf-4f5b-8d75-942c1777af0b" in namespace "secrets-4580" to be "success or failure" Jan 6 13:20:10.753: INFO: Pod "pod-configmaps-1f904d80-82bf-4f5b-8d75-942c1777af0b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.803409ms Jan 6 13:20:12.761: INFO: Pod "pod-configmaps-1f904d80-82bf-4f5b-8d75-942c1777af0b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023853845s Jan 6 13:20:14.788: INFO: Pod "pod-configmaps-1f904d80-82bf-4f5b-8d75-942c1777af0b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050386675s Jan 6 13:20:16.799: INFO: Pod "pod-configmaps-1f904d80-82bf-4f5b-8d75-942c1777af0b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061629432s Jan 6 13:20:18.807: INFO: Pod "pod-configmaps-1f904d80-82bf-4f5b-8d75-942c1777af0b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.069719782s STEP: Saw pod success Jan 6 13:20:18.807: INFO: Pod "pod-configmaps-1f904d80-82bf-4f5b-8d75-942c1777af0b" satisfied condition "success or failure" Jan 6 13:20:18.813: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1f904d80-82bf-4f5b-8d75-942c1777af0b container env-test: STEP: delete the pod Jan 6 13:20:18.892: INFO: Waiting for pod pod-configmaps-1f904d80-82bf-4f5b-8d75-942c1777af0b to disappear Jan 6 13:20:18.909: INFO: Pod pod-configmaps-1f904d80-82bf-4f5b-8d75-942c1777af0b no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:20:18.909: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4580" for this suite. Jan 6 13:20:24.946: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:20:25.057: INFO: namespace secrets-4580 deletion completed in 6.141753839s • [SLOW TEST:14.413 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:20:25.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 6 13:20:25.146: INFO: Waiting up to 5m0s for pod "pod-26fbf5fe-96a6-4c6f-b155-43c7ad2af0d6" in namespace "emptydir-2230" to be "success or failure" Jan 6 13:20:25.171: INFO: Pod "pod-26fbf5fe-96a6-4c6f-b155-43c7ad2af0d6": Phase="Pending", Reason="", readiness=false. Elapsed: 25.180121ms Jan 6 13:20:27.179: INFO: Pod "pod-26fbf5fe-96a6-4c6f-b155-43c7ad2af0d6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033263774s Jan 6 13:20:29.192: INFO: Pod "pod-26fbf5fe-96a6-4c6f-b155-43c7ad2af0d6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045619216s Jan 6 13:20:31.204: INFO: Pod "pod-26fbf5fe-96a6-4c6f-b155-43c7ad2af0d6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05768462s Jan 6 13:20:33.212: INFO: Pod "pod-26fbf5fe-96a6-4c6f-b155-43c7ad2af0d6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.065893033s Jan 6 13:20:35.219: INFO: Pod "pod-26fbf5fe-96a6-4c6f-b155-43c7ad2af0d6": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.073539075s STEP: Saw pod success Jan 6 13:20:35.220: INFO: Pod "pod-26fbf5fe-96a6-4c6f-b155-43c7ad2af0d6" satisfied condition "success or failure" Jan 6 13:20:35.223: INFO: Trying to get logs from node iruya-node pod pod-26fbf5fe-96a6-4c6f-b155-43c7ad2af0d6 container test-container: STEP: delete the pod Jan 6 13:20:35.295: INFO: Waiting for pod pod-26fbf5fe-96a6-4c6f-b155-43c7ad2af0d6 to disappear Jan 6 13:20:35.375: INFO: Pod pod-26fbf5fe-96a6-4c6f-b155-43c7ad2af0d6 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:20:35.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2230" for this suite. Jan 6 13:20:41.446: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:20:41.581: INFO: namespace emptydir-2230 deletion completed in 6.191231954s • [SLOW TEST:16.524 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:20:41.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:21:30.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "container-runtime-2869" for this suite. Jan 6 13:21:36.945: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:21:37.041: INFO: namespace container-runtime-2869 deletion completed in 6.142257789s • [SLOW TEST:55.459 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:21:37.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-map-9d9f258b-c3e1-4571-a253-6d7e9863ffe6 STEP: Creating a pod to test consume configMaps Jan 6 13:21:37.175: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e4e947f3-e038-4837-9109-d5aec1894782" in namespace "projected-3117" to be "success or failure" Jan 6 13:21:37.214: INFO: Pod "pod-projected-configmaps-e4e947f3-e038-4837-9109-d5aec1894782": Phase="Pending", Reason="", readiness=false. Elapsed: 38.694755ms Jan 6 13:21:39.226: INFO: Pod "pod-projected-configmaps-e4e947f3-e038-4837-9109-d5aec1894782": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050835422s Jan 6 13:21:41.239: INFO: Pod "pod-projected-configmaps-e4e947f3-e038-4837-9109-d5aec1894782": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063713688s Jan 6 13:21:43.246: INFO: Pod "pod-projected-configmaps-e4e947f3-e038-4837-9109-d5aec1894782": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071270657s Jan 6 13:21:45.255: INFO: Pod "pod-projected-configmaps-e4e947f3-e038-4837-9109-d5aec1894782": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.080366894s STEP: Saw pod success Jan 6 13:21:45.255: INFO: Pod "pod-projected-configmaps-e4e947f3-e038-4837-9109-d5aec1894782" satisfied condition "success or failure" Jan 6 13:21:45.260: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-e4e947f3-e038-4837-9109-d5aec1894782 container projected-configmap-volume-test: STEP: delete the pod Jan 6 13:21:45.314: INFO: Waiting for pod pod-projected-configmaps-e4e947f3-e038-4837-9109-d5aec1894782 to disappear Jan 6 13:21:45.321: INFO: Pod pod-projected-configmaps-e4e947f3-e038-4837-9109-d5aec1894782 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:21:45.322: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3117" for this suite. Jan 6 13:21:51.354: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:21:51.471: INFO: namespace projected-3117 deletion completed in 6.143953457s • [SLOW TEST:14.430 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:21:51.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1685 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 6 13:21:51.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-6539' Jan 6 13:21:53.417: INFO: stderr: "" Jan 6 13:21:53.417: INFO: stdout: "pod/e2e-test-nginx-pod created\n" STEP: verifying the pod e2e-test-nginx-pod was created [AfterEach] [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1690 Jan 6 13:21:53.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-6539' Jan 6 13:21:58.101: INFO: stderr: "" Jan 6 13:21:58.101: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:21:58.102: INFO: Waiting up to 
3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6539" for this suite. Jan 6 13:22:04.181: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:22:04.341: INFO: namespace kubectl-6539 deletion completed in 6.214106687s • [SLOW TEST:12.870 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:22:04.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook Jan 6 13:22:20.550: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 6 13:22:20.583: INFO: Pod pod-with-prestop-exec-hook still exists Jan 6 13:22:22.584: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 6 13:22:22.601: INFO: Pod pod-with-prestop-exec-hook still exists Jan 6 13:22:24.584: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 6 13:22:24.596: INFO: Pod pod-with-prestop-exec-hook still exists Jan 6 13:22:26.584: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 6 13:22:26.598: INFO: Pod pod-with-prestop-exec-hook still exists Jan 6 13:22:28.584: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 6 13:22:28.597: INFO: Pod pod-with-prestop-exec-hook still exists Jan 6 13:22:30.584: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 6 13:22:30.605: INFO: Pod pod-with-prestop-exec-hook still exists Jan 6 13:22:32.584: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 6 13:22:32.611: INFO: Pod pod-with-prestop-exec-hook still exists Jan 6 13:22:34.584: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 6 13:22:34.627: INFO: Pod pod-with-prestop-exec-hook still exists Jan 6 13:22:36.584: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 6 13:22:36.597: INFO: Pod pod-with-prestop-exec-hook still exists Jan 6 13:22:38.584: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 6 13:22:38.590: INFO: Pod pod-with-prestop-exec-hook still exists Jan 6 13:22:40.584: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 6 
13:22:40.592: INFO: Pod pod-with-prestop-exec-hook still exists Jan 6 13:22:42.584: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 6 13:22:42.719: INFO: Pod pod-with-prestop-exec-hook still exists Jan 6 13:22:44.584: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 6 13:22:44.600: INFO: Pod pod-with-prestop-exec-hook still exists Jan 6 13:22:46.584: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear Jan 6 13:22:46.645: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:22:46.660: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3848" for this suite. Jan 6 13:23:08.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:23:08.792: INFO: namespace container-lifecycle-hook-3848 deletion completed in 22.127686402s • [SLOW TEST:64.450 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:23:08.793: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating projection with secret that has name projected-secret-test-bfbd38b8-7f00-4fa0-a164-c0ff1f779b7e STEP: Creating a pod to test consume secrets Jan 6 13:23:08.890: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-2a95f593-f8ab-4417-83f6-49eeb47d0706" in namespace "projected-8538" to be "success or failure" Jan 6 13:23:08.928: INFO: Pod "pod-projected-secrets-2a95f593-f8ab-4417-83f6-49eeb47d0706": Phase="Pending", Reason="", readiness=false. Elapsed: 37.367107ms Jan 6 13:23:10.937: INFO: Pod "pod-projected-secrets-2a95f593-f8ab-4417-83f6-49eeb47d0706": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046974038s Jan 6 13:23:12.947: INFO: Pod "pod-projected-secrets-2a95f593-f8ab-4417-83f6-49eeb47d0706": Phase="Pending", Reason="", readiness=false. Elapsed: 4.056296187s Jan 6 13:23:14.960: INFO: Pod "pod-projected-secrets-2a95f593-f8ab-4417-83f6-49eeb47d0706": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.069848584s Jan 6 13:23:16.976: INFO: Pod "pod-projected-secrets-2a95f593-f8ab-4417-83f6-49eeb47d0706": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085419667s STEP: Saw pod success Jan 6 13:23:16.976: INFO: Pod "pod-projected-secrets-2a95f593-f8ab-4417-83f6-49eeb47d0706" satisfied condition "success or failure" Jan 6 13:23:16.984: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-2a95f593-f8ab-4417-83f6-49eeb47d0706 container projected-secret-volume-test: STEP: delete the pod Jan 6 13:23:17.085: INFO: Waiting for pod pod-projected-secrets-2a95f593-f8ab-4417-83f6-49eeb47d0706 to disappear Jan 6 13:23:17.095: INFO: Pod pod-projected-secrets-2a95f593-f8ab-4417-83f6-49eeb47d0706 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:23:17.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8538" for this suite. Jan 6 13:23:23.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:23:23.285: INFO: namespace projected-8538 deletion completed in 6.186054264s • [SLOW TEST:14.493 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:23:23.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 6 13:23:23.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9182' Jan 6 13:23:23.966: INFO: stderr: "" Jan 6 13:23:23.966: INFO: stdout: "replicationcontroller/redis-master created\n" Jan 6 13:23:23.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9182' Jan 6 13:23:24.471: INFO: stderr: "" Jan 6 13:23:24.471: INFO: stdout: "service/redis-master created\n" STEP: Waiting for Redis master to start. 
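(The two `kubectl create -f -` invocations above pipe their manifests over stdin, so the manifests themselves never appear in the log. Reconstructed from the describe output further down — labels app=redis and role=master, selector app=redis,role=master, image gcr.io/kubernetes-e2e-test-images/redis:1.0, and a service targetPort that names the container port redis-server — a matching rc/service pair would look roughly like the following sketch; field values not visible in the log are assumptions:)

    kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9182 <<'EOF'
    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: redis-master
      labels:
        app: redis
        role: master
    spec:
      replicas: 1
      selector:          # RC selectors are plain maps, not matchLabels
        app: redis
        role: master
      template:
        metadata:
          labels:
            app: redis
            role: master
        spec:
          containers:
          - name: redis-master
            image: gcr.io/kubernetes-e2e-test-images/redis:1.0
            ports:
            - name: redis-server   # referenced by the Service's targetPort below
              containerPort: 6379
    EOF
    kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9182 <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: redis-master
      labels:
        app: redis
        role: master
    spec:
      selector:
        app: redis
        role: master
      ports:
      - port: 6379
        targetPort: redis-server   # resolves to the named container port
    EOF

(The polling that follows waits for the RC's single pod to become Running before the describe checks begin.)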
Jan 6 13:23:25.482: INFO: Selector matched 1 pods for map[app:redis] Jan 6 13:23:25.482: INFO: Found 0 / 1 Jan 6 13:23:26.489: INFO: Selector matched 1 pods for map[app:redis] Jan 6 13:23:26.490: INFO: Found 0 / 1 Jan 6 13:23:27.495: INFO: Selector matched 1 pods for map[app:redis] Jan 6 13:23:27.495: INFO: Found 0 / 1 Jan 6 13:23:28.493: INFO: Selector matched 1 pods for map[app:redis] Jan 6 13:23:28.493: INFO: Found 0 / 1 Jan 6 13:23:29.479: INFO: Selector matched 1 pods for map[app:redis] Jan 6 13:23:29.479: INFO: Found 0 / 1 Jan 6 13:23:30.488: INFO: Selector matched 1 pods for map[app:redis] Jan 6 13:23:30.488: INFO: Found 0 / 1 Jan 6 13:23:31.482: INFO: Selector matched 1 pods for map[app:redis] Jan 6 13:23:31.482: INFO: Found 1 / 1 Jan 6 13:23:31.482: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 Jan 6 13:23:31.487: INFO: Selector matched 1 pods for map[app:redis] Jan 6 13:23:31.487: INFO: ForEach: Found 1 pods from the filter. Now looping through them. Jan 6 13:23:31.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-gbq2h --namespace=kubectl-9182' Jan 6 13:23:31.734: INFO: stderr: "" Jan 6 13:23:31.734: INFO: stdout: "Name: redis-master-gbq2h\nNamespace: kubectl-9182\nPriority: 0\nNode: iruya-node/10.96.3.65\nStart Time: Mon, 06 Jan 2020 13:23:24 +0000\nLabels: app=redis\n role=master\nAnnotations: \nStatus: Running\nIP: 10.44.0.1\nControlled By: ReplicationController/redis-master\nContainers:\n redis-master:\n Container ID: docker://637ecacdbe5ecdd8f97b98a02cb21647f5242e90e1ec1a8a5903845e6341155e\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Image ID: docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Mon, 06 Jan 2020 13:23:29 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-8djtn (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-8djtn:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-8djtn\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled 7s default-scheduler Successfully assigned kubectl-9182/redis-master-gbq2h to iruya-node\n Normal Pulled 4s kubelet, iruya-node Container image \"gcr.io/kubernetes-e2e-test-images/redis:1.0\" already present on machine\n Normal Created 2s kubelet, iruya-node Created container redis-master\n Normal Started 2s kubelet, iruya-node Started container redis-master\n" Jan 6 13:23:31.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9182' Jan 6 13:23:31.942: INFO: stderr: "" Jan 6 13:23:31.943: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9182\nSelector: app=redis,role=master\nLabels: app=redis\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=redis\n role=master\n Containers:\n redis-master:\n Image: gcr.io/kubernetes-e2e-test-images/redis:1.0\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From 
Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 7s replication-controller Created pod: redis-master-gbq2h\n" Jan 6 13:23:31.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9182' Jan 6 13:23:32.069: INFO: stderr: "" Jan 6 13:23:32.069: INFO: stdout: "Name: redis-master\nNamespace: kubectl-9182\nLabels: app=redis\n role=master\nAnnotations: \nSelector: app=redis,role=master\nType: ClusterIP\nIP: 10.108.207.221\nPort: 6379/TCP\nTargetPort: redis-server/TCP\nEndpoints: 10.44.0.1:6379\nSession Affinity: None\nEvents: \n" Jan 6 13:23:32.075: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node iruya-node' Jan 6 13:23:32.246: INFO: stderr: "" Jan 6 13:23:32.246: INFO: stdout: "Name: iruya-node\nRoles: \nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=iruya-node\n kubernetes.io/os=linux\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 04 Aug 2019 09:01:39 +0000\nTaints: \nUnschedulable: false\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n NetworkUnavailable False Sat, 12 Oct 2019 11:56:49 +0000 Sat, 12 Oct 2019 11:56:49 +0000 WeaveIsUp Weave pod has set this\n MemoryPressure False Mon, 06 Jan 2020 13:23:06 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Mon, 06 Jan 2020 13:23:06 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Mon, 06 Jan 2020 13:23:06 +0000 Sun, 04 Aug 2019 09:01:39 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Mon, 06 Jan 2020 13:23:06 +0000 Sun, 04 Aug 2019 09:02:19 +0000 KubeletReady kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n InternalIP: 10.96.3.65\n Hostname: iruya-node\nCapacity:\n cpu: 4\n ephemeral-storage: 20145724Ki\n hugepages-2Mi: 0\n memory: 4039076Ki\n pods: 110\nAllocatable:\n cpu: 4\n ephemeral-storage: 18566299208\n hugepages-2Mi: 0\n memory: 3936676Ki\n pods: 110\nSystem Info:\n Machine ID: f573dcf04d6f4a87856a35d266a2fa7a\n System UUID: F573DCF0-4D6F-4A87-856A-35D266A2FA7A\n Boot ID: 8baf4beb-8391-43e6-b17b-b1e184b5370a\n Kernel Version: 4.15.0-52-generic\n OS Image: Ubuntu 18.04.2 LTS\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: docker://18.9.7\n Kubelet Version: v1.15.1\n Kube-Proxy Version: v1.15.1\nPodCIDR: 10.96.1.0/24\nNon-terminated Pods: (3 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system kube-proxy-976zl 0 (0%) 0 (0%) 0 (0%) 0 (0%) 155d\n kube-system weave-net-rlp57 20m (0%) 0 (0%) 0 (0%) 0 (0%) 86d\n kubectl-9182 redis-master-gbq2h 0 (0%) 0 (0%) 0 (0%) 0 (0%) 9s\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 20m (0%) 0 (0%)\n memory 0 (0%) 0 (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" Jan 6 13:23:32.247: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9182' Jan 6 13:23:32.377: INFO: stderr: "" Jan 6 13:23:32.377: INFO: stdout: "Name: kubectl-9182\nLabels: e2e-framework=kubectl\n e2e-run=ce9e7ac3-6b27-499d-a41e-0068bec19cab\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo resource limits.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:23:32.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9182" for this suite. 
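(Replayed in order, the five describe calls this test issues are the ones below, copied from the log; only the generated pod-name suffix — gbq2h here — differs between runs:)

    kubectl --kubeconfig=/root/.kube/config describe pod redis-master-gbq2h --namespace=kubectl-9182
    kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-9182
    kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-9182
    kubectl --kubeconfig=/root/.kube/config describe node iruya-node
    kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-9182

(The test then greps each stdout for the fields shown above — pod IP, RC replica counts, service endpoints, node conditions, namespace status — which is why the full describe payloads are logged verbatim.)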
Jan 6 13:23:54.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:23:54.611: INFO: namespace kubectl-9182 deletion completed in 22.227210162s • [SLOW TEST:31.326 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:23:54.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:24:02.791: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-8445" for this suite. 
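(The body of this Kubelet test logs nothing between setup and teardown; per the spec name, what it exercises is a busybox pod whose command writes to stdout, followed by a check that the container log contains that output. A rough stand-in is sketched below — the pod name, message, and image tag are assumptions, not values from this run:)

    kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubelet-test-8445 <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: busybox-scheduling-demo   # hypothetical; the real test generates a unique name
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: busybox
        command: ['sh', '-c', 'echo "Hello from busybox"']   # message is an assumption
    EOF
    # once the pod has run, its stdout should come back via the kubelet log endpoint:
    kubectl --kubeconfig=/root/.kube/config logs busybox-scheduling-demo --namespace=kubelet-test-8445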
Jan 6 13:24:44.829: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:24:44.969: INFO: namespace kubelet-test-8445 deletion completed in 42.171251768s • [SLOW TEST:50.357 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 when scheduling a busybox command in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40 should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ S ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:24:44.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 Jan 6 13:24:45.110: INFO: Pod name rollover-pod: Found 0 pods out of 1 Jan 6 13:24:50.121: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running Jan 6 13:24:52.138: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready Jan 6 13:24:54.144: INFO: Creating deployment "test-rollover-deployment" Jan 6 13:24:54.159: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations Jan 6 13:24:56.173: INFO: Check revision of new replica set for deployment "test-rollover-deployment" Jan 6 13:24:56.179: INFO: Ensure that both replica sets have 1 created replica Jan 6 13:24:56.183: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update Jan 6 13:24:56.190: INFO: Updating deployment test-rollover-deployment Jan 6 13:24:56.190: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller Jan 6 13:24:58.206: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 Jan 6 13:24:58.214: INFO: Make sure deployment "test-rollover-deployment" is complete Jan 6 13:24:58.223: INFO: all replica sets need to contain the pod-template-hash label Jan 6 13:24:58.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913896, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, 
Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 6 13:25:00.237: INFO: all replica sets need to contain the pod-template-hash label Jan 6 13:25:00.237: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913896, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 6 13:25:02.238: INFO: all replica sets need to contain the pod-template-hash label Jan 6 13:25:02.238: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913896, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 6 13:25:04.237: INFO: all replica sets need to contain the pod-template-hash label Jan 6 13:25:04.237: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913896, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 6 13:25:06.272: INFO: all replica sets need to contain the pod-template-hash label Jan 6 13:25:06.273: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913904, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 6 13:25:08.262: INFO: all replica sets need to contain the pod-template-hash label Jan 6 13:25:08.262: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913904, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 6 13:25:10.239: INFO: all replica sets need to contain the pod-template-hash label Jan 6 13:25:10.239: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913904, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 6 13:25:12.241: INFO: all replica sets need to contain the pod-template-hash label Jan 6 13:25:12.241: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913904, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 6 13:25:14.236: INFO: all replica sets need to contain the pod-template-hash label Jan 6 13:25:14.236: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913904, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713913894, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-854595fc44\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 6 13:25:16.240: INFO: Jan 6 13:25:16.240: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66 Jan 6 13:25:16.254: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment,GenerateName:,Namespace:deployment-3362,SelfLink:/apis/apps/v1/namespaces/deployment-3362/deployments/test-rollover-deployment,UID:1a3bdd65-f419-4898-9e94-fcc8fbf8606b,ResourceVersion:19522977,Generation:2,CreationTimestamp:2020-01-06 13:24:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-06 13:24:54 +0000 UTC 2020-01-06 13:24:54 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-06 13:25:14 +0000 UTC 2020-01-06 13:24:54 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rollover-deployment-854595fc44" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},} Jan 6 13:25:16.260: INFO: New ReplicaSet "test-rollover-deployment-854595fc44" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44,GenerateName:,Namespace:deployment-3362,SelfLink:/apis/apps/v1/namespaces/deployment-3362/replicasets/test-rollover-deployment-854595fc44,UID:aadc0914-5016-48a0-96be-28f1757edf2b,ResourceVersion:19522967,Generation:2,CreationTimestamp:2020-01-06 13:24:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1a3bdd65-f419-4898-9e94-fcc8fbf8606b 0xc000b11a47 0xc000b11a48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false 
false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},} Jan 6 13:25:16.260: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": Jan 6 13:25:16.260: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-controller,GenerateName:,Namespace:deployment-3362,SelfLink:/apis/apps/v1/namespaces/deployment-3362/replicasets/test-rollover-controller,UID:55d37178-ce5d-4701-a6c2-522638bbfe8b,ResourceVersion:19522976,Generation:2,CreationTimestamp:2020-01-06 13:24:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1a3bdd65-f419-4898-9e94-fcc8fbf8606b 0xc000b11977 0xc000b11978}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 6 13:25:16.260: INFO: 
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-9b8b997cf,GenerateName:,Namespace:deployment-3362,SelfLink:/apis/apps/v1/namespaces/deployment-3362/replicasets/test-rollover-deployment-9b8b997cf,UID:a6d60e28-9853-44f7-ae11-d2618fa99b49,ResourceVersion:19522931,Generation:2,CreationTimestamp:2020-01-06 13:24:54 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-rollover-deployment 1a3bdd65-f419-4898-9e94-fcc8fbf8606b 0xc000b11b10 0xc000b11b11}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 9b8b997cf,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},} Jan 6 13:25:16.268: INFO: Pod "test-rollover-deployment-854595fc44-twx7q" is available: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rollover-deployment-854595fc44-twx7q,GenerateName:test-rollover-deployment-854595fc44-,Namespace:deployment-3362,SelfLink:/api/v1/namespaces/deployment-3362/pods/test-rollover-deployment-854595fc44-twx7q,UID:43d63911-db5e-4e45-a790-3c413ab7e3f3,ResourceVersion:19522950,Generation:0,CreationTimestamp:2020-01-06 13:24:56 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: rollover-pod,pod-template-hash: 854595fc44,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rollover-deployment-854595fc44 
aadc0914-5016-48a0-96be-28f1757edf2b 0xc002972f17 0xc002972f18}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-g6jjw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-g6jjw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [{default-token-g6jjw true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc002972f90} {node.kubernetes.io/unreachable Exists NoExecute 0xc002972fb0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:24:56 +0000 UTC } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:25:04 +0000 UTC } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:25:04 +0000 UTC } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:24:56 +0000 UTC }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-06 13:24:56 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-06 13:25:03 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://1d80144b291b549f0c96d52c7f29e2f746c81aa07e2d06ad3048d94d56a24d56}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:25:16.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3362" for this suite. 
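(The rollover mechanics above follow from the strategy fields visible in the dumped spec — RollingUpdate with MaxUnavailable:0, MaxSurge:1, and MinReadySeconds:10: the new replica set may only ever add one surge pod, and that pod must stay Ready for 10 seconds before it counts as available, which is why the status poll repeats for roughly twenty seconds before the old replica sets drain. Note that the test swaps the entire pod template through the API — the container name changes from redis-slave (revision 1) to redis (revision 2) — so the CLI sketch below, which only changes the image, is an approximation of the same rollout, not the exact operation:)

    # approximate hand-driven equivalent of the test's template update
    kubectl --kubeconfig=/root/.kube/config set image deployment/test-rollover-deployment \
        redis-slave=gcr.io/kubernetes-e2e-test-images/redis:1.0 --namespace=deployment-3362
    # blocks until NewReplicaSetAvailable, mirroring the test's status polling
    kubectl --kubeconfig=/root/.kube/config rollout status deployment/test-rollover-deployment \
        --namespace=deployment-3362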
Jan 6 13:25:22.314: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:25:22.583: INFO: namespace deployment-3362 deletion completed in 6.308359576s • [SLOW TEST:37.613 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:25:22.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name configmap-test-upd-ebade3c1-f710-4d73-9bff-1ca6859e846a STEP: Creating the pod STEP: Updating configmap configmap-test-upd-ebade3c1-f710-4d73-9bff-1ca6859e846a STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:25:36.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6819" for this suite. 
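(What the step list above compresses is the kubelet's periodic volume sync: a configMap volume is projected from the API object, so updating the ConfigMap in place propagates into already-mounted pods without a restart — eventually, since the kubelet sync period plus cache propagation introduces a delay, which is why the test spends ~14 seconds "waiting to observe update in volume". A minimal repro under assumed names follows; the ConfigMap key, values, and pod name are all hypothetical:)

    kubectl create configmap test-upd --from-literal=data-1=value-1
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cm-watch   # hypothetical
    spec:
      containers:
      - name: view
        image: busybox
        command: ['sh', '-c', 'while true; do cat /etc/cm/data-1; echo; sleep 2; done']
        volumeMounts:
        - name: cm
          mountPath: /etc/cm
      volumes:
      - name: cm
        configMap:
          name: test-upd
    EOF
    # replace the ConfigMap in place; 1.15-era kubectl takes a boolean --dry-run
    kubectl create configmap test-upd --from-literal=data-1=value-2 --dry-run -o yaml | kubectl replace -f -
    # kubectl logs -f cm-watch should switch to value-2 after the next kubelet sync, with no pod restart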
Jan 6 13:25:58.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:25:59.115: INFO: namespace configmap-6819 deletion completed in 22.151078979s • [SLOW TEST:36.532 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:25:59.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7573, will wait for the garbage collector to delete the pods Jan 6 13:26:11.302: INFO: Deleting Job.batch foo took: 11.870239ms Jan 6 13:26:11.603: INFO: Terminating Job.batch foo pods took: 300.475134ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:26:56.809: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7573" for this suite. 
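(The notable detail in the Job teardown above is that deleting the Job does not remove its pods synchronously: the test deletes the Job object and then, as the log says, "will wait for the garbage collector to delete the pods" — that GC pass is the ~45-second gap before the namespace is destroyed. The CLI equivalent, with the job name taken from the log:)

    kubectl --kubeconfig=/root/.kube/config delete job foo --namespace=job-7573
    # the pods linger briefly with ownerReferences to the deleted Job until the GC reaps them
    kubectl --kubeconfig=/root/.kube/config get pods --namespace=job-7573 --watch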
Jan 6 13:27:02.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:27:02.964: INFO: namespace job-7573 deletion completed in 6.151948313s • [SLOW TEST:63.849 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:27:02.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75 STEP: Creating service test in namespace statefulset-1085 [It] Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating stateful set ss in namespace statefulset-1085 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1085 Jan 6 13:27:03.081: INFO: Found 0 stateful pods, waiting for 1 Jan 6 13:27:13.098: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 6 13:27:13.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1085 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 6 13:27:13.813: INFO: stderr: "I0106 13:27:13.328408 378 log.go:172] (0xc0008c0420) (0xc0007e2640) Create stream\nI0106 13:27:13.328712 378 log.go:172] (0xc0008c0420) (0xc0007e2640) Stream added, broadcasting: 1\nI0106 13:27:13.337386 378 log.go:172] (0xc0008c0420) Reply frame received for 1\nI0106 13:27:13.337467 378 log.go:172] (0xc0008c0420) (0xc0009ee000) Create stream\nI0106 13:27:13.337485 378 log.go:172] (0xc0008c0420) (0xc0009ee000) Stream added, broadcasting: 3\nI0106 13:27:13.340678 378 log.go:172] (0xc0008c0420) Reply frame received for 3\nI0106 13:27:13.340903 378 log.go:172] (0xc0008c0420) (0xc0007e26e0) Create stream\nI0106 13:27:13.340925 378 log.go:172] (0xc0008c0420) (0xc0007e26e0) Stream added, broadcasting: 5\nI0106 13:27:13.342813 378 log.go:172] (0xc0008c0420) Reply frame received for 5\nI0106 13:27:13.520894 378 log.go:172] (0xc0008c0420) Data frame received for 5\nI0106 13:27:13.520949 378 log.go:172] (0xc0007e26e0) (5) Data frame handling\nI0106 13:27:13.520965 378 log.go:172] (0xc0007e26e0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0106 13:27:13.595554 378 log.go:172] (0xc0008c0420) Data frame received 
for 3\nI0106 13:27:13.595624 378 log.go:172] (0xc0009ee000) (3) Data frame handling\nI0106 13:27:13.595658 378 log.go:172] (0xc0009ee000) (3) Data frame sent\nI0106 13:27:13.796212 378 log.go:172] (0xc0008c0420) (0xc0009ee000) Stream removed, broadcasting: 3\nI0106 13:27:13.796927 378 log.go:172] (0xc0008c0420) Data frame received for 1\nI0106 13:27:13.797349 378 log.go:172] (0xc0008c0420) (0xc0007e26e0) Stream removed, broadcasting: 5\nI0106 13:27:13.797722 378 log.go:172] (0xc0007e2640) (1) Data frame handling\nI0106 13:27:13.797770 378 log.go:172] (0xc0007e2640) (1) Data frame sent\nI0106 13:27:13.797811 378 log.go:172] (0xc0008c0420) (0xc0007e2640) Stream removed, broadcasting: 1\nI0106 13:27:13.797846 378 log.go:172] (0xc0008c0420) Go away received\nI0106 13:27:13.799867 378 log.go:172] (0xc0008c0420) (0xc0007e2640) Stream removed, broadcasting: 1\nI0106 13:27:13.799934 378 log.go:172] (0xc0008c0420) (0xc0009ee000) Stream removed, broadcasting: 3\nI0106 13:27:13.799950 378 log.go:172] (0xc0008c0420) (0xc0007e26e0) Stream removed, broadcasting: 5\n" Jan 6 13:27:13.814: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 6 13:27:13.814: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 6 13:27:13.828: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Jan 6 13:27:23.842: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 6 13:27:23.842: INFO: Waiting for statefulset status.replicas updated to 0 Jan 6 13:27:23.887: INFO: POD NODE PHASE GRACE CONDITIONS Jan 6 13:27:23.887: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:14 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:03 +0000 UTC }] Jan 6 13:27:23.887: INFO: Jan 6 13:27:23.887: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 6 13:27:25.588: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990497218s Jan 6 13:27:26.848: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.289072307s Jan 6 13:27:27.873: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.028810007s Jan 6 13:27:28.882: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.004989864s Jan 6 13:27:30.105: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.995526731s Jan 6 13:27:31.264: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.772792277s Jan 6 13:27:32.320: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.613123834s Jan 6 13:27:33.328: INFO: Verifying statefulset ss doesn't scale past 3 for another 558.01872ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1085 Jan 6 13:27:34.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1085 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 6 13:27:34.995: INFO: stderr: "I0106 13:27:34.631762 397 log.go:172] (0xc00012a790) (0xc0003cc6e0) Create stream\nI0106 13:27:34.632339 397 log.go:172] (0xc00012a790) (0xc0003cc6e0) Stream 
added, broadcasting: 1\nI0106 13:27:34.648871 397 log.go:172] (0xc00012a790) Reply frame received for 1\nI0106 13:27:34.656633 397 log.go:172] (0xc00012a790) (0xc0007ec000) Create stream\nI0106 13:27:34.656864 397 log.go:172] (0xc00012a790) (0xc0007ec000) Stream added, broadcasting: 3\nI0106 13:27:34.661277 397 log.go:172] (0xc00012a790) Reply frame received for 3\nI0106 13:27:34.661441 397 log.go:172] (0xc00012a790) (0xc0007ec0a0) Create stream\nI0106 13:27:34.661467 397 log.go:172] (0xc00012a790) (0xc0007ec0a0) Stream added, broadcasting: 5\nI0106 13:27:34.666846 397 log.go:172] (0xc00012a790) Reply frame received for 5\nI0106 13:27:34.856855 397 log.go:172] (0xc00012a790) Data frame received for 3\nI0106 13:27:34.856958 397 log.go:172] (0xc0007ec000) (3) Data frame handling\nI0106 13:27:34.856996 397 log.go:172] (0xc0007ec000) (3) Data frame sent\nI0106 13:27:34.857035 397 log.go:172] (0xc00012a790) Data frame received for 5\nI0106 13:27:34.857057 397 log.go:172] (0xc0007ec0a0) (5) Data frame handling\nI0106 13:27:34.857092 397 log.go:172] (0xc0007ec0a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0106 13:27:34.984982 397 log.go:172] (0xc00012a790) (0xc0007ec0a0) Stream removed, broadcasting: 5\nI0106 13:27:34.985095 397 log.go:172] (0xc00012a790) Data frame received for 1\nI0106 13:27:34.985129 397 log.go:172] (0xc00012a790) (0xc0007ec000) Stream removed, broadcasting: 3\nI0106 13:27:34.985211 397 log.go:172] (0xc0003cc6e0) (1) Data frame handling\nI0106 13:27:34.985246 397 log.go:172] (0xc0003cc6e0) (1) Data frame sent\nI0106 13:27:34.985263 397 log.go:172] (0xc00012a790) (0xc0003cc6e0) Stream removed, broadcasting: 1\nI0106 13:27:34.985278 397 log.go:172] (0xc00012a790) Go away received\nI0106 13:27:34.986058 397 log.go:172] (0xc00012a790) (0xc0003cc6e0) Stream removed, broadcasting: 1\nI0106 13:27:34.986077 397 log.go:172] (0xc00012a790) (0xc0007ec000) Stream removed, broadcasting: 3\nI0106 13:27:34.986087 397 log.go:172] (0xc00012a790) (0xc0007ec0a0) Stream removed, broadcasting: 5\n" Jan 6 13:27:34.995: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 6 13:27:34.995: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 6 13:27:34.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1085 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 6 13:27:35.443: INFO: stderr: "I0106 13:27:35.159121 419 log.go:172] (0xc000a86580) (0xc000544a00) Create stream\nI0106 13:27:35.159433 419 log.go:172] (0xc000a86580) (0xc000544a00) Stream added, broadcasting: 1\nI0106 13:27:35.163794 419 log.go:172] (0xc000a86580) Reply frame received for 1\nI0106 13:27:35.163860 419 log.go:172] (0xc000a86580) (0xc000a82000) Create stream\nI0106 13:27:35.163883 419 log.go:172] (0xc000a86580) (0xc000a82000) Stream added, broadcasting: 3\nI0106 13:27:35.164823 419 log.go:172] (0xc000a86580) Reply frame received for 3\nI0106 13:27:35.164864 419 log.go:172] (0xc000a86580) (0xc0007e4000) Create stream\nI0106 13:27:35.164875 419 log.go:172] (0xc000a86580) (0xc0007e4000) Stream added, broadcasting: 5\nI0106 13:27:35.166158 419 log.go:172] (0xc000a86580) Reply frame received for 5\nI0106 13:27:35.295625 419 log.go:172] (0xc000a86580) Data frame received for 5\nI0106 13:27:35.295723 419 log.go:172] (0xc0007e4000) (5) Data frame handling\nI0106 13:27:35.295749 419 log.go:172] (0xc0007e4000) (5) Data frame 
sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0106 13:27:35.350931 419 log.go:172] (0xc000a86580) Data frame received for 5\nI0106 13:27:35.351006 419 log.go:172] (0xc0007e4000) (5) Data frame handling\nI0106 13:27:35.351038 419 log.go:172] (0xc0007e4000) (5) Data frame sent\nmv: can't rename '/tmp/index.html': No such file or directory\nI0106 13:27:35.351065 419 log.go:172] (0xc000a86580) Data frame received for 5\nI0106 13:27:35.351082 419 log.go:172] (0xc0007e4000) (5) Data frame handling\nI0106 13:27:35.351097 419 log.go:172] (0xc0007e4000) (5) Data frame sent\n+ true\nI0106 13:27:35.351131 419 log.go:172] (0xc000a86580) Data frame received for 3\nI0106 13:27:35.351206 419 log.go:172] (0xc000a82000) (3) Data frame handling\nI0106 13:27:35.351225 419 log.go:172] (0xc000a82000) (3) Data frame sent\nI0106 13:27:35.430251 419 log.go:172] (0xc000a86580) Data frame received for 1\nI0106 13:27:35.430357 419 log.go:172] (0xc000a86580) (0xc000a82000) Stream removed, broadcasting: 3\nI0106 13:27:35.430413 419 log.go:172] (0xc000544a00) (1) Data frame handling\nI0106 13:27:35.430444 419 log.go:172] (0xc000544a00) (1) Data frame sent\nI0106 13:27:35.430631 419 log.go:172] (0xc000a86580) (0xc0007e4000) Stream removed, broadcasting: 5\nI0106 13:27:35.430671 419 log.go:172] (0xc000a86580) (0xc000544a00) Stream removed, broadcasting: 1\nI0106 13:27:35.430694 419 log.go:172] (0xc000a86580) Go away received\nI0106 13:27:35.431554 419 log.go:172] (0xc000a86580) (0xc000544a00) Stream removed, broadcasting: 1\nI0106 13:27:35.431569 419 log.go:172] (0xc000a86580) (0xc000a82000) Stream removed, broadcasting: 3\nI0106 13:27:35.431575 419 log.go:172] (0xc000a86580) (0xc0007e4000) Stream removed, broadcasting: 5\n" Jan 6 13:27:35.443: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 6 13:27:35.443: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 6 13:27:35.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1085 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true' Jan 6 13:27:36.004: INFO: stderr: "I0106 13:27:35.677364 438 log.go:172] (0xc000ac0000) (0xc0009a8140) Create stream\nI0106 13:27:35.677574 438 log.go:172] (0xc000ac0000) (0xc0009a8140) Stream added, broadcasting: 1\nI0106 13:27:35.684767 438 log.go:172] (0xc000ac0000) Reply frame received for 1\nI0106 13:27:35.684817 438 log.go:172] (0xc000ac0000) (0xc0007a0fa0) Create stream\nI0106 13:27:35.684829 438 log.go:172] (0xc000ac0000) (0xc0007a0fa0) Stream added, broadcasting: 3\nI0106 13:27:35.685993 438 log.go:172] (0xc000ac0000) Reply frame received for 3\nI0106 13:27:35.686023 438 log.go:172] (0xc000ac0000) (0xc0002820a0) Create stream\nI0106 13:27:35.686035 438 log.go:172] (0xc000ac0000) (0xc0002820a0) Stream added, broadcasting: 5\nI0106 13:27:35.688191 438 log.go:172] (0xc000ac0000) Reply frame received for 5\nI0106 13:27:35.821510 438 log.go:172] (0xc000ac0000) Data frame received for 5\nI0106 13:27:35.821710 438 log.go:172] (0xc0002820a0) (5) Data frame handling\nI0106 13:27:35.821769 438 log.go:172] (0xc0002820a0) (5) Data frame sent\nI0106 13:27:35.821779 438 log.go:172] (0xc000ac0000) Data frame received for 5\nI0106 13:27:35.821816 438 log.go:172] (0xc0002820a0) (5) Data frame handling\nI0106 13:27:35.821832 438 log.go:172] (0xc000ac0000) Data frame received for 3\nI0106 13:27:35.821838 438 log.go:172] (0xc0007a0fa0) (3) Data frame 
handling\nI0106 13:27:35.821858 438 log.go:172] (0xc0007a0fa0) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0106 13:27:35.822257 438 log.go:172] (0xc0002820a0) (5) Data frame sent\nI0106 13:27:35.992724 438 log.go:172] (0xc000ac0000) Data frame received for 1\nI0106 13:27:35.992827 438 log.go:172] (0xc000ac0000) (0xc0002820a0) Stream removed, broadcasting: 5\nI0106 13:27:35.992947 438 log.go:172] (0xc000ac0000) (0xc0007a0fa0) Stream removed, broadcasting: 3\nI0106 13:27:35.993083 438 log.go:172] (0xc0009a8140) (1) Data frame handling\nI0106 13:27:35.993170 438 log.go:172] (0xc0009a8140) (1) Data frame sent\nI0106 13:27:35.993211 438 log.go:172] (0xc000ac0000) (0xc0009a8140) Stream removed, broadcasting: 1\nI0106 13:27:35.993226 438 log.go:172] (0xc000ac0000) Go away received\nI0106 13:27:35.994435 438 log.go:172] (0xc000ac0000) (0xc0009a8140) Stream removed, broadcasting: 1\nI0106 13:27:35.994455 438 log.go:172] (0xc000ac0000) (0xc0007a0fa0) Stream removed, broadcasting: 3\nI0106 13:27:35.994470 438 log.go:172] (0xc000ac0000) (0xc0002820a0) Stream removed, broadcasting: 5\n" Jan 6 13:27:36.004: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n" Jan 6 13:27:36.004: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: '/tmp/index.html' -> '/usr/share/nginx/html/index.html' Jan 6 13:27:36.012: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 6 13:27:36.012: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 6 13:27:36.012: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 6 13:27:36.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1085 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 6 13:27:36.649: INFO: stderr: "I0106 13:27:36.176032 459 log.go:172] (0xc000116dc0) (0xc0002de820) Create stream\nI0106 13:27:36.176212 459 log.go:172] (0xc000116dc0) (0xc0002de820) Stream added, broadcasting: 1\nI0106 13:27:36.180914 459 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0106 13:27:36.180968 459 log.go:172] (0xc000116dc0) (0xc000842000) Create stream\nI0106 13:27:36.181003 459 log.go:172] (0xc000116dc0) (0xc000842000) Stream added, broadcasting: 3\nI0106 13:27:36.181990 459 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0106 13:27:36.182012 459 log.go:172] (0xc000116dc0) (0xc00064e280) Create stream\nI0106 13:27:36.182024 459 log.go:172] (0xc000116dc0) (0xc00064e280) Stream added, broadcasting: 5\nI0106 13:27:36.183173 459 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0106 13:27:36.350528 459 log.go:172] (0xc000116dc0) Data frame received for 5\nI0106 13:27:36.350775 459 log.go:172] (0xc00064e280) (5) Data frame handling\nI0106 13:27:36.350828 459 log.go:172] (0xc000116dc0) Data frame received for 3\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0106 13:27:36.350880 459 log.go:172] (0xc000842000) (3) Data frame handling\nI0106 13:27:36.350891 459 log.go:172] (0xc00064e280) (5) Data frame sent\nI0106 13:27:36.350927 459 log.go:172] (0xc000842000) (3) Data frame sent\nI0106 13:27:36.631132 459 log.go:172] (0xc000116dc0) (0xc00064e280) Stream removed, broadcasting: 5\nI0106 13:27:36.631498 459 log.go:172] (0xc000116dc0) Data frame received for 1\nI0106 
13:27:36.631542 459 log.go:172] (0xc000116dc0) (0xc000842000) Stream removed, broadcasting: 3\nI0106 13:27:36.631621 459 log.go:172] (0xc0002de820) (1) Data frame handling\nI0106 13:27:36.631668 459 log.go:172] (0xc0002de820) (1) Data frame sent\nI0106 13:27:36.631700 459 log.go:172] (0xc000116dc0) (0xc0002de820) Stream removed, broadcasting: 1\nI0106 13:27:36.631749 459 log.go:172] (0xc000116dc0) Go away received\nI0106 13:27:36.634224 459 log.go:172] (0xc000116dc0) (0xc0002de820) Stream removed, broadcasting: 1\nI0106 13:27:36.634403 459 log.go:172] (0xc000116dc0) (0xc000842000) Stream removed, broadcasting: 3\nI0106 13:27:36.634410 459 log.go:172] (0xc000116dc0) (0xc00064e280) Stream removed, broadcasting: 5\n" Jan 6 13:27:36.649: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 6 13:27:36.649: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 6 13:27:36.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1085 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 6 13:27:36.989: INFO: stderr: "I0106 13:27:36.784977 476 log.go:172] (0xc000118dc0) (0xc000304820) Create stream\nI0106 13:27:36.785160 476 log.go:172] (0xc000118dc0) (0xc000304820) Stream added, broadcasting: 1\nI0106 13:27:36.787712 476 log.go:172] (0xc000118dc0) Reply frame received for 1\nI0106 13:27:36.787751 476 log.go:172] (0xc000118dc0) (0xc0009d2000) Create stream\nI0106 13:27:36.787761 476 log.go:172] (0xc000118dc0) (0xc0009d2000) Stream added, broadcasting: 3\nI0106 13:27:36.788521 476 log.go:172] (0xc000118dc0) Reply frame received for 3\nI0106 13:27:36.788539 476 log.go:172] (0xc000118dc0) (0xc0003048c0) Create stream\nI0106 13:27:36.788545 476 log.go:172] (0xc000118dc0) (0xc0003048c0) Stream added, broadcasting: 5\nI0106 13:27:36.789565 476 log.go:172] (0xc000118dc0) Reply frame received for 5\nI0106 13:27:36.862033 476 log.go:172] (0xc000118dc0) Data frame received for 5\nI0106 13:27:36.862113 476 log.go:172] (0xc0003048c0) (5) Data frame handling\nI0106 13:27:36.862149 476 log.go:172] (0xc0003048c0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0106 13:27:36.897154 476 log.go:172] (0xc000118dc0) Data frame received for 3\nI0106 13:27:36.897266 476 log.go:172] (0xc0009d2000) (3) Data frame handling\nI0106 13:27:36.897294 476 log.go:172] (0xc0009d2000) (3) Data frame sent\nI0106 13:27:36.978423 476 log.go:172] (0xc000118dc0) Data frame received for 1\nI0106 13:27:36.978489 476 log.go:172] (0xc000118dc0) (0xc0009d2000) Stream removed, broadcasting: 3\nI0106 13:27:36.978542 476 log.go:172] (0xc000304820) (1) Data frame handling\nI0106 13:27:36.978581 476 log.go:172] (0xc000118dc0) (0xc0003048c0) Stream removed, broadcasting: 5\nI0106 13:27:36.978620 476 log.go:172] (0xc000304820) (1) Data frame sent\nI0106 13:27:36.978655 476 log.go:172] (0xc000118dc0) (0xc000304820) Stream removed, broadcasting: 1\nI0106 13:27:36.979000 476 log.go:172] (0xc000118dc0) Go away received\nI0106 13:27:36.979109 476 log.go:172] (0xc000118dc0) (0xc000304820) Stream removed, broadcasting: 1\nI0106 13:27:36.979127 476 log.go:172] (0xc000118dc0) (0xc0009d2000) Stream removed, broadcasting: 3\nI0106 13:27:36.979134 476 log.go:172] (0xc000118dc0) (0xc0003048c0) Stream removed, broadcasting: 5\n" Jan 6 13:27:36.989: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 6 13:27:36.989: INFO: stdout of mv -v 
/usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 6 13:27:36.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1085 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true' Jan 6 13:27:37.505: INFO: stderr: "I0106 13:27:37.167677 497 log.go:172] (0xc000104dc0) (0xc000874640) Create stream\nI0106 13:27:37.168011 497 log.go:172] (0xc000104dc0) (0xc000874640) Stream added, broadcasting: 1\nI0106 13:27:37.174252 497 log.go:172] (0xc000104dc0) Reply frame received for 1\nI0106 13:27:37.174427 497 log.go:172] (0xc000104dc0) (0xc0008746e0) Create stream\nI0106 13:27:37.174445 497 log.go:172] (0xc000104dc0) (0xc0008746e0) Stream added, broadcasting: 3\nI0106 13:27:37.176461 497 log.go:172] (0xc000104dc0) Reply frame received for 3\nI0106 13:27:37.176491 497 log.go:172] (0xc000104dc0) (0xc00077a320) Create stream\nI0106 13:27:37.176514 497 log.go:172] (0xc000104dc0) (0xc00077a320) Stream added, broadcasting: 5\nI0106 13:27:37.177601 497 log.go:172] (0xc000104dc0) Reply frame received for 5\nI0106 13:27:37.280871 497 log.go:172] (0xc000104dc0) Data frame received for 5\nI0106 13:27:37.280977 497 log.go:172] (0xc00077a320) (5) Data frame handling\nI0106 13:27:37.281011 497 log.go:172] (0xc00077a320) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0106 13:27:37.316692 497 log.go:172] (0xc000104dc0) Data frame received for 3\nI0106 13:27:37.316730 497 log.go:172] (0xc0008746e0) (3) Data frame handling\nI0106 13:27:37.316759 497 log.go:172] (0xc0008746e0) (3) Data frame sent\nI0106 13:27:37.486924 497 log.go:172] (0xc000104dc0) (0xc0008746e0) Stream removed, broadcasting: 3\nI0106 13:27:37.487339 497 log.go:172] (0xc000104dc0) Data frame received for 1\nI0106 13:27:37.487588 497 log.go:172] (0xc000104dc0) (0xc00077a320) Stream removed, broadcasting: 5\nI0106 13:27:37.487662 497 log.go:172] (0xc000874640) (1) Data frame handling\nI0106 13:27:37.487689 497 log.go:172] (0xc000874640) (1) Data frame sent\nI0106 13:27:37.487712 497 log.go:172] (0xc000104dc0) (0xc000874640) Stream removed, broadcasting: 1\nI0106 13:27:37.487743 497 log.go:172] (0xc000104dc0) Go away received\nI0106 13:27:37.489440 497 log.go:172] (0xc000104dc0) (0xc000874640) Stream removed, broadcasting: 1\nI0106 13:27:37.489552 497 log.go:172] (0xc000104dc0) (0xc0008746e0) Stream removed, broadcasting: 3\nI0106 13:27:37.489571 497 log.go:172] (0xc000104dc0) (0xc00077a320) Stream removed, broadcasting: 5\n" Jan 6 13:27:37.505: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n" Jan 6 13:27:37.505: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html' Jan 6 13:27:37.505: INFO: Waiting for statefulset status.replicas updated to 0 Jan 6 13:27:37.514: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 Jan 6 13:27:47.531: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 6 13:27:47.532: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 6 13:27:47.532: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 6 13:27:47.554: INFO: POD NODE PHASE GRACE CONDITIONS Jan 6 13:27:47.554: INFO: ss-0 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 
2020-01-06 13:27:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:03 +0000 UTC }] Jan 6 13:27:47.554: INFO: ss-1 iruya-server-sfge57q7djm7 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:23 +0000 UTC }] Jan 6 13:27:47.554: INFO: ss-2 iruya-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:23 +0000 UTC }] Jan 6 13:27:47.554: INFO: Jan 6 13:27:47.554: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 6 13:27:49.394: INFO: POD NODE PHASE GRACE CONDITIONS Jan 6 13:27:49.394: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:03 +0000 UTC }] Jan 6 13:27:49.394: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:23 +0000 UTC }] Jan 6 13:27:49.394: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:23 +0000 UTC }] Jan 6 13:27:49.395: INFO: Jan 6 13:27:49.395: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 6 13:27:50.405: INFO: POD NODE PHASE GRACE CONDITIONS Jan 6 13:27:50.405: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 
00:00:00 +0000 UTC 2020-01-06 13:27:03 +0000 UTC }] Jan 6 13:27:50.405: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:23 +0000 UTC }] Jan 6 13:27:50.405: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:23 +0000 UTC }] Jan 6 13:27:50.405: INFO: Jan 6 13:27:50.405: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 6 13:27:51.588: INFO: POD NODE PHASE GRACE CONDITIONS Jan 6 13:27:51.589: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:03 +0000 UTC }] Jan 6 13:27:51.589: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:23 +0000 UTC }] Jan 6 13:27:51.589: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:23 +0000 UTC }] Jan 6 13:27:51.589: INFO: Jan 6 13:27:51.589: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 6 13:27:52.606: INFO: POD NODE PHASE GRACE CONDITIONS Jan 6 13:27:52.607: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:03 +0000 UTC }] Jan 6 13:27:52.607: INFO: ss-1 iruya-server-sfge57q7djm7 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 
13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:23 +0000 UTC }] Jan 6 13:27:52.607: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:23 +0000 UTC }] Jan 6 13:27:52.607: INFO: Jan 6 13:27:52.607: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 6 13:27:53.623: INFO: POD NODE PHASE GRACE CONDITIONS Jan 6 13:27:53.623: INFO: ss-0 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:03 +0000 UTC }] Jan 6 13:27:53.623: INFO: ss-1 iruya-server-sfge57q7djm7 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:23 +0000 UTC }] Jan 6 13:27:53.623: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:23 +0000 UTC }] Jan 6 13:27:53.623: INFO: Jan 6 13:27:53.623: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 6 13:27:54.636: INFO: POD NODE PHASE GRACE CONDITIONS Jan 6 13:27:54.636: INFO: ss-0 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:03 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:36 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:03 +0000 UTC }] Jan 6 13:27:54.636: INFO: ss-2 iruya-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-01-06 13:27:23 +0000 UTC }]
Jan 6 13:27:54.636: INFO:
Jan 6 13:27:54.636: INFO: StatefulSet ss has not reached scale 0, at 2
Jan 6 13:27:55.648: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 6 13:27:55.648: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:23 +0000 UTC }]
Jan 6 13:27:55.648: INFO:
Jan 6 13:27:55.648: INFO: StatefulSet ss has not reached scale 0, at 1
Jan 6 13:27:56.660: INFO: POD NODE PHASE GRACE CONDITIONS
Jan 6 13:27:56.660: INFO: ss-2 iruya-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:24 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:37 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:27:23 +0000 UTC }]
Jan 6 13:27:56.660: INFO:
Jan 6 13:27:56.660: INFO: StatefulSet ss has not reached scale 0, at 1
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1085
Jan 6 13:27:57.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1085 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 6 13:27:57.947: INFO: rc: 1
Jan 6 13:27:57.947: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1085 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] error: unable to upgrade connection: container not found ("nginx") [] 0xc001edfdd0 exit status 1 true [0xc0019ce068 0xc0019ce080 0xc0019ce098] [0xc0019ce068 0xc0019ce080 0xc0019ce098] [0xc0019ce078 0xc0019ce090] [0xba6c50 0xba6c50] 0xc002dfecc0 }:
Command stdout:
stderr: error: unable to upgrade connection: container not found ("nginx")
error: exit status 1
Jan 6 13:28:07.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1085 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 6 13:28:08.141: INFO: rc: 1
Jan 6 13:28:08.141: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1085 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] [] Error from server (NotFound): pods "ss-2" not found [] 0xc001d2d950 exit status 1 true [0xc00039a758 0xc00039a800 0xc00039a850] [0xc00039a758 0xc00039a800 0xc00039a850] [0xc00039a7d8 0xc00039a830] [0xba6c50 0xba6c50] 0xc003086f60 }:
Command stdout:
stderr: Error from server (NotFound): pods "ss-2" not found
error: exit status 1
[... the same RunHostCmd was retried every 10s, 28 further times (Jan 6 13:28:18.142 through Jan 6 13:32:56.084), each returning rc: 1 with the identical stderr: Error from server (NotFound): pods "ss-2" not found ...]
Jan 6 13:33:06.236: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1085 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan 6 13:33:06.413: INFO: rc: 1
Jan 6 13:33:06.413: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2:
Jan 6 13:33:06.413: INFO: Scaling statefulset ss to 0
Jan 6 13:33:06.428: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan 6 13:33:06.431: INFO: Deleting all statefulset in ns statefulset-1085
Jan 6 13:33:06.434: INFO: Scaling statefulset ss to 0
Jan 6 13:33:06.442: INFO: Waiting for statefulset status.replicas updated to 0
Jan 6 13:33:06.444: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan 6 13:33:06.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1085" for this suite.
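The burst-scaling scenario logged above can be reproduced by hand. A minimal sketch in shell, assuming a comparable StatefulSet named ss created with podManagementPolicy: Parallel (the mode burst scaling depends on; with the default OrderedReady, scaling waits for each pod to become Ready) whose nginx readiness probe requires /usr/share/nginx/html/index.html to be present; the namespace is illustrative:
# break ss-0's readiness probe the same way the test does: move the file nginx serves
$ kubectl --namespace=statefulset-1085 exec ss-0 -- /bin/sh -c 'mv -v /usr/share/nginx/html/index.html /tmp/ || true'
# burst scale-up proceeds even though ss-0 is Running but not Ready
$ kubectl --namespace=statefulset-1085 scale statefulset ss --replicas=3
# restore the file so the probe passes and the pod returns to Ready
$ kubectl --namespace=statefulset-1085 exec ss-0 -- /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'
# burst scale-down likewise does not wait for unhealthy pods
$ kubectl --namespace=statefulset-1085 scale statefulset ss --replicas=0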
Jan 6 13:33:14.490: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:33:14.613: INFO: namespace statefulset-1085 deletion completed in 8.142201132s • [SLOW TEST:371.648 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 Burst scaling should run to completion even with unhealthy pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:33:14.614: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [BeforeEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1420 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: running the image docker.io/library/nginx:1.14-alpine Jan 6 13:33:14.686: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-1658' Jan 6 13:33:14.848: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Jan 6 13:33:14.848: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n" STEP: verifying the pod controlled by e2e-test-nginx-deployment gets created [AfterEach] [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1426 Jan 6 13:33:16.902: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-1658' Jan 6 13:33:17.033: INFO: stderr: "" Jan 6 13:33:17.033: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:33:17.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1658" for this suite. 
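The deprecation warning captured in stderr above dates this run: in kubectl of this era (v1.15), kubectl run without --generator defaults to creating a Deployment and warns. A sketch of the two replacements the warning itself points to, using the same image (resource names are illustrative):
# explicit Deployment creation, the suggested long-term replacement
$ kubectl --namespace=kubectl-1658 create deployment e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine
# or a bare pod via the run-pod/v1 generator
$ kubectl --namespace=kubectl-1658 run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine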
Jan 6 13:33:23.086: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:33:23.208: INFO: namespace kubectl-1658 deletion completed in 6.154228962s • [SLOW TEST:8.594 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 [k8s.io] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692 should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:33:23.209: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0106 13:33:53.412959 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 6 13:33:53.413: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:33:53.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9338" for this suite. 
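
Note: the orphaning behavior checked above can be reproduced by hand; a sketch with hypothetical names (in kubectl v1.15, --cascade=false maps to propagationPolicy: Orphan):

# Delete the Deployment but leave its ReplicaSet (and pods) behind:
kubectl delete deployment my-deployment --cascade=false

# The same delete expressed directly against the API via `kubectl proxy`:
curl -X DELETE "http://127.0.0.1:8001/apis/apps/v1/namespaces/default/deployments/my-deployment" \
  -H 'Content-Type: application/json' \
  -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Orphan"}'

# The ReplicaSet survives, now with its ownerReference cleared:
kubectl get rs --namespace=default
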
Jan 6 13:34:02.472: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:34:03.014: INFO: namespace gc-9338 deletion completed in 9.585260906s • [SLOW TEST:39.806 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:34:03.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change Jan 6 13:34:12.206: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:34:12.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-8578" for this suite. 
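
Note: the adopt/release sequence above can be sketched with plain kubectl; names and image here are illustrative, not the test's own:

# 1. A bare pod whose labels match the selector of a later ReplicaSet:
kubectl run pod-adoption-release --generator=run-pod/v1 \
  --image=docker.io/library/nginx:1.14-alpine --labels=name=pod-adoption-release

# 2. A ReplicaSet with a matching selector adopts the orphan instead of
#    creating a new pod:
cat <<EOF | kubectl create -f -
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: pod-adoption-release
spec:
  replicas: 1
  selector:
    matchLabels:
      name: pod-adoption-release
  template:
    metadata:
      labels:
        name: pod-adoption-release
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF

# 3. Changing the label releases the pod (its ownerReference is removed)
#    and the ReplicaSet creates a replacement to restore replicas=1:
kubectl label pod pod-adoption-release name=released --overwrite
kubectl get pod pod-adoption-release -o jsonpath='{.metadata.ownerReferences}'
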
Jan 6 13:34:36.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:34:36.508: INFO: namespace replicaset-8578 deletion completed in 24.221914135s • [SLOW TEST:33.493 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:34:36.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 6 13:34:36.641: INFO: Waiting up to 5m0s for pod "downwardapi-volume-64e50be2-9934-417e-8231-0a40adc95cfd" in namespace "downward-api-4104" to be "success or failure" Jan 6 13:34:36.659: INFO: Pod "downwardapi-volume-64e50be2-9934-417e-8231-0a40adc95cfd": Phase="Pending", Reason="", readiness=false. Elapsed: 17.40207ms Jan 6 13:34:38.678: INFO: Pod "downwardapi-volume-64e50be2-9934-417e-8231-0a40adc95cfd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036138559s Jan 6 13:34:40.692: INFO: Pod "downwardapi-volume-64e50be2-9934-417e-8231-0a40adc95cfd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050711918s Jan 6 13:34:42.707: INFO: Pod "downwardapi-volume-64e50be2-9934-417e-8231-0a40adc95cfd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065571819s Jan 6 13:34:44.716: INFO: Pod "downwardapi-volume-64e50be2-9934-417e-8231-0a40adc95cfd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0748861s STEP: Saw pod success Jan 6 13:34:44.716: INFO: Pod "downwardapi-volume-64e50be2-9934-417e-8231-0a40adc95cfd" satisfied condition "success or failure" Jan 6 13:34:44.720: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-64e50be2-9934-417e-8231-0a40adc95cfd container client-container: STEP: delete the pod Jan 6 13:34:44.782: INFO: Waiting for pod downwardapi-volume-64e50be2-9934-417e-8231-0a40adc95cfd to disappear Jan 6 13:34:44.878: INFO: Pod downwardapi-volume-64e50be2-9934-417e-8231-0a40adc95cfd no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:34:44.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4104" for this suite. 
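
Note: a minimal pod that exercises the same DefaultMode behavior might look like this (the busybox image and the 0400 mode are illustrative; the downward API volume is the part under test):

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    # Print the file's mode; with defaultMode: 0400 this should be "400".
    command: ["sh", "-c", "stat -c '%a' /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
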
Jan 6 13:34:50.916: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:34:51.050: INFO: namespace downward-api-4104 deletion completed in 6.162649415s • [SLOW TEST:14.541 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:34:51.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: getting the auto-created API token STEP: reading a file in the container Jan 6 13:34:59.813: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8384 pod-service-account-7a60ffaf-b795-422a-908e-bbb1ab1ea050 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container Jan 6 13:35:00.392: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8384 pod-service-account-7a60ffaf-b795-422a-908e-bbb1ab1ea050 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container Jan 6 13:35:00.985: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-8384 pod-service-account-7a60ffaf-b795-422a-908e-bbb1ab1ea050 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:35:01.411: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-8384" for this suite. 
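
Note: the three `kubectl exec ... cat` calls above read the standard service-account projection paths; the same check works on any running pod (pod name is illustrative):

POD=pod-service-account-example   # substitute a running pod's name
for f in token ca.crt namespace; do
  kubectl exec "$POD" -- cat "/var/run/secrets/kubernetes.io/serviceaccount/$f"
  echo
done
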
Jan 6 13:35:07.453: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:35:07.616: INFO: namespace svcaccounts-8384 deletion completed in 6.198129152s • [SLOW TEST:16.565 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:35:07.616: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating secret with name projected-secret-test-1fef3117-135a-4a2c-8ebd-677972dfed78 STEP: Creating a pod to test consume secrets Jan 6 13:35:07.739: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cfed9795-597a-4c71-a0c6-47bdd57d5839" in namespace "projected-6749" to be "success or failure" Jan 6 13:35:07.763: INFO: Pod "pod-projected-secrets-cfed9795-597a-4c71-a0c6-47bdd57d5839": Phase="Pending", Reason="", readiness=false. Elapsed: 24.036791ms Jan 6 13:35:09.771: INFO: Pod "pod-projected-secrets-cfed9795-597a-4c71-a0c6-47bdd57d5839": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031774388s Jan 6 13:35:11.782: INFO: Pod "pod-projected-secrets-cfed9795-597a-4c71-a0c6-47bdd57d5839": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042496989s Jan 6 13:35:13.816: INFO: Pod "pod-projected-secrets-cfed9795-597a-4c71-a0c6-47bdd57d5839": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077152874s Jan 6 13:35:15.834: INFO: Pod "pod-projected-secrets-cfed9795-597a-4c71-a0c6-47bdd57d5839": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095121468s Jan 6 13:35:17.840: INFO: Pod "pod-projected-secrets-cfed9795-597a-4c71-a0c6-47bdd57d5839": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.101242637s STEP: Saw pod success Jan 6 13:35:17.840: INFO: Pod "pod-projected-secrets-cfed9795-597a-4c71-a0c6-47bdd57d5839" satisfied condition "success or failure" Jan 6 13:35:17.851: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-cfed9795-597a-4c71-a0c6-47bdd57d5839 container secret-volume-test: STEP: delete the pod Jan 6 13:35:17.916: INFO: Waiting for pod pod-projected-secrets-cfed9795-597a-4c71-a0c6-47bdd57d5839 to disappear Jan 6 13:35:17.922: INFO: Pod pod-projected-secrets-cfed9795-597a-4c71-a0c6-47bdd57d5839 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:35:17.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6749" for this suite. 
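
Note: a sketch of the pattern tested above, one secret projected into two separate volumes in the same pod (secret name, key, and image are illustrative):

kubectl create secret generic projected-secret-test --from-literal=username=admin

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    # The same secret should be readable through both mounts:
    command: ["sh", "-c", "cat /etc/projected-one/username /etc/projected-two/username"]
    volumeMounts:
    - name: projected-one
      mountPath: /etc/projected-one
    - name: projected-two
      mountPath: /etc/projected-two
  volumes:
  - name: projected-one
    projected:
      sources:
      - secret:
          name: projected-secret-test
  - name: projected-two
    projected:
      sources:
      - secret:
          name: projected-secret-test
EOF
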
Jan 6 13:35:23.954: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:35:24.048: INFO: namespace projected-6749 deletion completed in 6.11942213s • [SLOW TEST:16.433 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:35:24.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating replication controller my-hostname-basic-33ba9108-1e5a-442c-955b-49b750d0adc6 Jan 6 13:35:24.145: INFO: Pod name my-hostname-basic-33ba9108-1e5a-442c-955b-49b750d0adc6: Found 0 pods out of 1 Jan 6 13:35:29.160: INFO: Pod name my-hostname-basic-33ba9108-1e5a-442c-955b-49b750d0adc6: Found 1 pods out of 1 Jan 6 13:35:29.160: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-33ba9108-1e5a-442c-955b-49b750d0adc6" are running Jan 6 13:35:31.175: INFO: Pod "my-hostname-basic-33ba9108-1e5a-442c-955b-49b750d0adc6-r85qq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-06 13:35:24 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-06 13:35:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-33ba9108-1e5a-442c-955b-49b750d0adc6]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-06 13:35:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-33ba9108-1e5a-442c-955b-49b750d0adc6]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-06 13:35:24 +0000 UTC Reason: Message:}]) Jan 6 13:35:31.175: INFO: Trying to dial the pod Jan 6 13:35:36.212: INFO: Controller my-hostname-basic-33ba9108-1e5a-442c-955b-49b750d0adc6: Got expected result from replica 1 [my-hostname-basic-33ba9108-1e5a-442c-955b-49b750d0adc6-r85qq]: "my-hostname-basic-33ba9108-1e5a-442c-955b-49b750d0adc6-r85qq", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:35:36.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5660" for this suite. 
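
Note: each replica above answers with its own pod name, so the image must echo its hostname; a sketch (the serve-hostname image and port are assumptions, any hostname-echoing server works):

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed image
        ports:
        - containerPort: 9376
EOF

kubectl get pods -l name=my-hostname-basic
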
Jan 6 13:35:42.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:35:42.344: INFO: namespace replication-controller-5660 deletion completed in 6.125666047s • [SLOW TEST:18.295 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:35:42.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating configMap with name projected-configmap-test-volume-6352d571-6d78-4abc-952e-891158f946fb STEP: Creating a pod to test consume configMaps Jan 6 13:35:42.461: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-1988ea89-13a2-4fa9-bb6f-553fc25fe232" in namespace "projected-4165" to be "success or failure" Jan 6 13:35:42.474: INFO: Pod "pod-projected-configmaps-1988ea89-13a2-4fa9-bb6f-553fc25fe232": Phase="Pending", Reason="", readiness=false. Elapsed: 13.500631ms Jan 6 13:35:44.489: INFO: Pod "pod-projected-configmaps-1988ea89-13a2-4fa9-bb6f-553fc25fe232": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027841388s Jan 6 13:35:46.511: INFO: Pod "pod-projected-configmaps-1988ea89-13a2-4fa9-bb6f-553fc25fe232": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050035864s Jan 6 13:35:48.525: INFO: Pod "pod-projected-configmaps-1988ea89-13a2-4fa9-bb6f-553fc25fe232": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064013858s Jan 6 13:35:50.539: INFO: Pod "pod-projected-configmaps-1988ea89-13a2-4fa9-bb6f-553fc25fe232": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078141364s Jan 6 13:35:52.552: INFO: Pod "pod-projected-configmaps-1988ea89-13a2-4fa9-bb6f-553fc25fe232": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.090832383s STEP: Saw pod success Jan 6 13:35:52.552: INFO: Pod "pod-projected-configmaps-1988ea89-13a2-4fa9-bb6f-553fc25fe232" satisfied condition "success or failure" Jan 6 13:35:52.555: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-1988ea89-13a2-4fa9-bb6f-553fc25fe232 container projected-configmap-volume-test: STEP: delete the pod Jan 6 13:35:52.628: INFO: Waiting for pod pod-projected-configmaps-1988ea89-13a2-4fa9-bb6f-553fc25fe232 to disappear Jan 6 13:35:52.638: INFO: Pod pod-projected-configmaps-1988ea89-13a2-4fa9-bb6f-553fc25fe232 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:35:52.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4165" for this suite. Jan 6 13:35:58.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:35:58.936: INFO: namespace projected-4165 deletion completed in 6.29281699s • [SLOW TEST:16.592 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33 should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:35:58.937: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
Jan 6 13:35:59.089: INFO: Number of nodes with available pods: 0 Jan 6 13:35:59.089: INFO: Node iruya-node is running more than one daemon pod Jan 6 13:36:00.391: INFO: Number of nodes with available pods: 0 Jan 6 13:36:00.391: INFO: Node iruya-node is running more than one daemon pod Jan 6 13:36:01.662: INFO: Number of nodes with available pods: 0 Jan 6 13:36:01.662: INFO: Node iruya-node is running more than one daemon pod Jan 6 13:36:02.108: INFO: Number of nodes with available pods: 0 Jan 6 13:36:02.108: INFO: Node iruya-node is running more than one daemon pod Jan 6 13:36:03.129: INFO: Number of nodes with available pods: 0 Jan 6 13:36:03.129: INFO: Node iruya-node is running more than one daemon pod Jan 6 13:36:04.102: INFO: Number of nodes with available pods: 0 Jan 6 13:36:04.102: INFO: Node iruya-node is running more than one daemon pod Jan 6 13:36:05.490: INFO: Number of nodes with available pods: 0 Jan 6 13:36:05.490: INFO: Node iruya-node is running more than one daemon pod Jan 6 13:36:06.113: INFO: Number of nodes with available pods: 0 Jan 6 13:36:06.113: INFO: Node iruya-node is running more than one daemon pod Jan 6 13:36:07.463: INFO: Number of nodes with available pods: 0 Jan 6 13:36:07.463: INFO: Node iruya-node is running more than one daemon pod Jan 6 13:36:08.157: INFO: Number of nodes with available pods: 1 Jan 6 13:36:08.157: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 6 13:36:09.110: INFO: Number of nodes with available pods: 1 Jan 6 13:36:09.110: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 6 13:36:10.109: INFO: Number of nodes with available pods: 2 Jan 6 13:36:10.109: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. Jan 6 13:36:10.239: INFO: Number of nodes with available pods: 1 Jan 6 13:36:10.239: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 6 13:36:11.259: INFO: Number of nodes with available pods: 1 Jan 6 13:36:11.259: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 6 13:36:12.615: INFO: Number of nodes with available pods: 1 Jan 6 13:36:12.615: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 6 13:36:13.264: INFO: Number of nodes with available pods: 1 Jan 6 13:36:13.264: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 6 13:36:14.270: INFO: Number of nodes with available pods: 1 Jan 6 13:36:14.270: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 6 13:36:15.254: INFO: Number of nodes with available pods: 1 Jan 6 13:36:15.254: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 6 13:36:16.839: INFO: Number of nodes with available pods: 1 Jan 6 13:36:16.840: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 6 13:36:17.257: INFO: Number of nodes with available pods: 1 Jan 6 13:36:17.257: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 6 13:36:18.253: INFO: Number of nodes with available pods: 1 Jan 6 13:36:18.254: INFO: Node iruya-server-sfge57q7djm7 is running more than one daemon pod Jan 6 13:36:19.257: INFO: Number of nodes with available pods: 2 Jan 6 13:36:19.257: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. 
[AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8836, will wait for the garbage collector to delete the pods Jan 6 13:36:19.338: INFO: Deleting DaemonSet.extensions daemon-set took: 15.433593ms Jan 6 13:36:19.639: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.757095ms Jan 6 13:36:27.848: INFO: Number of nodes with available pods: 0 Jan 6 13:36:27.848: INFO: Number of running nodes: 0, number of available pods: 0 Jan 6 13:36:27.853: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8836/daemonsets","resourceVersion":"19524467"},"items":null} Jan 6 13:36:27.867: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8836/pods","resourceVersion":"19524467"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:36:27.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-8836" for this suite. Jan 6 13:36:33.951: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:36:34.124: INFO: namespace daemonsets-8836 deletion completed in 6.225199436s • [SLOW TEST:35.187 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:36:34.124: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:36:40.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8154" for this suite. Jan 6 13:36:46.584: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:36:46.754: INFO: namespace namespaces-8154 deletion completed in 6.205439579s STEP: Destroying namespace "nsdeletetest-5489" for this suite. 
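
Note: the namespace-deletion spec above can be replayed by hand; namespace and service names here are illustrative:

kubectl create namespace nsdeletetest
kubectl create service clusterip test-service --tcp=80:80 --namespace=nsdeletetest

# Deleting the namespace removes everything in it, services included:
kubectl delete namespace nsdeletetest

# After deletion finishes, a recreated namespace starts out empty:
kubectl create namespace nsdeletetest
kubectl get services --namespace=nsdeletetest   # expect no resources
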
Jan 6 13:36:46.756: INFO: Namespace nsdeletetest-5489 was already deleted STEP: Destroying namespace "nsdeletetest-8899" for this suite. Jan 6 13:36:52.794: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:36:53.022: INFO: namespace nsdeletetest-8899 deletion completed in 6.265854388s • [SLOW TEST:18.898 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:36:53.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating a pod to test downward API volume plugin Jan 6 13:36:53.196: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5f9b2e1c-5eee-44b9-aeae-0ef603d9bdda" in namespace "projected-3074" to be "success or failure" Jan 6 13:36:53.234: INFO: Pod "downwardapi-volume-5f9b2e1c-5eee-44b9-aeae-0ef603d9bdda": Phase="Pending", Reason="", readiness=false. Elapsed: 38.054869ms Jan 6 13:36:55.251: INFO: Pod "downwardapi-volume-5f9b2e1c-5eee-44b9-aeae-0ef603d9bdda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055079918s Jan 6 13:36:57.272: INFO: Pod "downwardapi-volume-5f9b2e1c-5eee-44b9-aeae-0ef603d9bdda": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075682472s Jan 6 13:36:59.315: INFO: Pod "downwardapi-volume-5f9b2e1c-5eee-44b9-aeae-0ef603d9bdda": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119284717s Jan 6 13:37:01.324: INFO: Pod "downwardapi-volume-5f9b2e1c-5eee-44b9-aeae-0ef603d9bdda": Phase="Pending", Reason="", readiness=false. Elapsed: 8.127489322s Jan 6 13:37:03.339: INFO: Pod "downwardapi-volume-5f9b2e1c-5eee-44b9-aeae-0ef603d9bdda": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.142604108s STEP: Saw pod success Jan 6 13:37:03.339: INFO: Pod "downwardapi-volume-5f9b2e1c-5eee-44b9-aeae-0ef603d9bdda" satisfied condition "success or failure" Jan 6 13:37:03.348: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5f9b2e1c-5eee-44b9-aeae-0ef603d9bdda container client-container: STEP: delete the pod Jan 6 13:37:03.471: INFO: Waiting for pod downwardapi-volume-5f9b2e1c-5eee-44b9-aeae-0ef603d9bdda to disappear Jan 6 13:37:03.486: INFO: Pod downwardapi-volume-5f9b2e1c-5eee-44b9-aeae-0ef603d9bdda no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:37:03.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3074" for this suite. Jan 6 13:37:09.524: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:37:09.672: INFO: namespace projected-3074 deletion completed in 6.1774183s • [SLOW TEST:16.649 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:37:09.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating the pod Jan 6 13:37:20.439: INFO: Successfully updated pod "annotationupdatea63ee942-0c50-4490-aaf5-1df4c62f3b1b" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:37:22.532: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-41" for this suite. 
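
Note: the annotation-update spec above relies on the kubelet refreshing downward API volume files after pod metadata changes; a sketch (pod name, annotation, and image are illustrative):

cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate
  annotations:
    builder: alice
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    # Keep printing the projected annotations file so updates are visible:
    command: ["sh", "-c", "while true; do cat /etc/podinfo/annotations; echo; sleep 5; done"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
EOF

# Update the annotation; the mounted file is refreshed shortly after:
kubectl annotate pod annotationupdate builder=bob --overwrite
kubectl logs annotationupdate --tail=5
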
Jan 6 13:37:44.594: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:37:44.722: INFO: namespace projected-41 deletion completed in 22.17009276s • [SLOW TEST:35.049 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:37:44.722: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0106 13:38:26.764505 8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. Jan 6 13:38:26.764: INFO: For apiserver_request_total: For apiserver_request_latencies_summary: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:38:26.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-4669" for this suite. 
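
Note: as with the ReplicaSet-orphaning spec earlier, this behavior can be reproduced with cascading deletion disabled; names are hypothetical:

# Delete the controller but keep its pods running:
kubectl delete rc my-rc --cascade=false

# The pods survive with their ownerReferences cleared; a new controller
# with the same selector would re-adopt them.
kubectl get pods -l name=my-rc
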
Jan 6 13:38:38.792: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:38:38.947: INFO: namespace gc-4669 deletion completed in 12.178357191s • [SLOW TEST:54.225 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:38:38.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: Creating pod pod-subpath-test-downwardapi-9dpg STEP: Creating a pod to test atomic-volume-subpath Jan 6 13:38:43.091: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-9dpg" in namespace "subpath-3326" to be "success or failure" Jan 6 13:38:43.144: INFO: Pod "pod-subpath-test-downwardapi-9dpg": Phase="Pending", Reason="", readiness=false. Elapsed: 52.9327ms Jan 6 13:38:45.190: INFO: Pod "pod-subpath-test-downwardapi-9dpg": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099049022s Jan 6 13:38:47.203: INFO: Pod "pod-subpath-test-downwardapi-9dpg": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112416403s Jan 6 13:38:49.214: INFO: Pod "pod-subpath-test-downwardapi-9dpg": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123127347s Jan 6 13:38:51.229: INFO: Pod "pod-subpath-test-downwardapi-9dpg": Phase="Running", Reason="", readiness=true. Elapsed: 8.138283934s Jan 6 13:38:53.241: INFO: Pod "pod-subpath-test-downwardapi-9dpg": Phase="Running", Reason="", readiness=true. Elapsed: 10.150378434s Jan 6 13:38:55.253: INFO: Pod "pod-subpath-test-downwardapi-9dpg": Phase="Running", Reason="", readiness=true. Elapsed: 12.162583707s Jan 6 13:38:57.271: INFO: Pod "pod-subpath-test-downwardapi-9dpg": Phase="Running", Reason="", readiness=true. Elapsed: 14.180569576s Jan 6 13:38:59.284: INFO: Pod "pod-subpath-test-downwardapi-9dpg": Phase="Running", Reason="", readiness=true. Elapsed: 16.193087144s Jan 6 13:39:01.294: INFO: Pod "pod-subpath-test-downwardapi-9dpg": Phase="Running", Reason="", readiness=true. Elapsed: 18.203690709s Jan 6 13:39:03.305: INFO: Pod "pod-subpath-test-downwardapi-9dpg": Phase="Running", Reason="", readiness=true. Elapsed: 20.214200145s Jan 6 13:39:05.316: INFO: Pod "pod-subpath-test-downwardapi-9dpg": Phase="Running", Reason="", readiness=true. Elapsed: 22.225101091s Jan 6 13:39:07.332: INFO: Pod "pod-subpath-test-downwardapi-9dpg": Phase="Running", Reason="", readiness=true. Elapsed: 24.241414081s Jan 6 13:39:09.340: INFO: Pod "pod-subpath-test-downwardapi-9dpg": Phase="Running", Reason="", readiness=true. 
Elapsed: 26.249013889s Jan 6 13:39:11.354: INFO: Pod "pod-subpath-test-downwardapi-9dpg": Phase="Running", Reason="", readiness=true. Elapsed: 28.263539083s Jan 6 13:39:13.363: INFO: Pod "pod-subpath-test-downwardapi-9dpg": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.272765268s STEP: Saw pod success Jan 6 13:39:13.363: INFO: Pod "pod-subpath-test-downwardapi-9dpg" satisfied condition "success or failure" Jan 6 13:39:13.369: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-downwardapi-9dpg container test-container-subpath-downwardapi-9dpg: STEP: delete the pod Jan 6 13:39:13.514: INFO: Waiting for pod pod-subpath-test-downwardapi-9dpg to disappear Jan 6 13:39:13.526: INFO: Pod pod-subpath-test-downwardapi-9dpg no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-9dpg Jan 6 13:39:13.526: INFO: Deleting pod "pod-subpath-test-downwardapi-9dpg" in namespace "subpath-3326" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:39:13.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-3326" for this suite. Jan 6 13:39:19.564: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Jan 6 13:39:19.715: INFO: namespace subpath-3326 deletion completed in 6.177405612s • [SLOW TEST:40.768 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 ------------------------------ SSSSSSS ------------------------------ [sig-cli] Kubectl client [k8s.io] Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150 STEP: Creating a kubernetes client Jan 6 13:39:19.715: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697 STEP: creating all guestbook components Jan 6 13:39:19.788: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend
Jan 6 13:39:19.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-412' Jan 6 13:39:20.328: INFO: stderr: "" Jan 6 13:39:20.328: INFO: stdout: "service/redis-slave created\n" Jan 6 13:39:20.329: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend
Jan 6 13:39:20.329: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-412' Jan 6 13:39:21.175: INFO: stderr: "" Jan 6 13:39:21.175: INFO: stdout: "service/redis-master created\n" Jan 6 13:39:21.176: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
Jan 6 13:39:21.176: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-412' Jan 6 13:39:21.741: INFO: stderr: "" Jan 6 13:39:21.742: INFO: stdout: "service/frontend created\n" Jan 6 13:39:21.742: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80
Jan 6 13:39:21.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-412' Jan 6 13:39:22.222: INFO: stderr: "" Jan 6 13:39:22.222: INFO: stdout: "deployment.apps/frontend created\n" Jan 6 13:39:22.223: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Jan 6 13:39:22.223: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-412' Jan 6 13:39:22.709: INFO: stderr: "" Jan 6 13:39:22.710: INFO: stdout: "deployment.apps/redis-master created\n" Jan 6 13:39:22.710: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/google-samples/gb-redisslave:v3
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379
Jan 6 13:39:22.711: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-412' Jan 6 13:39:23.736: INFO: stderr: "" Jan 6 13:39:23.736: INFO: stdout: "deployment.apps/redis-slave created\n" STEP: validating guestbook app Jan 6 13:39:23.736: INFO: Waiting for all frontend pods to be Running. Jan 6 13:39:48.789: INFO: Waiting for frontend to serve content. Jan 6 13:39:48.944: INFO: Trying to add a new entry to the guestbook. Jan 6 13:39:49.003: INFO: Verifying that added entry can be retrieved. Jan 6 13:39:49.104: INFO: Failed to get response from guestbook.
err: , response: {"data": ""} STEP: using delete to clean up resources Jan 6 13:39:54.210: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-412' Jan 6 13:39:54.491: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 6 13:39:54.492: INFO: stdout: "service \"redis-slave\" force deleted\n" STEP: using delete to clean up resources Jan 6 13:39:54.493: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-412' Jan 6 13:39:54.714: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 6 13:39:54.714: INFO: stdout: "service \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 6 13:39:54.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-412' Jan 6 13:39:54.993: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 6 13:39:54.993: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 6 13:39:54.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-412' Jan 6 13:39:55.147: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 6 13:39:55.147: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources Jan 6 13:39:55.148: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-412' Jan 6 13:39:55.251: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 6 13:39:55.251: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n" STEP: using delete to clean up resources Jan 6 13:39:55.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-412' Jan 6 13:39:55.406: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" Jan 6 13:39:55.406: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 Jan 6 13:39:55.407: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-412" for this suite. 
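
Note: each cleanup call above uses immediate deletion, hence the repeated warning; the equivalent by hand (for any of the guestbook objects) is:

# Remove the object from the API at once; as the warning notes, its
# containers may keep running on the node for a short while afterwards.
kubectl delete deployment frontend --grace-period=0 --force --namespace=kubectl-412
kubectl delete service frontend --grace-period=0 --force --namespace=kubectl-412
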
Jan  6 13:40:39.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:40:39.833: INFO: namespace kubectl-412 deletion completed in 44.409522751s

• [SLOW TEST:80.117 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:40:39.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan  6 13:40:39.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3924'
Jan  6 13:40:40.283: INFO: stderr: ""
Jan  6 13:40:40.283: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  6 13:40:40.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3924'
Jan  6 13:40:40.714: INFO: stderr: ""
Jan  6 13:40:40.714: INFO: stdout: "update-demo-nautilus-85gjs update-demo-nautilus-k4t6f "
Jan  6 13:40:40.714: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85gjs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:40:40.960: INFO: stderr: ""
Jan  6 13:40:40.960: INFO: stdout: ""
Jan  6 13:40:40.960: INFO: update-demo-nautilus-85gjs is created but not running
Jan  6 13:40:45.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3924'
Jan  6 13:40:46.100: INFO: stderr: ""
Jan  6 13:40:46.100: INFO: stdout: "update-demo-nautilus-85gjs update-demo-nautilus-k4t6f "
Jan  6 13:40:46.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85gjs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:40:46.301: INFO: stderr: ""
Jan  6 13:40:46.301: INFO: stdout: ""
Jan  6 13:40:46.301: INFO: update-demo-nautilus-85gjs is created but not running
Jan  6 13:40:51.302: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3924'
Jan  6 13:40:51.486: INFO: stderr: ""
Jan  6 13:40:51.486: INFO: stdout: "update-demo-nautilus-85gjs update-demo-nautilus-k4t6f "
Jan  6 13:40:51.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85gjs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:40:51.639: INFO: stderr: ""
Jan  6 13:40:51.639: INFO: stdout: "true"
Jan  6 13:40:51.640: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85gjs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:40:51.784: INFO: stderr: ""
Jan  6 13:40:51.784: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  6 13:40:51.784: INFO: validating pod update-demo-nautilus-85gjs
Jan  6 13:40:51.824: INFO: got data: { "image": "nautilus.jpg" }
Jan  6 13:40:51.824: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  6 13:40:51.824: INFO: update-demo-nautilus-85gjs is verified up and running
Jan  6 13:40:51.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k4t6f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:40:51.931: INFO: stderr: ""
Jan  6 13:40:51.931: INFO: stdout: ""
Jan  6 13:40:51.931: INFO: update-demo-nautilus-k4t6f is created but not running
Jan  6 13:40:56.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3924'
Jan  6 13:40:57.115: INFO: stderr: ""
Jan  6 13:40:57.115: INFO: stdout: "update-demo-nautilus-85gjs update-demo-nautilus-k4t6f "
Jan  6 13:40:57.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85gjs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:40:57.261: INFO: stderr: ""
Jan  6 13:40:57.261: INFO: stdout: "true"
Jan  6 13:40:57.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85gjs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:40:57.381: INFO: stderr: ""
Jan  6 13:40:57.381: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  6 13:40:57.381: INFO: validating pod update-demo-nautilus-85gjs
Jan  6 13:40:57.391: INFO: got data: { "image": "nautilus.jpg" }
Jan  6 13:40:57.391: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  6 13:40:57.391: INFO: update-demo-nautilus-85gjs is verified up and running
Jan  6 13:40:57.392: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k4t6f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:40:57.523: INFO: stderr: ""
Jan  6 13:40:57.523: INFO: stdout: "true"
Jan  6 13:40:57.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-k4t6f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:40:57.663: INFO: stderr: ""
Jan  6 13:40:57.663: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  6 13:40:57.663: INFO: validating pod update-demo-nautilus-k4t6f
Jan  6 13:40:57.676: INFO: got data: { "image": "nautilus.jpg" }
Jan  6 13:40:57.676: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  6 13:40:57.676: INFO: update-demo-nautilus-k4t6f is verified up and running
STEP: scaling down the replication controller
Jan  6 13:40:57.678: INFO: scanned /root for discovery docs:
Jan  6 13:40:57.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-3924'
Jan  6 13:40:58.925: INFO: stderr: ""
Jan  6 13:40:58.925: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  6 13:40:58.926: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3924'
Jan  6 13:40:59.130: INFO: stderr: ""
Jan  6 13:40:59.130: INFO: stdout: "update-demo-nautilus-85gjs update-demo-nautilus-k4t6f "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  6 13:41:04.131: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3924'
Jan  6 13:41:04.279: INFO: stderr: ""
Jan  6 13:41:04.279: INFO: stdout: "update-demo-nautilus-85gjs update-demo-nautilus-k4t6f "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan  6 13:41:09.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3924'
Jan  6 13:41:09.461: INFO: stderr: ""
Jan  6 13:41:09.461: INFO: stdout: "update-demo-nautilus-85gjs "
Jan  6 13:41:09.461: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85gjs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:41:09.622: INFO: stderr: ""
Jan  6 13:41:09.622: INFO: stdout: "true"
Jan  6 13:41:09.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85gjs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:41:09.742: INFO: stderr: ""
Jan  6 13:41:09.742: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  6 13:41:09.742: INFO: validating pod update-demo-nautilus-85gjs
Jan  6 13:41:09.755: INFO: got data: { "image": "nautilus.jpg" }
Jan  6 13:41:09.755: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  6 13:41:09.755: INFO: update-demo-nautilus-85gjs is verified up and running
STEP: scaling up the replication controller
Jan  6 13:41:09.758: INFO: scanned /root for discovery docs:
Jan  6 13:41:09.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-3924'
Jan  6 13:41:11.005: INFO: stderr: ""
Jan  6 13:41:11.006: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  6 13:41:11.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3924'
Jan  6 13:41:11.377: INFO: stderr: ""
Jan  6 13:41:11.377: INFO: stdout: "update-demo-nautilus-85gjs update-demo-nautilus-stmvf "
Jan  6 13:41:11.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85gjs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:41:11.823: INFO: stderr: ""
Jan  6 13:41:11.823: INFO: stdout: "true"
Jan  6 13:41:11.824: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85gjs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:41:11.982: INFO: stderr: ""
Jan  6 13:41:11.982: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  6 13:41:11.982: INFO: validating pod update-demo-nautilus-85gjs
Jan  6 13:41:11.992: INFO: got data: { "image": "nautilus.jpg" }
Jan  6 13:41:11.992: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  6 13:41:11.992: INFO: update-demo-nautilus-85gjs is verified up and running
Jan  6 13:41:11.992: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-stmvf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:41:12.110: INFO: stderr: ""
Jan  6 13:41:12.111: INFO: stdout: ""
Jan  6 13:41:12.111: INFO: update-demo-nautilus-stmvf is created but not running
Jan  6 13:41:17.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-3924'
Jan  6 13:41:17.307: INFO: stderr: ""
Jan  6 13:41:17.307: INFO: stdout: "update-demo-nautilus-85gjs update-demo-nautilus-stmvf "
Jan  6 13:41:17.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85gjs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:41:17.437: INFO: stderr: ""
Jan  6 13:41:17.437: INFO: stdout: "true"
Jan  6 13:41:17.437: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-85gjs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:41:17.536: INFO: stderr: ""
Jan  6 13:41:17.536: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  6 13:41:17.536: INFO: validating pod update-demo-nautilus-85gjs
Jan  6 13:41:17.545: INFO: got data: { "image": "nautilus.jpg" }
Jan  6 13:41:17.545: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  6 13:41:17.545: INFO: update-demo-nautilus-85gjs is verified up and running
Jan  6 13:41:17.545: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-stmvf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:41:17.658: INFO: stderr: ""
Jan  6 13:41:17.658: INFO: stdout: "true"
Jan  6 13:41:17.659: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-stmvf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3924'
Jan  6 13:41:17.835: INFO: stderr: ""
Jan  6 13:41:17.835: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  6 13:41:17.835: INFO: validating pod update-demo-nautilus-stmvf
Jan  6 13:41:17.844: INFO: got data: { "image": "nautilus.jpg" }
Jan  6 13:41:17.844: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan  6 13:41:17.844: INFO: update-demo-nautilus-stmvf is verified up and running
STEP: using delete to clean up resources
Jan  6 13:41:17.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-3924'
Jan  6 13:41:18.019: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  6 13:41:18.020: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  6 13:41:18.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-3924'
Jan  6 13:41:18.136: INFO: stderr: "No resources found.\n"
Jan  6 13:41:18.136: INFO: stdout: ""
Jan  6 13:41:18.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-3924 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  6 13:41:18.278: INFO: stderr: ""
Jan  6 13:41:18.279: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:41:18.279: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3924" for this suite.
Jan  6 13:41:40.330: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:41:40.431: INFO: namespace kubectl-3924 deletion completed in 22.126186044s

• [SLOW TEST:60.598 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:41:40.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  6 13:41:40.504: INFO: Creating deployment "test-recreate-deployment"
Jan  6 13:41:40.517: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan  6 13:41:40.583: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan  6 13:41:42.604: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan  6 13:41:42.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713914900, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713914900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713914900, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713914900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  6 13:41:44.613: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713914900, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713914900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713914900, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713914900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  6 13:41:46.627: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713914900, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713914900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713914900, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713914900, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-6df85df6b9\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  6 13:41:48.617: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan  6 13:41:48.631: INFO: Updating deployment test-recreate-deployment
Jan  6 13:41:48.631: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  6 13:41:48.938: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment,GenerateName:,Namespace:deployment-8767,SelfLink:/apis/apps/v1/namespaces/deployment-8767/deployments/test-recreate-deployment,UID:aa80393e-d19d-4cae-8beb-e650d6299ca4,ResourceVersion:19525508,Generation:2,CreationTimestamp:2020-01-06 13:41:40 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[{Available False 2020-01-06 13:41:48 +0000 UTC 2020-01-06 13:41:48 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.} {Progressing True 2020-01-06 13:41:48 +0000 UTC 2020-01-06 13:41:40 +0000 UTC ReplicaSetUpdated ReplicaSet "test-recreate-deployment-5c8c9cc69d" is progressing.}],ReadyReplicas:0,CollisionCount:nil,},}
Jan  6 13:41:48.946: INFO: New ReplicaSet "test-recreate-deployment-5c8c9cc69d" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d,GenerateName:,Namespace:deployment-8767,SelfLink:/apis/apps/v1/namespaces/deployment-8767/replicasets/test-recreate-deployment-5c8c9cc69d,UID:3fa70498-8042-465a-9414-c9ef41801d39,ResourceVersion:19525506,Generation:1,CreationTimestamp:2020-01-06 13:41:48 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment aa80393e-d19d-4cae-8beb-e650d6299ca4 0xc0023b26f7 0xc0023b26f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  6 13:41:48.946: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan  6 13:41:48.947: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-6df85df6b9,GenerateName:,Namespace:deployment-8767,SelfLink:/apis/apps/v1/namespaces/deployment-8767/replicasets/test-recreate-deployment-6df85df6b9,UID:cd08756f-671f-4c93-8711-43297a8e612a,ResourceVersion:19525496,Generation:2,CreationTimestamp:2020-01-06 13:41:40 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 1,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment test-recreate-deployment aa80393e-d19d-4cae-8beb-e650d6299ca4 0xc0023b27c7 0xc0023b27c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 6df85df6b9,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] [] [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  6 13:41:48.968: INFO: Pod "test-recreate-deployment-5c8c9cc69d-fqm7f" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-recreate-deployment-5c8c9cc69d-fqm7f,GenerateName:test-recreate-deployment-5c8c9cc69d-,Namespace:deployment-8767,SelfLink:/api/v1/namespaces/deployment-8767/pods/test-recreate-deployment-5c8c9cc69d-fqm7f,UID:4f6ebf60-3a8f-4bf8-9262-f9d873e1d6bc,ResourceVersion:19525503,Generation:0,CreationTimestamp:2020-01-06 13:41:48 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod-3,pod-template-hash: 5c8c9cc69d,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-recreate-deployment-5c8c9cc69d 3fa70498-8042-465a-9414-c9ef41801d39 0xc000bdc257 0xc000bdc258}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-8gw95 {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-8gw95,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] [] [] [] [] {map[] map[]} [{default-token-8gw95 true /var/run/secrets/kubernetes.io/serviceaccount }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists NoExecute 0xc000bdc2d0} {node.kubernetes.io/unreachable Exists NoExecute 0xc000bdc300}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:41:48 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:<nil>,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:41:48.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8767" for this suite.
Jan  6 13:41:55.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:41:55.363: INFO: namespace deployment-8767 deletion completed in 6.221243368s

• [SLOW TEST:14.932 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy 
  version v1 should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:41:55.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  6 13:41:55.584: INFO: (0) /api/v1/nodes/iruya-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 13.541806ms)
Jan  6 13:41:55.589: INFO: (1) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.498113ms)
Jan  6 13:41:55.595: INFO: (2) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.804947ms)
Jan  6 13:41:55.599: INFO: (3) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.093492ms)
Jan  6 13:41:55.604: INFO: (4) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.570041ms)
Jan  6 13:41:55.642: INFO: (5) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 37.691264ms)
Jan  6 13:41:55.649: INFO: (6) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.899517ms)
Jan  6 13:41:55.655: INFO: (7) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.780082ms)
Jan  6 13:41:55.665: INFO: (8) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.843821ms)
Jan  6 13:41:55.671: INFO: (9) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.49042ms)
Jan  6 13:41:55.680: INFO: (10) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.210484ms)
Jan  6 13:41:55.687: INFO: (11) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.585392ms)
Jan  6 13:41:55.694: INFO: (12) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.912101ms)
Jan  6 13:41:55.700: INFO: (13) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.305224ms)
Jan  6 13:41:55.707: INFO: (14) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.95778ms)
Jan  6 13:41:55.715: INFO: (15) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.212284ms)
Jan  6 13:41:55.723: INFO: (16) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 7.539135ms)
Jan  6 13:41:55.729: INFO: (17) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.057343ms)
Jan  6 13:41:55.734: INFO: (18) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.119428ms)
Jan  6 13:41:55.738: INFO: (19) /api/v1/nodes/iruya-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.05429ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:41:55.738: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-4210" for this suite.
Jan  6 13:42:01.779: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:42:01.924: INFO: namespace proxy-4210 deletion completed in 6.181552044s

• [SLOW TEST:6.560 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
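
For readers replaying this spec by hand: each of the twenty timed requests above is equivalent to one call against the node's logs proxy subresource. A minimal sketch, using the node name from this run (any schedulable node works):

# Fetch the node's log directory listing through the apiserver proxy
# subresource; the spec repeats this 20 times and records each latency.
kubectl get --raw "/api/v1/nodes/iruya-node/proxy/logs/"
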
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:42:01.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  6 13:42:09.311: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:42:09.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-7739" for this suite.
Jan  6 13:42:15.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:42:15.611: INFO: namespace container-runtime-7739 deletion completed in 6.220886931s

• [SLOW TEST:13.687 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
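
A minimal sketch of the behaviour exercised above, assuming a generic busybox image (the conformance spec uses its own test image and expects the message "DONE"): with TerminationMessagePolicy FallbackToLogsOnError, a failed container that wrote nothing to /dev/termination-log gets its termination message from the tail of its log output.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["/bin/sh", "-c", "echo DONE; exit 1"]
    terminationMessagePolicy: FallbackToLogsOnError
EOF

# Once the container reaches Failed, the recorded message should be "DONE":
kubectl get pod termination-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'
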
SS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:42:15.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-dc84979e-7d35-4d43-bd6d-629092b3bebb
STEP: Creating a pod to test consume secrets
Jan  6 13:42:15.800: INFO: Waiting up to 5m0s for pod "pod-secrets-73c18ae9-b14e-4552-a5be-159afb43c08c" in namespace "secrets-6450" to be "success or failure"
Jan  6 13:42:15.806: INFO: Pod "pod-secrets-73c18ae9-b14e-4552-a5be-159afb43c08c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.198992ms
Jan  6 13:42:17.819: INFO: Pod "pod-secrets-73c18ae9-b14e-4552-a5be-159afb43c08c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019202285s
Jan  6 13:42:19.827: INFO: Pod "pod-secrets-73c18ae9-b14e-4552-a5be-159afb43c08c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026972397s
Jan  6 13:42:21.841: INFO: Pod "pod-secrets-73c18ae9-b14e-4552-a5be-159afb43c08c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04145693s
Jan  6 13:42:23.878: INFO: Pod "pod-secrets-73c18ae9-b14e-4552-a5be-159afb43c08c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078115198s
STEP: Saw pod success
Jan  6 13:42:23.878: INFO: Pod "pod-secrets-73c18ae9-b14e-4552-a5be-159afb43c08c" satisfied condition "success or failure"
Jan  6 13:42:23.902: INFO: Trying to get logs from node iruya-node pod pod-secrets-73c18ae9-b14e-4552-a5be-159afb43c08c container secret-volume-test: 
STEP: delete the pod
Jan  6 13:42:24.106: INFO: Waiting for pod pod-secrets-73c18ae9-b14e-4552-a5be-159afb43c08c to disappear
Jan  6 13:42:24.122: INFO: Pod pod-secrets-73c18ae9-b14e-4552-a5be-159afb43c08c no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:42:24.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6450" for this suite.
Jan  6 13:42:30.290: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:42:30.466: INFO: namespace secrets-6450 deletion completed in 6.334004866s

• [SLOW TEST:14.855 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
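
The "success or failure" condition above boils down to: a pod mounts the secret as a volume, prints its contents, and exits 0. A sketch with illustrative names and values:

kubectl create secret generic secret-test --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
EOF

kubectl logs pod-secrets-demo       # prints: value-1
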
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:42:30.467: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-3670
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  6 13:42:30.560: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  6 13:43:00.835: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-3670 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 13:43:00.835: INFO: >>> kubeConfig: /root/.kube/config
I0106 13:43:01.002370       8 log.go:172] (0xc001e3e4d0) (0xc0024610e0) Create stream
I0106 13:43:01.002714       8 log.go:172] (0xc001e3e4d0) (0xc0024610e0) Stream added, broadcasting: 1
I0106 13:43:01.015518       8 log.go:172] (0xc001e3e4d0) Reply frame received for 1
I0106 13:43:01.015609       8 log.go:172] (0xc001e3e4d0) (0xc001bc9360) Create stream
I0106 13:43:01.015624       8 log.go:172] (0xc001e3e4d0) (0xc001bc9360) Stream added, broadcasting: 3
I0106 13:43:01.017525       8 log.go:172] (0xc001e3e4d0) Reply frame received for 3
I0106 13:43:01.017569       8 log.go:172] (0xc001e3e4d0) (0xc0022c8dc0) Create stream
I0106 13:43:01.017586       8 log.go:172] (0xc001e3e4d0) (0xc0022c8dc0) Stream added, broadcasting: 5
I0106 13:43:01.018732       8 log.go:172] (0xc001e3e4d0) Reply frame received for 5
I0106 13:43:01.457181       8 log.go:172] (0xc001e3e4d0) Data frame received for 3
I0106 13:43:01.457303       8 log.go:172] (0xc001bc9360) (3) Data frame handling
I0106 13:43:01.457333       8 log.go:172] (0xc001bc9360) (3) Data frame sent
I0106 13:43:01.705552       8 log.go:172] (0xc001e3e4d0) (0xc001bc9360) Stream removed, broadcasting: 3
I0106 13:43:01.705683       8 log.go:172] (0xc001e3e4d0) Data frame received for 1
I0106 13:43:01.705695       8 log.go:172] (0xc0024610e0) (1) Data frame handling
I0106 13:43:01.705722       8 log.go:172] (0xc0024610e0) (1) Data frame sent
I0106 13:43:01.705819       8 log.go:172] (0xc001e3e4d0) (0xc0024610e0) Stream removed, broadcasting: 1
I0106 13:43:01.705887       8 log.go:172] (0xc001e3e4d0) (0xc0022c8dc0) Stream removed, broadcasting: 5
I0106 13:43:01.705979       8 log.go:172] (0xc001e3e4d0) Go away received
I0106 13:43:01.706165       8 log.go:172] (0xc001e3e4d0) (0xc0024610e0) Stream removed, broadcasting: 1
I0106 13:43:01.706186       8 log.go:172] (0xc001e3e4d0) (0xc001bc9360) Stream removed, broadcasting: 3
I0106 13:43:01.706216       8 log.go:172] (0xc001e3e4d0) (0xc0022c8dc0) Stream removed, broadcasting: 5
Jan  6 13:43:01.706: INFO: Waiting for endpoints: map[]
Jan  6 13:43:01.722: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-3670 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 13:43:01.723: INFO: >>> kubeConfig: /root/.kube/config
I0106 13:43:01.824277       8 log.go:172] (0xc001e3edc0) (0xc0024612c0) Create stream
I0106 13:43:01.824330       8 log.go:172] (0xc001e3edc0) (0xc0024612c0) Stream added, broadcasting: 1
I0106 13:43:01.834905       8 log.go:172] (0xc001e3edc0) Reply frame received for 1
I0106 13:43:01.834958       8 log.go:172] (0xc001e3edc0) (0xc001bc9900) Create stream
I0106 13:43:01.834968       8 log.go:172] (0xc001e3edc0) (0xc001bc9900) Stream added, broadcasting: 3
I0106 13:43:01.836647       8 log.go:172] (0xc001e3edc0) Reply frame received for 3
I0106 13:43:01.836681       8 log.go:172] (0xc001e3edc0) (0xc003037180) Create stream
I0106 13:43:01.836696       8 log.go:172] (0xc001e3edc0) (0xc003037180) Stream added, broadcasting: 5
I0106 13:43:01.837848       8 log.go:172] (0xc001e3edc0) Reply frame received for 5
I0106 13:43:02.060179       8 log.go:172] (0xc001e3edc0) Data frame received for 3
I0106 13:43:02.060282       8 log.go:172] (0xc001bc9900) (3) Data frame handling
I0106 13:43:02.060335       8 log.go:172] (0xc001bc9900) (3) Data frame sent
I0106 13:43:02.247526       8 log.go:172] (0xc001e3edc0) (0xc001bc9900) Stream removed, broadcasting: 3
I0106 13:43:02.247914       8 log.go:172] (0xc001e3edc0) Data frame received for 1
I0106 13:43:02.248031       8 log.go:172] (0xc001e3edc0) (0xc003037180) Stream removed, broadcasting: 5
I0106 13:43:02.248111       8 log.go:172] (0xc0024612c0) (1) Data frame handling
I0106 13:43:02.248160       8 log.go:172] (0xc0024612c0) (1) Data frame sent
I0106 13:43:02.248171       8 log.go:172] (0xc001e3edc0) (0xc0024612c0) Stream removed, broadcasting: 1
I0106 13:43:02.248208       8 log.go:172] (0xc001e3edc0) Go away received
I0106 13:43:02.248777       8 log.go:172] (0xc001e3edc0) (0xc0024612c0) Stream removed, broadcasting: 1
I0106 13:43:02.248792       8 log.go:172] (0xc001e3edc0) (0xc001bc9900) Stream removed, broadcasting: 3
I0106 13:43:02.248804       8 log.go:172] (0xc001e3edc0) (0xc003037180) Stream removed, broadcasting: 5
Jan  6 13:43:02.248: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:43:02.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-3670" for this suite.
Jan  6 13:43:18.284: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:43:18.451: INFO: namespace pod-network-test-3670 deletion completed in 16.191618139s

• [SLOW TEST:47.984 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
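
The ExecWithOptions lines above amount to the following invocation: from the helper pod, the framework asks the test container's HTTP "dial" endpoint to send a UDP probe to a peer pod and report which hostname answered. The pod IPs are the ones observed in this run and will differ elsewhere.

kubectl exec -n pod-network-test-3670 host-test-container-pod -c hostexec -- \
  /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'"
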
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:43:18.451: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with configMap that has name projected-configmap-test-upd-6a416f88-62d3-4123-b793-4c75ee5422b0
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-6a416f88-62d3-4123-b793-4c75ee5422b0
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:43:28.711: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5710" for this suite.
Jan  6 13:43:50.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:43:50.888: INFO: namespace projected-5710 deletion completed in 22.172178914s

• [SLOW TEST:32.437 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
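
A sketch of the update-propagation behaviour checked above, with illustrative names and values: a projected configMap volume is refreshed in place by the kubelet after the ConfigMap object changes, without restarting the pod.

kubectl create configmap projected-test --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo              # hypothetical name
spec:
  containers:
  - name: watcher
    image: busybox
    command: ["/bin/sh", "-c", "while true; do cat /etc/projected/data-1; sleep 5; done"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      sources:
      - configMap:
          name: projected-test
EOF

# Replace the ConfigMap; within the kubelet sync period the mounted file
# flips from value-1 to value-2.
kubectl create configmap projected-test --from-literal=data-1=value-2 \
  --dry-run -o yaml | kubectl replace -f -
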
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:43:50.889: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  6 13:43:51.020: INFO: Waiting up to 5m0s for pod "pod-ca5384d5-6e52-4b6a-a8bf-358082f98431" in namespace "emptydir-9193" to be "success or failure"
Jan  6 13:43:51.027: INFO: Pod "pod-ca5384d5-6e52-4b6a-a8bf-358082f98431": Phase="Pending", Reason="", readiness=false. Elapsed: 6.583295ms
Jan  6 13:43:53.041: INFO: Pod "pod-ca5384d5-6e52-4b6a-a8bf-358082f98431": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020827091s
Jan  6 13:43:55.063: INFO: Pod "pod-ca5384d5-6e52-4b6a-a8bf-358082f98431": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042614121s
Jan  6 13:43:57.071: INFO: Pod "pod-ca5384d5-6e52-4b6a-a8bf-358082f98431": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051236034s
Jan  6 13:43:59.081: INFO: Pod "pod-ca5384d5-6e52-4b6a-a8bf-358082f98431": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.060322132s
STEP: Saw pod success
Jan  6 13:43:59.081: INFO: Pod "pod-ca5384d5-6e52-4b6a-a8bf-358082f98431" satisfied condition "success or failure"
Jan  6 13:43:59.084: INFO: Trying to get logs from node iruya-node pod pod-ca5384d5-6e52-4b6a-a8bf-358082f98431 container test-container: 
STEP: delete the pod
Jan  6 13:43:59.130: INFO: Waiting for pod pod-ca5384d5-6e52-4b6a-a8bf-358082f98431 to disappear
Jan  6 13:43:59.136: INFO: Pod pod-ca5384d5-6e52-4b6a-a8bf-358082f98431 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:43:59.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9193" for this suite.
Jan  6 13:44:05.163: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:44:05.292: INFO: namespace emptydir-9193 deletion completed in 6.152099921s

• [SLOW TEST:14.404 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
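
A sketch of the (non-root,0666,tmpfs) case, with an illustrative UID and paths: a memory-backed emptyDir is mounted into a pod running as a non-root user, and a file is created in it at mode 0666.

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo               # hypothetical name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                 # non-root
  containers:
  - name: test-container
    image: busybox
    command: ["/bin/sh", "-c", "touch /mnt/test/file && chmod 0666 /mnt/test/file && ls -l /mnt/test"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/test
  volumes:
  - name: scratch
    emptyDir:
      medium: Memory                # tmpfs-backed, as the spec name says
EOF
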
SSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:44:05.293: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  6 13:44:05.393: INFO: Waiting up to 5m0s for pod "downwardapi-volume-5ef4bc64-2538-4291-a051-7654c02c3924" in namespace "downward-api-4946" to be "success or failure"
Jan  6 13:44:05.463: INFO: Pod "downwardapi-volume-5ef4bc64-2538-4291-a051-7654c02c3924": Phase="Pending", Reason="", readiness=false. Elapsed: 70.388406ms
Jan  6 13:44:07.472: INFO: Pod "downwardapi-volume-5ef4bc64-2538-4291-a051-7654c02c3924": Phase="Pending", Reason="", readiness=false. Elapsed: 2.079468384s
Jan  6 13:44:09.494: INFO: Pod "downwardapi-volume-5ef4bc64-2538-4291-a051-7654c02c3924": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101363381s
Jan  6 13:44:11.505: INFO: Pod "downwardapi-volume-5ef4bc64-2538-4291-a051-7654c02c3924": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112372513s
Jan  6 13:44:13.517: INFO: Pod "downwardapi-volume-5ef4bc64-2538-4291-a051-7654c02c3924": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.124446587s
STEP: Saw pod success
Jan  6 13:44:13.517: INFO: Pod "downwardapi-volume-5ef4bc64-2538-4291-a051-7654c02c3924" satisfied condition "success or failure"
Jan  6 13:44:13.521: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-5ef4bc64-2538-4291-a051-7654c02c3924 container client-container: 
STEP: delete the pod
Jan  6 13:44:13.642: INFO: Waiting for pod downwardapi-volume-5ef4bc64-2538-4291-a051-7654c02c3924 to disappear
Jan  6 13:44:13.649: INFO: Pod downwardapi-volume-5ef4bc64-2538-4291-a051-7654c02c3924 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:44:13.649: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4946" for this suite.
Jan  6 13:44:19.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:44:19.817: INFO: namespace downward-api-4946 deletion completed in 6.153911954s

• [SLOW TEST:14.524 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
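
A sketch of the downward API volume plugin exposing a container's CPU request as a file; pod and container names here are illustrative, and the divisor turns a 250m request into the string "250".

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["/bin/sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m
EOF
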
SSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:44:19.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override arguments
Jan  6 13:44:19.947: INFO: Waiting up to 5m0s for pod "client-containers-54305a0a-f656-4c84-98bf-6ed4716f380f" in namespace "containers-333" to be "success or failure"
Jan  6 13:44:19.953: INFO: Pod "client-containers-54305a0a-f656-4c84-98bf-6ed4716f380f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.822331ms
Jan  6 13:44:21.964: INFO: Pod "client-containers-54305a0a-f656-4c84-98bf-6ed4716f380f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016060616s
Jan  6 13:44:23.980: INFO: Pod "client-containers-54305a0a-f656-4c84-98bf-6ed4716f380f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032765258s
Jan  6 13:44:25.993: INFO: Pod "client-containers-54305a0a-f656-4c84-98bf-6ed4716f380f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045284686s
Jan  6 13:44:28.000: INFO: Pod "client-containers-54305a0a-f656-4c84-98bf-6ed4716f380f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052996654s
STEP: Saw pod success
Jan  6 13:44:28.001: INFO: Pod "client-containers-54305a0a-f656-4c84-98bf-6ed4716f380f" satisfied condition "success or failure"
Jan  6 13:44:28.005: INFO: Trying to get logs from node iruya-node pod client-containers-54305a0a-f656-4c84-98bf-6ed4716f380f container test-container: 
STEP: delete the pod
Jan  6 13:44:28.049: INFO: Waiting for pod client-containers-54305a0a-f656-4c84-98bf-6ed4716f380f to disappear
Jan  6 13:44:28.053: INFO: Pod client-containers-54305a0a-f656-4c84-98bf-6ed4716f380f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:44:28.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-333" for this suite.
Jan  6 13:44:34.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:44:34.189: INFO: namespace containers-333 deletion completed in 6.130588297s

• [SLOW TEST:14.372 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
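
The override-arguments test reduces to a single field: setting .args on a container replaces the image's default CMD while leaving its ENTRYPOINT alone. A sketch under that assumption (image and argument values are illustrative, not from the test source):

package main

import corev1 "k8s.io/api/core/v1"

// overrideArgsContainer: .args overrides the image CMD; the image
// ENTRYPOINT still runs and receives these arguments.
func overrideArgsContainer() corev1.Container {
    return corev1.Container{
        Name:  "test-container",
        Image: "docker.io/library/busybox:1.29", // assumed test image
        Args:  []string{"echo", "override", "arguments"},
    }
}
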
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:44:34.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-0c566bdb-df64-493c-9be2-2dd9ae392fd8
STEP: Creating a pod to test consume secrets
Jan  6 13:44:34.340: INFO: Waiting up to 5m0s for pod "pod-secrets-b7faa15e-7667-4d7e-9ebc-b48e368bc519" in namespace "secrets-8917" to be "success or failure"
Jan  6 13:44:34.400: INFO: Pod "pod-secrets-b7faa15e-7667-4d7e-9ebc-b48e368bc519": Phase="Pending", Reason="", readiness=false. Elapsed: 59.279163ms
Jan  6 13:44:36.426: INFO: Pod "pod-secrets-b7faa15e-7667-4d7e-9ebc-b48e368bc519": Phase="Pending", Reason="", readiness=false. Elapsed: 2.085574379s
Jan  6 13:44:38.434: INFO: Pod "pod-secrets-b7faa15e-7667-4d7e-9ebc-b48e368bc519": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093934588s
Jan  6 13:44:40.444: INFO: Pod "pod-secrets-b7faa15e-7667-4d7e-9ebc-b48e368bc519": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103838536s
Jan  6 13:44:42.456: INFO: Pod "pod-secrets-b7faa15e-7667-4d7e-9ebc-b48e368bc519": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.115835908s
STEP: Saw pod success
Jan  6 13:44:42.456: INFO: Pod "pod-secrets-b7faa15e-7667-4d7e-9ebc-b48e368bc519" satisfied condition "success or failure"
Jan  6 13:44:42.461: INFO: Trying to get logs from node iruya-node pod pod-secrets-b7faa15e-7667-4d7e-9ebc-b48e368bc519 container secret-volume-test: 
STEP: delete the pod
Jan  6 13:44:42.669: INFO: Waiting for pod pod-secrets-b7faa15e-7667-4d7e-9ebc-b48e368bc519 to disappear
Jan  6 13:44:42.682: INFO: Pod pod-secrets-b7faa15e-7667-4d7e-9ebc-b48e368bc519 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:44:42.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8917" for this suite.
Jan  6 13:44:48.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:44:48.887: INFO: namespace secrets-8917 deletion completed in 6.193911033s
STEP: Destroying namespace "secret-namespace-2324" for this suite.
Jan  6 13:44:54.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:44:55.034: INFO: namespace secret-namespace-2324 deletion completed in 6.146915206s

• [SLOW TEST:20.845 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
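
This secret test creates two same-named secrets in two namespaces (hence the two namespace teardowns above, secrets-8917 and secret-namespace-2324) and verifies the pod mounts only the one from its own namespace. The mount shape, sketched with illustrative names:

package main

import corev1 "k8s.io/api/core/v1"

// secretVolume mounts a named secret from the pod's own namespace; a
// same-named secret elsewhere must not interfere with resolution.
func secretVolume(secretName string) (corev1.Volume, corev1.VolumeMount) {
    v := corev1.Volume{
        Name: "secret-volume",
        VolumeSource: corev1.VolumeSource{
            Secret: &corev1.SecretVolumeSource{SecretName: secretName},
        },
    }
    m := corev1.VolumeMount{
        Name:      "secret-volume",
        MountPath: "/etc/secret-volume", // illustrative path
        ReadOnly:  true,
    }
    return v, m
}
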
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:44:55.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-7f8c00ac-413b-4a8f-a214-76fe2ea70414
STEP: Creating a pod to test consume secrets
Jan  6 13:44:55.233: INFO: Waiting up to 5m0s for pod "pod-secrets-08fac6f0-5575-4b0e-826d-5ef5f7428a44" in namespace "secrets-2467" to be "success or failure"
Jan  6 13:44:55.262: INFO: Pod "pod-secrets-08fac6f0-5575-4b0e-826d-5ef5f7428a44": Phase="Pending", Reason="", readiness=false. Elapsed: 29.149404ms
Jan  6 13:44:57.275: INFO: Pod "pod-secrets-08fac6f0-5575-4b0e-826d-5ef5f7428a44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041417352s
Jan  6 13:44:59.295: INFO: Pod "pod-secrets-08fac6f0-5575-4b0e-826d-5ef5f7428a44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061374044s
Jan  6 13:45:01.306: INFO: Pod "pod-secrets-08fac6f0-5575-4b0e-826d-5ef5f7428a44": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073033497s
Jan  6 13:45:03.317: INFO: Pod "pod-secrets-08fac6f0-5575-4b0e-826d-5ef5f7428a44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08378688s
STEP: Saw pod success
Jan  6 13:45:03.317: INFO: Pod "pod-secrets-08fac6f0-5575-4b0e-826d-5ef5f7428a44" satisfied condition "success or failure"
Jan  6 13:45:03.319: INFO: Trying to get logs from node iruya-node pod pod-secrets-08fac6f0-5575-4b0e-826d-5ef5f7428a44 container secret-volume-test: 
STEP: delete the pod
Jan  6 13:45:03.391: INFO: Waiting for pod pod-secrets-08fac6f0-5575-4b0e-826d-5ef5f7428a44 to disappear
Jan  6 13:45:03.414: INFO: Pod pod-secrets-08fac6f0-5575-4b0e-826d-5ef5f7428a44 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:45:03.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2467" for this suite.
Jan  6 13:45:09.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:45:09.753: INFO: namespace secrets-2467 deletion completed in 6.332596384s

• [SLOW TEST:14.718 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
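
The defaultMode variant adds one knob to the same secret volume: every file projected from the secret is created with the given mode, which the test then reads back inside the container. A sketch (the mode value is an assumption, e.g. 0400):

package main

import corev1 "k8s.io/api/core/v1"

// secretVolumeWithMode: DefaultMode applies to all files projected
// from the secret unless an item overrides it.
func secretVolumeWithMode(secretName string, mode int32) corev1.Volume {
    return corev1.Volume{
        Name: "secret-volume",
        VolumeSource: corev1.VolumeSource{
            Secret: &corev1.SecretVolumeSource{
                SecretName:  secretName,
                DefaultMode: &mode, // e.g. 0400
            },
        },
    }
}
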
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:45:09.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating api versions
Jan  6 13:45:09.904: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan  6 13:45:10.163: INFO: stderr: ""
Jan  6 13:45:10.163: INFO: stdout: "admissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\napps/v1beta1\napps/v1beta2\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:45:10.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8797" for this suite.
Jan  6 13:45:16.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:45:16.422: INFO: namespace kubectl-8797 deletion completed in 6.242348627s

• [SLOW TEST:6.669 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
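
The kubectl invocation above is equivalent to asking the discovery API whether the core group serves "v1". A sketch of the same check via client-go (recent signatures; the v1.15-era client used in this run may differ slightly):

package main

import (
    "fmt"

    "k8s.io/client-go/discovery"
    "k8s.io/client-go/tools/clientcmd"
)

// hasV1 lists the served API groups/versions and looks for the core
// "v1" group-version, mirroring `kubectl api-versions | grep ^v1$`.
func hasV1(kubeconfig string) (bool, error) {
    cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    if err != nil {
        return false, err
    }
    dc, err := discovery.NewDiscoveryClientForConfig(cfg)
    if err != nil {
        return false, err
    }
    groups, err := dc.ServerGroups()
    if err != nil {
        return false, err
    }
    for _, g := range groups.Groups {
        for _, v := range g.Versions {
            if v.GroupVersion == "v1" {
                return true, nil
            }
        }
    }
    return false, fmt.Errorf("v1 not advertised by the server")
}
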
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:45:16.423: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  6 13:45:16.650: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"bf2e33e7-5bfa-4668-8c14-5b96f0b94f0d", Controller:(*bool)(0xc002523572), BlockOwnerDeletion:(*bool)(0xc002523573)}}
Jan  6 13:45:16.676: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"50c35fcd-8be8-4676-a662-4c4b7d97276c", Controller:(*bool)(0xc00252372a), BlockOwnerDeletion:(*bool)(0xc00252372b)}}
Jan  6 13:45:16.686: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"6c75ff51-b2ed-4675-ae76-beefd1805c10", Controller:(*bool)(0xc00281e78a), BlockOwnerDeletion:(*bool)(0xc00281e78b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:45:21.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3916" for this suite.
Jan  6 13:45:27.840: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:45:27.999: INFO: namespace gc-3916 deletion completed in 6.22105828s

• [SLOW TEST:11.575 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
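
The three log lines above show the cycle the test builds: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2. Wiring one such edge is just an ownerReferences append; the garbage collector must break the cycle rather than deadlock. Illustrative sketch (helper name is not from the test source):

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// addOwner marks `owner` as the controller of `child`, matching the
// Controller/BlockOwnerDeletion pointers visible in the log above.
func addOwner(child, owner *corev1.Pod) {
    ctrl, block := true, true
    child.OwnerReferences = append(child.OwnerReferences, metav1.OwnerReference{
        APIVersion:         "v1",
        Kind:               "Pod",
        Name:               owner.Name,
        UID:                owner.UID,
        Controller:         &ctrl,
        BlockOwnerDeletion: &block,
    })
}
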
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:45:27.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override command
Jan  6 13:45:28.133: INFO: Waiting up to 5m0s for pod "client-containers-a640021c-e20d-4556-b217-e764e285fcfa" in namespace "containers-2955" to be "success or failure"
Jan  6 13:45:28.155: INFO: Pod "client-containers-a640021c-e20d-4556-b217-e764e285fcfa": Phase="Pending", Reason="", readiness=false. Elapsed: 21.795327ms
Jan  6 13:45:30.167: INFO: Pod "client-containers-a640021c-e20d-4556-b217-e764e285fcfa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03348211s
Jan  6 13:45:32.178: INFO: Pod "client-containers-a640021c-e20d-4556-b217-e764e285fcfa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044717168s
Jan  6 13:45:34.193: INFO: Pod "client-containers-a640021c-e20d-4556-b217-e764e285fcfa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059143652s
Jan  6 13:45:36.207: INFO: Pod "client-containers-a640021c-e20d-4556-b217-e764e285fcfa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073918727s
STEP: Saw pod success
Jan  6 13:45:36.208: INFO: Pod "client-containers-a640021c-e20d-4556-b217-e764e285fcfa" satisfied condition "success or failure"
Jan  6 13:45:36.236: INFO: Trying to get logs from node iruya-node pod client-containers-a640021c-e20d-4556-b217-e764e285fcfa container test-container: 
STEP: delete the pod
Jan  6 13:45:36.290: INFO: Waiting for pod client-containers-a640021c-e20d-4556-b217-e764e285fcfa to disappear
Jan  6 13:45:36.301: INFO: Pod client-containers-a640021c-e20d-4556-b217-e764e285fcfa no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:45:36.301: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2955" for this suite.
Jan  6 13:45:42.337: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:45:42.441: INFO: namespace containers-2955 deletion completed in 6.134619536s

• [SLOW TEST:14.442 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
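
The counterpart of the earlier args test: setting .command replaces the image's ENTRYPOINT, and because .args is left unset the image CMD is ignored as well. Sketch with illustrative values:

package main

import corev1 "k8s.io/api/core/v1"

// overrideCommandContainer: .command overrides the image ENTRYPOINT;
// with no .args set, the image CMD is dropped entirely.
func overrideCommandContainer() corev1.Container {
    return corev1.Container{
        Name:    "test-container",
        Image:   "docker.io/library/busybox:1.29", // assumed test image
        Command: []string{"/bin/echo", "override", "command"},
    }
}
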
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:45:42.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-zqbh
STEP: Creating a pod to test atomic-volume-subpath
Jan  6 13:45:42.670: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zqbh" in namespace "subpath-6852" to be "success or failure"
Jan  6 13:45:42.687: INFO: Pod "pod-subpath-test-configmap-zqbh": Phase="Pending", Reason="", readiness=false. Elapsed: 16.530077ms
Jan  6 13:45:44.695: INFO: Pod "pod-subpath-test-configmap-zqbh": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02465312s
Jan  6 13:45:46.743: INFO: Pod "pod-subpath-test-configmap-zqbh": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072658824s
Jan  6 13:45:48.750: INFO: Pod "pod-subpath-test-configmap-zqbh": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079952251s
Jan  6 13:45:50.757: INFO: Pod "pod-subpath-test-configmap-zqbh": Phase="Running", Reason="", readiness=true. Elapsed: 8.087024721s
Jan  6 13:45:52.771: INFO: Pod "pod-subpath-test-configmap-zqbh": Phase="Running", Reason="", readiness=true. Elapsed: 10.100853821s
Jan  6 13:45:54.779: INFO: Pod "pod-subpath-test-configmap-zqbh": Phase="Running", Reason="", readiness=true. Elapsed: 12.108789224s
Jan  6 13:45:56.790: INFO: Pod "pod-subpath-test-configmap-zqbh": Phase="Running", Reason="", readiness=true. Elapsed: 14.120078319s
Jan  6 13:45:58.807: INFO: Pod "pod-subpath-test-configmap-zqbh": Phase="Running", Reason="", readiness=true. Elapsed: 16.136727749s
Jan  6 13:46:00.818: INFO: Pod "pod-subpath-test-configmap-zqbh": Phase="Running", Reason="", readiness=true. Elapsed: 18.147592629s
Jan  6 13:46:02.830: INFO: Pod "pod-subpath-test-configmap-zqbh": Phase="Running", Reason="", readiness=true. Elapsed: 20.159507907s
Jan  6 13:46:04.867: INFO: Pod "pod-subpath-test-configmap-zqbh": Phase="Running", Reason="", readiness=true. Elapsed: 22.196715888s
Jan  6 13:46:06.878: INFO: Pod "pod-subpath-test-configmap-zqbh": Phase="Running", Reason="", readiness=true. Elapsed: 24.207478816s
Jan  6 13:46:08.889: INFO: Pod "pod-subpath-test-configmap-zqbh": Phase="Running", Reason="", readiness=true. Elapsed: 26.218579576s
Jan  6 13:46:10.901: INFO: Pod "pod-subpath-test-configmap-zqbh": Phase="Running", Reason="", readiness=true. Elapsed: 28.230535773s
Jan  6 13:46:12.927: INFO: Pod "pod-subpath-test-configmap-zqbh": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.256784226s
STEP: Saw pod success
Jan  6 13:46:12.927: INFO: Pod "pod-subpath-test-configmap-zqbh" satisfied condition "success or failure"
Jan  6 13:46:12.942: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-zqbh container test-container-subpath-configmap-zqbh: 
STEP: delete the pod
Jan  6 13:46:13.113: INFO: Waiting for pod pod-subpath-test-configmap-zqbh to disappear
Jan  6 13:46:13.120: INFO: Pod pod-subpath-test-configmap-zqbh no longer exists
STEP: Deleting pod pod-subpath-test-configmap-zqbh
Jan  6 13:46:13.120: INFO: Deleting pod "pod-subpath-test-configmap-zqbh" in namespace "subpath-6852"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:46:13.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6852" for this suite.
Jan  6 13:46:19.188: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:46:19.351: INFO: namespace subpath-6852 deletion completed in 6.219088838s

• [SLOW TEST:36.909 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
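
The atomic-writer subpath test mounts a single configMap key, via subPath, over a path where the image already has a file; the long Running phase above is presumably the container repeatedly validating the projected content before exiting. The mount shape, with illustrative names and an assumed target path:

package main

import corev1 "k8s.io/api/core/v1"

// subPathMount mounts one key of a configMap onto an existing file
// path; the kubelet's atomic writer must still expose the content.
func subPathMount() (corev1.Volume, corev1.VolumeMount) {
    v := corev1.Volume{
        Name: "configmap-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
            },
        },
    }
    m := corev1.VolumeMount{
        Name:      "configmap-volume",
        MountPath: "/etc/resolv.conf", // assumed: a file that already exists in the image
        SubPath:   "configmap-key",    // illustrative key name
    }
    return v, m
}
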
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:46:19.352: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  6 13:46:30.812: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:46:30.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-740" for this suite.
Jan  6 13:46:36.956: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:46:37.059: INFO: namespace container-runtime-740 deletion completed in 6.150098211s

• [SLOW TEST:17.707 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
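
FallbackToLogsOnError only consults container logs when the container fails; since this pod succeeds, the expected termination message is empty, which is what the "Expected: &{}" assertion above checks. The relevant container fields, sketched with illustrative values:

package main

import corev1 "k8s.io/api/core/v1"

// terminationContainer exits successfully without writing to the
// termination-log path, so the reported message stays empty even
// under FallbackToLogsOnError.
func terminationContainer() corev1.Container {
    return corev1.Container{
        Name:                     "termination-message-container",
        Image:                    "docker.io/library/busybox:1.29", // assumed
        Command:                  []string{"/bin/true"},
        TerminationMessagePath:   "/dev/termination-log",
        TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
    }
}
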
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:46:37.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3443.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3443.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3443.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3443.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3443.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3443.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3443.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3443.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3443.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3443.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3443.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3443.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3443.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 202.130.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.130.202_udp@PTR;check="$$(dig +tcp +noall +answer +search 202.130.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.130.202_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3443.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3443.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3443.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3443.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3443.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3443.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3443.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3443.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3443.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3443.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3443.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3443.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3443.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 202.130.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.130.202_udp@PTR;check="$$(dig +tcp +noall +answer +search 202.130.109.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.109.130.202_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  6 13:46:51.427: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3443.svc.cluster.local from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.437: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3443.svc.cluster.local from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.451: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3443.svc.cluster.local from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.457: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-3443.svc.cluster.local from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.463: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-3443.svc.cluster.local from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.473: INFO: Unable to read wheezy_udp@PodARecord from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.478: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.483: INFO: Unable to read 10.109.130.202_udp@PTR from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.487: INFO: Unable to read 10.109.130.202_tcp@PTR from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.491: INFO: Unable to read jessie_udp@dns-test-service.dns-3443.svc.cluster.local from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.495: INFO: Unable to read jessie_tcp@dns-test-service.dns-3443.svc.cluster.local from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.510: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3443.svc.cluster.local from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.515: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3443.svc.cluster.local from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.519: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-3443.svc.cluster.local from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.524: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-3443.svc.cluster.local from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.530: INFO: Unable to read jessie_udp@PodARecord from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.534: INFO: Unable to read jessie_tcp@PodARecord from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.539: INFO: Unable to read 10.109.130.202_udp@PTR from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.547: INFO: Unable to read 10.109.130.202_tcp@PTR from pod dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc: the server could not find the requested resource (get pods dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc)
Jan  6 13:46:51.547: INFO: Lookups using dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc failed for: [wheezy_tcp@dns-test-service.dns-3443.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3443.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3443.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-3443.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-3443.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.109.130.202_udp@PTR 10.109.130.202_tcp@PTR jessie_udp@dns-test-service.dns-3443.svc.cluster.local jessie_tcp@dns-test-service.dns-3443.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3443.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3443.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-3443.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-3443.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.109.130.202_udp@PTR 10.109.130.202_tcp@PTR]

Jan  6 13:46:56.789: INFO: DNS probes using dns-3443/dns-test-8248b52f-5d9d-4602-98c1-d6e757ec73bc succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:46:57.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3443" for this suite.
Jan  6 13:47:03.295: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:47:03.513: INFO: namespace dns-3443 deletion completed in 6.321139429s

• [SLOW TEST:26.453 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
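
The dig loops above probe A, SRV, PTR, and pod A records for two services. The headless one is what makes dns-test-service resolve to per-pod A records and gives the _http._tcp SRV lookups a named port to target. A rough sketch of such a service (the selector and port are assumptions; the actual spec is not visible in the log):

package main

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// headlessService: ClusterIP "None" yields per-pod A records, and the
// named TCP port backs the _http._tcp SRV records the probes query.
func headlessService(ns string) *corev1.Service {
    return &corev1.Service{
        ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service", Namespace: ns},
        Spec: corev1.ServiceSpec{
            ClusterIP: corev1.ClusterIPNone,
            Selector:  map[string]string{"dns-test": "true"}, // assumed label
            Ports: []corev1.ServicePort{{
                Name:     "http",
                Protocol: corev1.ProtocolTCP,
                Port:     80,
            }},
        },
    }
}
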
[sig-cli] Kubectl client [k8s.io] Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:47:03.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1721
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  6 13:47:03.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-pod --generator=run-pod/v1 --image=docker.io/library/nginx:1.14-alpine --labels=run=e2e-test-nginx-pod --namespace=kubectl-1284'
Jan  6 13:47:05.838: INFO: stderr: ""
Jan  6 13:47:05.838: INFO: stdout: "pod/e2e-test-nginx-pod created\n"
STEP: verifying the pod e2e-test-nginx-pod is running
STEP: verifying the pod e2e-test-nginx-pod was created
Jan  6 13:47:15.890: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-nginx-pod --namespace=kubectl-1284 -o json'
Jan  6 13:47:16.066: INFO: stderr: ""
Jan  6 13:47:16.066: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-06T13:47:05Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-nginx-pod\"\n        },\n        \"name\": \"e2e-test-nginx-pod\",\n        \"namespace\": \"kubectl-1284\",\n        \"resourceVersion\": \"19526425\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-1284/pods/e2e-test-nginx-pod\",\n        \"uid\": \"ccb57adc-82b0-4bc4-b806-dab2439c586e\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/nginx:1.14-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-nginx-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-lk55g\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"iruya-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-lk55g\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-lk55g\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-06T13:47:05Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-06T13:47:13Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-06T13:47:13Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-06T13:47:05Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://87137a58dcf8a707811718f92727cec445071c75742d0293c1b96bd2cece8f8f\",\n                \"image\": \"nginx:1.14-alpine\",\n                \"imageID\": \"docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-nginx-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-06T13:47:12Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.3.65\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-06T13:47:05Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan  6 13:47:16.067: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1284'
Jan  6 13:47:16.567: INFO: stderr: ""
Jan  6 13:47:16.567: INFO: stdout: "pod/e2e-test-nginx-pod replaced\n"
STEP: verifying the pod e2e-test-nginx-pod has the right image docker.io/library/busybox:1.29
[AfterEach] [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1726
Jan  6 13:47:16.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-nginx-pod --namespace=kubectl-1284'
Jan  6 13:47:23.545: INFO: stderr: ""
Jan  6 13:47:23.546: INFO: stdout: "pod \"e2e-test-nginx-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:47:23.546: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1284" for this suite.
Jan  6 13:47:29.617: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:47:29.742: INFO: namespace kubectl-1284 deletion completed in 6.157149575s

• [SLOW TEST:26.229 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
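
`kubectl replace -f -` above round-trips the pod JSON with the image swapped to busybox:1.29. The same effect through the API, sketched with recent client-go signatures (this run's v1.15-era client takes no context argument); container image is one of the few pod-spec fields that may be mutated in place:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// replacePodImage fetches the pod, swaps the single container's image,
// and writes the object back.
func replacePodImage(cs kubernetes.Interface, ns, name, image string) error {
    pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
    if err != nil {
        return err
    }
    pod.Spec.Containers[0].Image = image
    _, err = cs.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
    return err
}
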
SSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:47:29.742: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0106 13:47:40.749695       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  6 13:47:40.749: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:47:40.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7308" for this suite.
Jan  6 13:47:47.007: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:47:47.170: INFO: namespace gc-7308 deletion completed in 6.412776797s

• [SLOW TEST:17.428 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
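
"Not orphaning" corresponds to a deletion propagation policy: Background (or Foreground) lets the garbage collector delete the pods the ReplicationController owns, whereas Orphan would strip the owner references and leave them running. A sketch, again with recent client-go signatures:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// deleteRCAndDependents deletes the RC and asks the garbage collector
// to remove its pods in the background, rather than orphaning them.
func deleteRCAndDependents(cs kubernetes.Interface, ns, name string) error {
    policy := metav1.DeletePropagationBackground
    return cs.CoreV1().ReplicationControllers(ns).Delete(
        context.TODO(), name, metav1.DeleteOptions{PropagationPolicy: &policy})
}
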
S
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:47:47.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan  6 13:47:47.305: INFO: Waiting up to 5m0s for pod "pod-15d165b5-51e3-409a-b168-b5b8a95ca5bf" in namespace "emptydir-3329" to be "success or failure"
Jan  6 13:47:47.315: INFO: Pod "pod-15d165b5-51e3-409a-b168-b5b8a95ca5bf": Phase="Pending", Reason="", readiness=false. Elapsed: 10.667858ms
Jan  6 13:47:49.327: INFO: Pod "pod-15d165b5-51e3-409a-b168-b5b8a95ca5bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022638554s
Jan  6 13:47:51.339: INFO: Pod "pod-15d165b5-51e3-409a-b168-b5b8a95ca5bf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034502901s
Jan  6 13:47:53.346: INFO: Pod "pod-15d165b5-51e3-409a-b168-b5b8a95ca5bf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041378908s
Jan  6 13:47:55.352: INFO: Pod "pod-15d165b5-51e3-409a-b168-b5b8a95ca5bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047381205s
STEP: Saw pod success
Jan  6 13:47:55.352: INFO: Pod "pod-15d165b5-51e3-409a-b168-b5b8a95ca5bf" satisfied condition "success or failure"
Jan  6 13:47:55.356: INFO: Trying to get logs from node iruya-node pod pod-15d165b5-51e3-409a-b168-b5b8a95ca5bf container test-container: 
STEP: delete the pod
Jan  6 13:47:55.481: INFO: Waiting for pod pod-15d165b5-51e3-409a-b168-b5b8a95ca5bf to disappear
Jan  6 13:47:55.526: INFO: Pod pod-15d165b5-51e3-409a-b168-b5b8a95ca5bf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:47:55.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3329" for this suite.
Jan  6 13:48:01.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:48:01.712: INFO: namespace emptydir-3329 deletion completed in 6.179109644s

• [SLOW TEST:14.541 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
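
The (root,0666,tmpfs) triple in the test name maps to: write as root, file mode 0666, and an emptyDir backed by memory. Only the last needs a spec field; a sketch:

package main

import corev1 "k8s.io/api/core/v1"

// memoryBackedEmptyDir: Medium "Memory" makes the emptyDir a tmpfs
// mount; the test writes a file into it and verifies mode and content.
func memoryBackedEmptyDir() corev1.Volume {
    return corev1.Volume{
        Name: "test-volume",
        VolumeSource: corev1.VolumeSource{
            EmptyDir: &corev1.EmptyDirVolumeSource{
                Medium: corev1.StorageMediumMemory,
            },
        },
    }
}
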
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:48:01.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  6 13:48:01.811: INFO: Waiting up to 5m0s for pod "downwardapi-volume-387c73cc-a59b-4c04-beba-5cc71b1be982" in namespace "downward-api-1536" to be "success or failure"
Jan  6 13:48:01.830: INFO: Pod "downwardapi-volume-387c73cc-a59b-4c04-beba-5cc71b1be982": Phase="Pending", Reason="", readiness=false. Elapsed: 18.34357ms
Jan  6 13:48:03.842: INFO: Pod "downwardapi-volume-387c73cc-a59b-4c04-beba-5cc71b1be982": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030338363s
Jan  6 13:48:05.871: INFO: Pod "downwardapi-volume-387c73cc-a59b-4c04-beba-5cc71b1be982": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0599089s
Jan  6 13:48:07.888: INFO: Pod "downwardapi-volume-387c73cc-a59b-4c04-beba-5cc71b1be982": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076804029s
Jan  6 13:48:09.899: INFO: Pod "downwardapi-volume-387c73cc-a59b-4c04-beba-5cc71b1be982": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.087666799s
STEP: Saw pod success
Jan  6 13:48:09.899: INFO: Pod "downwardapi-volume-387c73cc-a59b-4c04-beba-5cc71b1be982" satisfied condition "success or failure"
Jan  6 13:48:09.906: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-387c73cc-a59b-4c04-beba-5cc71b1be982 container client-container: 
STEP: delete the pod
Jan  6 13:48:10.098: INFO: Waiting for pod downwardapi-volume-387c73cc-a59b-4c04-beba-5cc71b1be982 to disappear
Jan  6 13:48:10.116: INFO: Pod downwardapi-volume-387c73cc-a59b-4c04-beba-5cc71b1be982 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:48:10.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1536" for this suite.
Jan  6 13:48:16.143: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:48:16.431: INFO: namespace downward-api-1536 deletion completed in 6.308554472s

• [SLOW TEST:14.718 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
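
Same downward API volume mechanism as the cpu-request test earlier, but projecting limits.memory; because the container sets no memory limit, the kubelet substitutes the node's allocatable memory, which is the value the test verifies. Sketch of the projected file (names and divisor are illustrative):

package main

import (
    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
)

// limitsMemoryFile projects limits.memory; with no limit set on the
// container, the value resolves to node allocatable memory.
func limitsMemoryFile(containerName string) corev1.DownwardAPIVolumeFile {
    return corev1.DownwardAPIVolumeFile{
        Path: "memory_limit",
        ResourceFieldRef: &corev1.ResourceFieldSelector{
            ContainerName: containerName,
            Resource:      "limits.memory",
            Divisor:       resource.MustParse("1Mi"),
        },
    }
}
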
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:48:16.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Jan  6 13:48:24.668: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan  6 13:48:39.825: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:48:39.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3143" for this suite.
Jan  6 13:48:45.918: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:48:46.112: INFO: namespace pods-3143 deletion completed in 6.27635396s

• [SLOW TEST:29.681 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
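
The delete-grace-period test issues a graceful deletion and then watches (through the kubectl proxy started above) until the kubelet confirms termination. The deletion call, sketched with recent client-go signatures:

package main

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// gracefulDelete removes the pod with an explicit grace period; the
// kubelet has that long to stop the container before the pod object
// disappears from the API.
func gracefulDelete(cs kubernetes.Interface, ns, name string, seconds int64) error {
    return cs.CoreV1().Pods(ns).Delete(context.TODO(), name,
        metav1.DeleteOptions{GracePeriodSeconds: &seconds})
}
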
SSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:48:46.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  6 13:48:46.209: INFO: Waiting up to 5m0s for pod "downward-api-c1c5d75e-5ca3-4f38-818b-a575d26877e2" in namespace "downward-api-461" to be "success or failure"
Jan  6 13:48:46.231: INFO: Pod "downward-api-c1c5d75e-5ca3-4f38-818b-a575d26877e2": Phase="Pending", Reason="", readiness=false. Elapsed: 21.600929ms
Jan  6 13:48:48.242: INFO: Pod "downward-api-c1c5d75e-5ca3-4f38-818b-a575d26877e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032693021s
Jan  6 13:48:50.251: INFO: Pod "downward-api-c1c5d75e-5ca3-4f38-818b-a575d26877e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04136076s
Jan  6 13:48:52.266: INFO: Pod "downward-api-c1c5d75e-5ca3-4f38-818b-a575d26877e2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056779442s
Jan  6 13:48:54.280: INFO: Pod "downward-api-c1c5d75e-5ca3-4f38-818b-a575d26877e2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070834305s
Jan  6 13:48:56.294: INFO: Pod "downward-api-c1c5d75e-5ca3-4f38-818b-a575d26877e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.084050031s
STEP: Saw pod success
Jan  6 13:48:56.294: INFO: Pod "downward-api-c1c5d75e-5ca3-4f38-818b-a575d26877e2" satisfied condition "success or failure"
Jan  6 13:48:56.300: INFO: Trying to get logs from node iruya-node pod downward-api-c1c5d75e-5ca3-4f38-818b-a575d26877e2 container dapi-container: 
STEP: delete the pod
Jan  6 13:48:56.371: INFO: Waiting for pod downward-api-c1c5d75e-5ca3-4f38-818b-a575d26877e2 to disappear
Jan  6 13:48:56.381: INFO: Pod downward-api-c1c5d75e-5ca3-4f38-818b-a575d26877e2 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:48:56.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-461" for this suite.
Jan  6 13:49:02.477: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:49:02.621: INFO: namespace downward-api-461 deletion completed in 6.231165168s

• [SLOW TEST:16.508 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
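
Editor's note: the Downward API case above creates a pod whose container resource requests and limits are exposed to the process as environment variables via resourceFieldRef, then checks the container's output. A minimal sketch of such a pod spec follows, built from the same k8s.io API types the log's struct dumps are printed from. Only the container name "dapi-container" comes from the log; the image, command, env var names, and resource quantities are illustrative assumptions.

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // downwardAPIEnvPod returns a pod whose container sees its own CPU limit
    // and memory request as environment variables.
    func downwardAPIEnvPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downward-api-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "dapi-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "env"},
                    Resources: corev1.ResourceRequirements{
                        Requests: corev1.ResourceList{
                            corev1.ResourceCPU:    resource.MustParse("250m"),
                            corev1.ResourceMemory: resource.MustParse("32Mi"),
                        },
                        Limits: corev1.ResourceList{
                            corev1.ResourceCPU:    resource.MustParse("1"),
                            corev1.ResourceMemory: resource.MustParse("64Mi"),
                        },
                    },
                    Env: []corev1.EnvVar{
                        {
                            // Resolved by the kubelet from this container's
                            // own resources; ContainerName defaults to the
                            // enclosing container.
                            Name: "CPU_LIMIT",
                            ValueFrom: &corev1.EnvVarSource{
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    Resource: "limits.cpu",
                                },
                            },
                        },
                        {
                            Name: "MEMORY_REQUEST",
                            ValueFrom: &corev1.EnvVarSource{
                                ResourceFieldRef: &corev1.ResourceFieldSelector{
                                    Resource: "requests.memory",
                                },
                            },
                        },
                    },
                }},
            },
        }
    }
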
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:49:02.621: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  6 13:49:02.746: INFO: Waiting up to 5m0s for pod "downwardapi-volume-823c9dae-e2fe-46e4-8b91-d6ce433efad6" in namespace "projected-2586" to be "success or failure"
Jan  6 13:49:02.757: INFO: Pod "downwardapi-volume-823c9dae-e2fe-46e4-8b91-d6ce433efad6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.719507ms
Jan  6 13:49:04.765: INFO: Pod "downwardapi-volume-823c9dae-e2fe-46e4-8b91-d6ce433efad6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018703278s
Jan  6 13:49:06.770: INFO: Pod "downwardapi-volume-823c9dae-e2fe-46e4-8b91-d6ce433efad6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023304402s
Jan  6 13:49:08.777: INFO: Pod "downwardapi-volume-823c9dae-e2fe-46e4-8b91-d6ce433efad6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030395266s
Jan  6 13:49:10.785: INFO: Pod "downwardapi-volume-823c9dae-e2fe-46e4-8b91-d6ce433efad6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.038835376s
STEP: Saw pod success
Jan  6 13:49:10.785: INFO: Pod "downwardapi-volume-823c9dae-e2fe-46e4-8b91-d6ce433efad6" satisfied condition "success or failure"
Jan  6 13:49:10.790: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-823c9dae-e2fe-46e4-8b91-d6ce433efad6 container client-container: 
STEP: delete the pod
Jan  6 13:49:10.860: INFO: Waiting for pod downwardapi-volume-823c9dae-e2fe-46e4-8b91-d6ce433efad6 to disappear
Jan  6 13:49:10.871: INFO: Pod downwardapi-volume-823c9dae-e2fe-46e4-8b91-d6ce433efad6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:49:10.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2586" for this suite.
Jan  6 13:49:16.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:49:17.028: INFO: namespace projected-2586 deletion completed in 6.152143286s

• [SLOW TEST:14.407 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
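
Editor's note: the projected downwardAPI case above is the volume-based variant of the same mechanism, with one twist the test name calls out: the container sets no CPU limit, so the "limits.cpu" file falls back to the node's allocatable CPU. A sketch of such a pod follows; only the container name "client-container" comes from the log, while the volume name, mount path, image, and command are illustrative assumptions.

    package main

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // projectedDownwardAPIPod returns a pod that mounts its own CPU limit as
    // a file through a projected downward API volume.
    func projectedDownwardAPIPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "busybox",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                    // Deliberately no resources.limits.cpu: the downward API
                    // then reports node allocatable CPU, which is what the
                    // test asserts on.
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "podinfo",
                        MountPath: "/etc/podinfo",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path: "cpu_limit",
                                        ResourceFieldRef: &corev1.ResourceFieldSelector{
                                            ContainerName: "client-container",
                                            Resource:      "limits.cpu",
                                        },
                                    }},
                                },
                            }},
                        },
                    },
                }},
            },
        }
    }
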
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:49:17.029: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  6 13:49:17.124: INFO: Creating deployment "nginx-deployment"
Jan  6 13:49:17.136: INFO: Waiting for observed generation 1
Jan  6 13:49:20.168: INFO: Waiting for all required pods to come up
Jan  6 13:49:20.188: INFO: Pod name nginx: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan  6 13:49:42.420: INFO: Waiting for deployment "nginx-deployment" to complete
Jan  6 13:49:42.431: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:10, UpdatedReplicas:10, ReadyReplicas:9, AvailableReplicas:9, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713915382, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713915382, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713915382, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713915357, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"nginx-deployment-7b8c6f4498\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  6 13:49:44.448: INFO: Updating deployment "nginx-deployment" with a non-existent image
Jan  6 13:49:44.463: INFO: Updating deployment nginx-deployment
Jan  6 13:49:44.463: INFO: Waiting for observed generation 2
Jan  6 13:49:48.536: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan  6 13:49:49.139: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan  6 13:49:50.051: INFO: Waiting for the first rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  6 13:49:50.068: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan  6 13:49:50.068: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan  6 13:49:50.072: INFO: Waiting for the second rollout's replicaset of deployment "nginx-deployment" to have desired number of replicas
Jan  6 13:49:50.078: INFO: Verifying that deployment "nginx-deployment" has minimum required number of available replicas
Jan  6 13:49:50.078: INFO: Scaling up the deployment "nginx-deployment" from 10 to 30
Jan  6 13:49:50.094: INFO: Updating deployment nginx-deployment
Jan  6 13:49:50.094: INFO: Waiting for the replicasets of deployment "nginx-deployment" to have desired number of replicas
Jan  6 13:49:50.627: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan  6 13:49:50.743: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
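
Editor's note: the numbers in the two "Verifying" lines above follow from the rollout parameters visible later in the deployment dump (MaxUnavailable:2, MaxSurge:3). During the stuck rollout, 10 - 2 = 8 old pods must stay available and 10 + 3 = 13 pods may exist in total, leaving 5 for the new replicaset. When the deployment is then scaled from 10 to 30, the ceiling becomes 30 + 3 = 33, and the 33 slots are split across the two replicasets in proportion to their current sizes (8 and 5). The back-of-the-envelope sketch below reproduces that arithmetic; it is a simplification, not the deployment controller's actual code, which distributes the delta with rounding and leftover handling, but the proportions land on the same 20 and 13 seen here.

    package main

    import "fmt"

    func main() {
        deploymentReplicas := int64(30) // target after scaling up
        maxSurge := int64(3)            // extra pods allowed during rollout
        oldRS, newRS := int64(8), int64(5)

        total := oldRS + newRS                   // 13 pods exist across both replicasets
        allowed := deploymentReplicas + maxSurge // 33 pods may exist mid-rollout

        // Old replicaset's proportional share, rounded down by integer
        // division; the remainder goes to the replicaset scaled last.
        oldShare := oldRS * allowed / total // 8*33/13 = 20
        newShare := allowed - oldShare      // the remaining 13

        fmt.Printf("old replicaset: %d -> %d\n", oldRS, oldShare) // 8 -> 20
        fmt.Printf("new replicaset: %d -> %d\n", newRS, newShare) // 5 -> 13
    }
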
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  6 13:49:54.431: INFO: Deployment "nginx-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment,GenerateName:,Namespace:deployment-8220,SelfLink:/apis/apps/v1/namespaces/deployment-8220/deployments/nginx-deployment,UID:568830f4-713b-479c-8cc6-87ab9c842b28,ResourceVersion:19526992,Generation:3,CreationTimestamp:2020-01-06 13:49:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{deployment.kubernetes.io/revision: 2,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*30,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[{Progressing True 2020-01-06 13:49:49 +0000 UTC 2020-01-06 13:49:17 +0000 UTC ReplicaSetUpdated ReplicaSet "nginx-deployment-55fb7cb77f" is progressing.} {Available False 2020-01-06 13:49:50 +0000 UTC 2020-01-06 13:49:50 +0000 UTC MinimumReplicasUnavailable Deployment does not have minimum availability.}],ReadyReplicas:8,CollisionCount:nil,},}

Jan  6 13:49:54.455: INFO: New ReplicaSet "nginx-deployment-55fb7cb77f" of Deployment "nginx-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f,GenerateName:,Namespace:deployment-8220,SelfLink:/apis/apps/v1/namespaces/deployment-8220/replicasets/nginx-deployment-55fb7cb77f,UID:e172bc71-5c76-49c7-bc4b-9b48c7834b3f,ResourceVersion:19527033,Generation:3,CreationTimestamp:2020-01-06 13:49:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 2,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 568830f4-713b-479c-8cc6-87ab9c842b28 0xc0006d8c27 0xc0006d8c28}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*13,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:5,FullyLabeledReplicas:5,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  6 13:49:54.455: INFO: All old ReplicaSets of Deployment "nginx-deployment":
Jan  6 13:49:54.455: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498,GenerateName:,Namespace:deployment-8220,SelfLink:/apis/apps/v1/namespaces/deployment-8220/replicasets/nginx-deployment-7b8c6f4498,UID:fdd9ab75-7ead-4734-968a-1328e475e89d,ResourceVersion:19527032,Generation:3,CreationTimestamp:2020-01-06 13:49:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 30,deployment.kubernetes.io/max-replicas: 33,deployment.kubernetes.io/revision: 1,},OwnerReferences:[{apps/v1 Deployment nginx-deployment 568830f4-713b-479c-8cc6-87ab9c842b28 0xc0006d8d87 0xc0006d8d88}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*20,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[],},}
Jan  6 13:49:55.693: INFO: Pod "nginx-deployment-55fb7cb77f-5k5pd" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-5k5pd,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-55fb7cb77f-5k5pd,UID:a9f9a052-f67b-4ad3-971c-f22e01cd07e9,ResourceVersion:19526950,Generation:0,CreationTimestamp:2020-01-06 13:49:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e172bc71-5c76-49c7-bc4b-9b48c7834b3f 0xc000b25f37 0xc000b25f38}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc000b25fc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc000b25fe0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-06 13:49:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.694: INFO: Pod "nginx-deployment-55fb7cb77f-b7qrt" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-b7qrt,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-55fb7cb77f-b7qrt,UID:268fd2c4-6231-49a9-8670-b67538611606,ResourceVersion:19526978,Generation:0,CreationTimestamp:2020-01-06 13:49:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e172bc71-5c76-49c7-bc4b-9b48c7834b3f 0xc0029720b7 0xc0029720b8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002972130} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002972150}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-06 13:49:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.694: INFO: Pod "nginx-deployment-55fb7cb77f-hdjrp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hdjrp,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-55fb7cb77f-hdjrp,UID:7f983d4a-e8d9-43ed-bd0f-9594017e4332,ResourceVersion:19527036,Generation:0,CreationTimestamp:2020-01-06 13:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e172bc71-5c76-49c7-bc4b-9b48c7834b3f 0xc002972227 0xc002972228}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002972290} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029722b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.695: INFO: Pod "nginx-deployment-55fb7cb77f-hh7sk" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hh7sk,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-55fb7cb77f-hh7sk,UID:82379906-2a6b-45d0-9e1d-f36c957b0562,ResourceVersion:19526947,Generation:0,CreationTimestamp:2020-01-06 13:49:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e172bc71-5c76-49c7-bc4b-9b48c7834b3f 0xc002972337 0xc002972338}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029723a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029723c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-06 13:49:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.695: INFO: Pod "nginx-deployment-55fb7cb77f-hj896" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hj896,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-55fb7cb77f-hj896,UID:a6d3fe8b-1bfc-440f-861e-eb35676304b3,ResourceVersion:19526960,Generation:0,CreationTimestamp:2020-01-06 13:49:44 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e172bc71-5c76-49c7-bc4b-9b48c7834b3f 0xc002972497 0xc002972498}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002972510} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002972530}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:44 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:44 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:44 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-06 13:49:44 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.696: INFO: Pod "nginx-deployment-55fb7cb77f-hq9pn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-hq9pn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-55fb7cb77f-hq9pn,UID:8afe3c67-a89b-4e6a-ad83-89961c9d2e3f,ResourceVersion:19527023,Generation:0,CreationTimestamp:2020-01-06 13:49:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e172bc71-5c76-49c7-bc4b-9b48c7834b3f 0xc002972607 0xc002972608}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002972680} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029726a0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.697: INFO: Pod "nginx-deployment-55fb7cb77f-m7gjj" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-m7gjj,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-55fb7cb77f-m7gjj,UID:596e1ab5-6f7b-48fd-b583-4cec2702cbf4,ResourceVersion:19527025,Generation:0,CreationTimestamp:2020-01-06 13:49:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e172bc71-5c76-49c7-bc4b-9b48c7834b3f 0xc002972727 0xc002972728}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029727a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029727c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.697: INFO: Pod "nginx-deployment-55fb7cb77f-r9jx9" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-r9jx9,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-55fb7cb77f-r9jx9,UID:b0aaa2d9-d397-4b02-9e3b-7f159e5a07f5,ResourceVersion:19527040,Generation:0,CreationTimestamp:2020-01-06 13:49:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e172bc71-5c76-49c7-bc4b-9b48c7834b3f 0xc002972847 0xc002972848}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029728b0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029728d0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-06 13:49:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.697: INFO: Pod "nginx-deployment-55fb7cb77f-svqkw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-svqkw,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-55fb7cb77f-svqkw,UID:b41ed052-cf3d-48c4-a10f-42188fa6fa2d,ResourceVersion:19527024,Generation:0,CreationTimestamp:2020-01-06 13:49:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e172bc71-5c76-49c7-bc4b-9b48c7834b3f 0xc0029729a7 0xc0029729a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002972a10} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002972a30}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.698: INFO: Pod "nginx-deployment-55fb7cb77f-t66pl" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-t66pl,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-55fb7cb77f-t66pl,UID:be6b31dd-afec-4300-8c75-c6040d690be7,ResourceVersion:19527014,Generation:0,CreationTimestamp:2020-01-06 13:49:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e172bc71-5c76-49c7-bc4b-9b48c7834b3f 0xc002972ab7 0xc002972ab8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002972b30} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002972b50}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.698: INFO: Pod "nginx-deployment-55fb7cb77f-tt9kn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-tt9kn,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-55fb7cb77f-tt9kn,UID:c6e61a0f-fc00-42e9-96ca-d851ebe91a82,ResourceVersion:19527007,Generation:0,CreationTimestamp:2020-01-06 13:49:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e172bc71-5c76-49c7-bc4b-9b48c7834b3f 0xc002972bd7 0xc002972bd8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002972c40} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002972c60}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.698: INFO: Pod "nginx-deployment-55fb7cb77f-vs2b7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-vs2b7,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-55fb7cb77f-vs2b7,UID:cd4d0295-4689-497c-82c7-0f96f4f9a5ac,ResourceVersion:19526979,Generation:0,CreationTimestamp:2020-01-06 13:49:45 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e172bc71-5c76-49c7-bc4b-9b48c7834b3f 0xc002972ce7 0xc002972ce8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002972d60} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002972d80}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:47 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:47 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:45 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-06 13:49:47 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.699: INFO: Pod "nginx-deployment-55fb7cb77f-xqxnr" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-55fb7cb77f-xqxnr,GenerateName:nginx-deployment-55fb7cb77f-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-55fb7cb77f-xqxnr,UID:bb3ee261-7430-414b-8501-858d8f6475ad,ResourceVersion:19527047,Generation:0,CreationTimestamp:2020-01-06 13:49:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 55fb7cb77f,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-55fb7cb77f e172bc71-5c76-49c7-bc4b-9b48c7834b3f 0xc002972e57 0xc002972e58}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx nginx:404 [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002972ed0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002972ef0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-06 13:49:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 nginx:404  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.699: INFO: Pod "nginx-deployment-7b8c6f4498-22rqc" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-22rqc,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-22rqc,UID:eee92030-834c-454e-8620-ae9c64211a7f,ResourceVersion:19527034,Generation:0,CreationTimestamp:2020-01-06 13:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc002972fc7 0xc002972fc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002973040} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002973060}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.700: INFO: Pod "nginx-deployment-7b8c6f4498-24s84" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-24s84,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-24s84,UID:23284bb6-4942-4bb3-b5e6-76c907fc85c3,ResourceVersion:19527035,Generation:0,CreationTimestamp:2020-01-06 13:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc0029730e7 0xc0029730e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002973150} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002973170}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.700: INFO: Pod "nginx-deployment-7b8c6f4498-4jbp5" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-4jbp5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-4jbp5,UID:4c638ebd-15dd-4787-866e-221d277c4306,ResourceVersion:19527026,Generation:0,CreationTimestamp:2020-01-06 13:49:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc0029731f7 0xc0029731f8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002973270} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002973290}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.700: INFO: Pod "nginx-deployment-7b8c6f4498-78g2m" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-78g2m,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-78g2m,UID:250838fa-f58d-424e-8f46-c482a4bec7c9,ResourceVersion:19527037,Generation:0,CreationTimestamp:2020-01-06 13:49:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc002973317 0xc002973318}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002973390} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029733b0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:,StartTime:2020-01-06 13:49:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.701: INFO: Pod "nginx-deployment-7b8c6f4498-8sn8q" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-8sn8q,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-8sn8q,UID:02b0f31b-cb67-44ce-a8bc-f65bcf9250cc,ResourceVersion:19527015,Generation:0,CreationTimestamp:2020-01-06 13:49:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc002973487 0xc002973488}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002973500} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002973520}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.701: INFO: Pod "nginx-deployment-7b8c6f4498-9dbsn" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9dbsn,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-9dbsn,UID:1138341e-dcf3-4723-a06a-c1e8a493c0a5,ResourceVersion:19527022,Generation:0,CreationTimestamp:2020-01-06 13:49:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc0029735a7 0xc0029735a8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002973620} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002973640}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.702: INFO: Pod "nginx-deployment-7b8c6f4498-9pq7c" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-9pq7c,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-9pq7c,UID:6fc463ab-0edd-433b-87d2-316bd239f610,ResourceVersion:19526917,Generation:0,CreationTimestamp:2020-01-06 13:49:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc0029736c7 0xc0029736c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002973740} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002973760}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:42 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.4,StartTime:2020-01-06 13:49:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-06 13:49:41 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://38f0391b9458a2f451a0e2ecf0c31b9d754c33152fd9698c49ed4f3cd067c62e}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.702: INFO: Pod "nginx-deployment-7b8c6f4498-cb48w" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-cb48w,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-cb48w,UID:0d45d79f-1339-4198-a0dd-f2bfbd469e97,ResourceVersion:19526885,Generation:0,CreationTimestamp:2020-01-06 13:49:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc002973837 0xc002973838}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0029738a0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0029738c0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.4,StartTime:2020-01-06 13:49:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-06 13:49:37 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://934f6b6027005790b652c41169d7f89cf2ac2d61435e39b4b1b0de1b2fb8ec86}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.702: INFO: Pod "nginx-deployment-7b8c6f4498-gqzcp" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-gqzcp,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-gqzcp,UID:006f5177-c732-4d5b-8075-ba9252d358ad,ResourceVersion:19527048,Generation:0,CreationTimestamp:2020-01-06 13:49:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc002973997 0xc002973998}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002973a00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002973a20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC ContainersNotReady containers with unready status: [nginx]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:50 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:,StartTime:2020-01-06 13:49:51 +0000 UTC,ContainerStatuses:[{nginx {ContainerStateWaiting{Reason:ContainerCreating,Message:,} nil nil} {nil nil nil} false 0 docker.io/library/nginx:1.14-alpine  }],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.703: INFO: Pod "nginx-deployment-7b8c6f4498-l59p5" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-l59p5,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-l59p5,UID:85044f06-84f1-48c5-a25f-7139a5400b5e,ResourceVersion:19526888,Generation:0,CreationTimestamp:2020-01-06 13:49:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc002973ae7 0xc002973ae8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002973b50} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002973b70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.5,StartTime:2020-01-06 13:49:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-06 13:49:37 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://43957f4d50217f8dde1241c77e2562bb881851f529c132417922334a2d6c50c0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.703: INFO: Pod "nginx-deployment-7b8c6f4498-mnpg7" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-mnpg7,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-mnpg7,UID:7a465c3e-30e1-41be-a4fb-884f03a53352,ResourceVersion:19527031,Generation:0,CreationTimestamp:2020-01-06 13:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc002973c47 0xc002973c48}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002973cc0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002973ce0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.703: INFO: Pod "nginx-deployment-7b8c6f4498-pf7lr" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-pf7lr,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-pf7lr,UID:f1cf15a5-569c-4fab-80bc-716055cbe34a,ResourceVersion:19526914,Generation:0,CreationTimestamp:2020-01-06 13:49:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc002973d67 0xc002973d68}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002973e00} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002973e20}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:42 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.3,StartTime:2020-01-06 13:49:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-06 13:49:41 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://863f779e0e9f6ef689ed3bf4788f28fc341ee043c4310f67f7a915fe7288ed18}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.704: INFO: Pod "nginx-deployment-7b8c6f4498-qvslg" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-qvslg,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-qvslg,UID:79fcc9e0-6c25-4101-bf04-fc7f1f9b848a,ResourceVersion:19526868,Generation:0,CreationTimestamp:2020-01-06 13:49:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc002973ef7 0xc002973ef8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc002973f70} {node.kubernetes.io/unreachable Exists  NoExecute 0xc002973f90}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:36 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:36 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-06 13:49:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-06 13:49:34 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://ba82361fc190e4c57643c5e6ff3977b6fe64a8b0f9e61ab9bd43073f4405fcf0}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.704: INFO: Pod "nginx-deployment-7b8c6f4498-r5xz8" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-r5xz8,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-r5xz8,UID:7bd9b8ab-bf50-4abd-b37c-506cbb99ba72,ResourceVersion:19526890,Generation:0,CreationTimestamp:2020-01-06 13:49:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc0021da067 0xc0021da068}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021da0e0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021da100}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:39 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:39 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.7,StartTime:2020-01-06 13:49:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-06 13:49:37 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://020a2f01f8ca256e40f385d142456f77818d6d016b3e2be4fb2c73570932ccc7}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.704: INFO: Pod "nginx-deployment-7b8c6f4498-rxfgw" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-rxfgw,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-rxfgw,UID:8435df30-40f4-4981-843b-e6107e3e3520,ResourceVersion:19527027,Generation:0,CreationTimestamp:2020-01-06 13:49:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc0021da1d7 0xc0021da1d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021da240} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021da260}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.704: INFO: Pod "nginx-deployment-7b8c6f4498-s8dpt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-s8dpt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-s8dpt,UID:968ecc04-5078-4a55-9b66-f6f2bb653634,ResourceVersion:19526905,Generation:0,CreationTimestamp:2020-01-06 13:49:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc0021da2e7 0xc0021da2e8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021da360} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021da380}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:42 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:42 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.5,StartTime:2020-01-06 13:49:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-06 13:49:41 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://d6819e138fc74fe5d3070ea265a8a1c6417b7fbebd070b0b5358d8d9eeb2eaa3}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.705: INFO: Pod "nginx-deployment-7b8c6f4498-wb7gx" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wb7gx,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-wb7gx,UID:baa2cd09-b00a-4004-b9af-20c36b441e05,ResourceVersion:19527030,Generation:0,CreationTimestamp:2020-01-06 13:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc0021da457 0xc0021da458}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021da4c0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021da4e0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.705: INFO: Pod "nginx-deployment-7b8c6f4498-wg5nt" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-wg5nt,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-wg5nt,UID:d1693d69-58a9-4076-bd11-867dda095bb6,ResourceVersion:19526882,Generation:0,CreationTimestamp:2020-01-06 13:49:17 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc0021da567 0xc0021da568}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021da5d0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021da5f0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:17 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:38 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:38 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:17 +0000 UTC  }],Message:,Reason:,HostIP:10.96.2.216,PodIP:10.32.0.6,StartTime:2020-01-06 13:49:17 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-06 13:49:37 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://6d47fc7de26041977811d4502f48f0f22a3792b3c127b16afede54c8812b39da}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.705: INFO: Pod "nginx-deployment-7b8c6f4498-xln29" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xln29,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-xln29,UID:bd157f0c-c1ed-4e68-818d-af3809c7a5f6,ResourceVersion:19527038,Generation:0,CreationTimestamp:2020-01-06 13:49:51 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc0021da6c7 0xc0021da6c8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-server-sfge57q7djm7,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021da730} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021da750}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:51 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
Jan  6 13:49:55.706: INFO: Pod "nginx-deployment-7b8c6f4498-xtv9s" is not available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:nginx-deployment-7b8c6f4498-xtv9s,GenerateName:nginx-deployment-7b8c6f4498-,Namespace:deployment-8220,SelfLink:/api/v1/namespaces/deployment-8220/pods/nginx-deployment-7b8c6f4498-xtv9s,UID:e1cec7e0-4e18-48fb-af38-1df9e7c63bb8,ResourceVersion:19527002,Generation:0,CreationTimestamp:2020-01-06 13:49:50 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: nginx,pod-template-hash: 7b8c6f4498,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet nginx-deployment-7b8c6f4498 fdd9ab75-7ead-4734-968a-1328e475e89d 0xc0021da7d7 0xc0021da7d8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-tvqwc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-tvqwc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-tvqwc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0021da850} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0021da870}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Pending,Conditions:[{PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 13:49:50 +0000 UTC  }],Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
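
For context on the "is available" / "is not available" verdicts in the listing above: a pod counts as available once it is Running and its Ready condition has been True for at least the deployment's minReadySeconds, which is why the Pending pods carrying only a PodScheduled condition are logged as not available. A minimal, self-contained Go sketch of that rule follows; the types are hand-rolled stand-ins so it runs without the Kubernetes libraries, and it is illustrative only, not the e2e framework's actual helper.

package main

import (
	"fmt"
	"time"
)

// Minimal stand-ins for the corev1 Pod status fields the check needs.
type PodCondition struct {
	Type               string
	Status             string
	LastTransitionTime time.Time
}

type PodStatus struct {
	Phase      string
	Conditions []PodCondition
}

// isPodAvailable is a simplified sketch of the usual availability rule:
// the pod is Running, its Ready condition is True, and it has been Ready
// for at least minReadySeconds.
func isPodAvailable(s PodStatus, minReadySeconds int, now time.Time) bool {
	if s.Phase != "Running" {
		return false
	}
	for _, c := range s.Conditions {
		if c.Type == "Ready" && c.Status == "True" {
			return now.Sub(c.LastTransitionTime) >= time.Duration(minReadySeconds)*time.Second
		}
	}
	return false
}

func main() {
	now := time.Now()
	// Shaped like nginx-deployment-7b8c6f4498-xtv9s above: Pending, only PodScheduled set.
	pending := PodStatus{Phase: "Pending"}
	// Shaped like nginx-deployment-7b8c6f4498-9pq7c above: Running and Ready.
	running := PodStatus{Phase: "Running", Conditions: []PodCondition{
		{Type: "Ready", Status: "True", LastTransitionTime: now.Add(-30 * time.Second)},
	}}
	fmt.Println(isPodAvailable(pending, 0, now)) // false -> logged as "is not available"
	fmt.Println(isPodAvailable(running, 0, now)) // true  -> logged as "is available"
}
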
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:49:55.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8220" for this suite.
Jan  6 13:50:45.221: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:50:45.375: INFO: namespace deployment-8220 deletion completed in 48.245651029s

• [SLOW TEST:88.346 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
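
The test block above exercises proportional scaling: when a deployment is resized in the middle of a rollout, the deployment controller splits the size change across the old and new ReplicaSets in proportion to their current sizes, which is why pods from both the 55fb7cb77f and 7b8c6f4498 ReplicaSets appear together in the listing. A simplified Go sketch of that arithmetic follows, using hypothetical replica counts and omitting the controller's maxSurge/maxUnavailable handling and annotation bookkeeping.

package main

import (
	"fmt"
	"sort"
)

// proportionalScale distributes a new replica total across ReplicaSets in
// proportion to their current sizes, handing rounding leftovers to the
// larger ReplicaSets first. Simplified illustration only; assumes the
// current total is greater than zero.
func proportionalScale(current []int, newTotal int) []int {
	oldTotal := 0
	for _, n := range current {
		oldTotal += n
	}
	out := make([]int, len(current))
	assigned := 0
	for i, n := range current {
		out[i] = n * newTotal / oldTotal // floor of the proportional share
		assigned += out[i]
	}
	// Give each rounding leftover to the next-largest ReplicaSet.
	idx := make([]int, len(current))
	for i := range idx {
		idx[i] = i
	}
	sort.Slice(idx, func(a, b int) bool { return current[idx[a]] > current[idx[b]] })
	for k := 0; assigned < newTotal; k++ {
		out[idx[k%len(idx)]]++
		assigned++
	}
	return out
}

func main() {
	// Two ReplicaSets mid-rollout (hypothetically 10 and 3 pods) scaled
	// from 13 to 26 in total: each side doubles.
	fmt.Println(proportionalScale([]int{10, 3}, 26)) // [20 6]
	// Scaling 8 -> 13 leaves one rounding leftover, which goes to the larger set.
	fmt.Println(proportionalScale([]int{5, 3}, 13)) // [9 4]
}
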
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:50:45.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting the proxy server
Jan  6 13:50:45.839: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:50:45.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-763" for this suite.
Jan  6 13:50:52.032: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:50:52.162: INFO: namespace kubectl-763 deletion completed in 6.155756519s

• [SLOW TEST:6.785 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
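For context, --port=0 (spelled -p 0 above) tells kubectl proxy to bind an ephemeral port; the test parses the chosen port from the proxy's output before curling /api/. By hand, with a placeholder port number:

kubectl proxy --port=0 &
# kubectl prints the bound address, e.g. "Starting to serve on 127.0.0.1:38383"
curl http://127.0.0.1:38383/api/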
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:50:52.163: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan  6 13:50:52.299: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  6 13:50:52.319: INFO: Waiting for terminating namespaces to be deleted...
Jan  6 13:50:52.321: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan  6 13:50:52.337: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan  6 13:50:52.337: INFO: 	Container weave ready: true, restart count 0
Jan  6 13:50:52.337: INFO: 	Container weave-npc ready: true, restart count 0
Jan  6 13:50:52.337: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan  6 13:50:52.337: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  6 13:50:52.337: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan  6 13:50:52.347: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan  6 13:50:52.347: INFO: 	Container etcd ready: true, restart count 0
Jan  6 13:50:52.347: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan  6 13:50:52.347: INFO: 	Container weave ready: true, restart count 0
Jan  6 13:50:52.347: INFO: 	Container weave-npc ready: true, restart count 0
Jan  6 13:50:52.347: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  6 13:50:52.347: INFO: 	Container coredns ready: true, restart count 0
Jan  6 13:50:52.347: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan  6 13:50:52.347: INFO: 	Container kube-controller-manager ready: true, restart count 18
Jan  6 13:50:52.347: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan  6 13:50:52.347: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  6 13:50:52.347: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan  6 13:50:52.347: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan  6 13:50:52.347: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan  6 13:50:52.347: INFO: 	Container kube-scheduler ready: true, restart count 12
Jan  6 13:50:52.347: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  6 13:50:52.347: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: verifying the node has the label node iruya-node
STEP: verifying the node has the label node iruya-server-sfge57q7djm7
Jan  6 13:50:52.463: INFO: Pod coredns-5c98db65d4-bm4gs requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  6 13:50:52.463: INFO: Pod coredns-5c98db65d4-xx8w8 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  6 13:50:52.463: INFO: Pod etcd-iruya-server-sfge57q7djm7 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan  6 13:50:52.463: INFO: Pod kube-apiserver-iruya-server-sfge57q7djm7 requesting resource cpu=250m on Node iruya-server-sfge57q7djm7
Jan  6 13:50:52.463: INFO: Pod kube-controller-manager-iruya-server-sfge57q7djm7 requesting resource cpu=200m on Node iruya-server-sfge57q7djm7
Jan  6 13:50:52.463: INFO: Pod kube-proxy-58v95 requesting resource cpu=0m on Node iruya-server-sfge57q7djm7
Jan  6 13:50:52.463: INFO: Pod kube-proxy-976zl requesting resource cpu=0m on Node iruya-node
Jan  6 13:50:52.463: INFO: Pod kube-scheduler-iruya-server-sfge57q7djm7 requesting resource cpu=100m on Node iruya-server-sfge57q7djm7
Jan  6 13:50:52.463: INFO: Pod weave-net-bzl4d requesting resource cpu=20m on Node iruya-server-sfge57q7djm7
Jan  6 13:50:52.463: INFO: Pod weave-net-rlp57 requesting resource cpu=20m on Node iruya-node
STEP: Starting Pods to consume most of the cluster CPU.
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1b5fa259-d112-443b-ab84-6d7c7a4be01f.15e75073c0aee6e0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4686/filler-pod-1b5fa259-d112-443b-ab84-6d7c7a4be01f to iruya-server-sfge57q7djm7]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1b5fa259-d112-443b-ab84-6d7c7a4be01f.15e75074e338599d], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1b5fa259-d112-443b-ab84-6d7c7a4be01f.15e75075e9c206ce], Reason = [Created], Message = [Created container filler-pod-1b5fa259-d112-443b-ab84-6d7c7a4be01f]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-1b5fa259-d112-443b-ab84-6d7c7a4be01f.15e750760c45196d], Reason = [Started], Message = [Started container filler-pod-1b5fa259-d112-443b-ab84-6d7c7a4be01f]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-554a82af-6db9-4b4b-85b2-b0653fca5f55.15e75073bff86687], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4686/filler-pod-554a82af-6db9-4b4b-85b2-b0653fca5f55 to iruya-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-554a82af-6db9-4b4b-85b2-b0653fca5f55.15e75074fd3a4000], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-554a82af-6db9-4b4b-85b2-b0653fca5f55.15e75075db5d3b7b], Reason = [Created], Message = [Created container filler-pod-554a82af-6db9-4b4b-85b2-b0653fca5f55]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-554a82af-6db9-4b4b-85b2-b0653fca5f55.15e7507600d7d100], Reason = [Started], Message = [Started container filler-pod-554a82af-6db9-4b4b-85b2-b0653fca5f55]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e750768f5c909b], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node iruya-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node iruya-server-sfge57q7djm7
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:51:05.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-4686" for this suite.
Jan  6 13:51:13.694: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:51:13.875: INFO: namespace sched-pred-4686 deletion completed in 8.214832959s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:21.713 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
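For context, this predicate is plain CPU arithmetic: the filler pods reserve most of each node's allocatable CPU, so one more pod whose request fits on no node produces the FailedScheduling event above. A sketch of such a pod; the 10-CPU request is illustrative, chosen only to exceed whatever capacity remains:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: additional-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
    resources:
      requests:
        cpu: "10"
EOF
kubectl describe pod additional-pod   # Events: 0/2 nodes are available: 2 Insufficient cpu.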
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:51:13.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  6 13:51:15.401: INFO: Waiting up to 5m0s for pod "pod-5656728c-bb2f-4577-8916-d83f0f418282" in namespace "emptydir-4554" to be "success or failure"
Jan  6 13:51:15.430: INFO: Pod "pod-5656728c-bb2f-4577-8916-d83f0f418282": Phase="Pending", Reason="", readiness=false. Elapsed: 28.83889ms
Jan  6 13:51:17.485: INFO: Pod "pod-5656728c-bb2f-4577-8916-d83f0f418282": Phase="Pending", Reason="", readiness=false. Elapsed: 2.084317162s
Jan  6 13:51:19.555: INFO: Pod "pod-5656728c-bb2f-4577-8916-d83f0f418282": Phase="Pending", Reason="", readiness=false. Elapsed: 4.154169279s
Jan  6 13:51:21.573: INFO: Pod "pod-5656728c-bb2f-4577-8916-d83f0f418282": Phase="Pending", Reason="", readiness=false. Elapsed: 6.172049529s
Jan  6 13:51:23.581: INFO: Pod "pod-5656728c-bb2f-4577-8916-d83f0f418282": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.180567851s
STEP: Saw pod success
Jan  6 13:51:23.581: INFO: Pod "pod-5656728c-bb2f-4577-8916-d83f0f418282" satisfied condition "success or failure"
Jan  6 13:51:23.587: INFO: Trying to get logs from node iruya-node pod pod-5656728c-bb2f-4577-8916-d83f0f418282 container test-container: 
STEP: delete the pod
Jan  6 13:51:23.701: INFO: Waiting for pod pod-5656728c-bb2f-4577-8916-d83f0f418282 to disappear
Jan  6 13:51:23.709: INFO: Pod pod-5656728c-bb2f-4577-8916-d83f0f418282 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:51:23.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4554" for this suite.
Jan  6 13:51:29.742: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:51:29.893: INFO: namespace emptydir-4554 deletion completed in 6.176110886s

• [SLOW TEST:16.016 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
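For context, the (non-root,0644,default) triple means: files written into an emptyDir on the default (node-disk) medium by a non-root user should come out mode 0644. A rough equivalent with a stock busybox image (the suite itself uses a dedicated mounttest image):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  securityContext:
    runAsUser: 1001          # non-root
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "touch /data/f && stat -c '%a' /data/f"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}             # default medium: backed by node disk, not tmpfs
EOF
kubectl logs emptydir-demo   # expected: 644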
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:51:29.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-0f4d47d8-ce75-496b-be69-5a1da41b8292
STEP: Creating a pod to test consume configMaps
Jan  6 13:51:30.075: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-946e4732-5cbc-4085-9c59-5eb58c2b25fd" in namespace "projected-901" to be "success or failure"
Jan  6 13:51:30.083: INFO: Pod "pod-projected-configmaps-946e4732-5cbc-4085-9c59-5eb58c2b25fd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.227119ms
Jan  6 13:51:32.100: INFO: Pod "pod-projected-configmaps-946e4732-5cbc-4085-9c59-5eb58c2b25fd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024968732s
Jan  6 13:51:34.109: INFO: Pod "pod-projected-configmaps-946e4732-5cbc-4085-9c59-5eb58c2b25fd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034072064s
Jan  6 13:51:36.119: INFO: Pod "pod-projected-configmaps-946e4732-5cbc-4085-9c59-5eb58c2b25fd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043260751s
Jan  6 13:51:38.128: INFO: Pod "pod-projected-configmaps-946e4732-5cbc-4085-9c59-5eb58c2b25fd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052508166s
STEP: Saw pod success
Jan  6 13:51:38.128: INFO: Pod "pod-projected-configmaps-946e4732-5cbc-4085-9c59-5eb58c2b25fd" satisfied condition "success or failure"
Jan  6 13:51:38.135: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-946e4732-5cbc-4085-9c59-5eb58c2b25fd container projected-configmap-volume-test: 
STEP: delete the pod
Jan  6 13:51:38.223: INFO: Waiting for pod pod-projected-configmaps-946e4732-5cbc-4085-9c59-5eb58c2b25fd to disappear
Jan  6 13:51:38.275: INFO: Pod pod-projected-configmaps-946e4732-5cbc-4085-9c59-5eb58c2b25fd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:51:38.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-901" for this suite.
Jan  6 13:51:44.313: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:51:44.421: INFO: namespace projected-901 deletion completed in 6.13645317s

• [SLOW TEST:14.528 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
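For context, "consumable in multiple volumes in the same pod" means one ConfigMap surfaced through two projected volumes at different mount points. A sketch with illustrative names:

kubectl create configmap demo-cm --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  restartPolicy: Never
  containers:
  - name: projected-configmap-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/cm-a/data-1 /etc/cm-b/data-1"]
    volumeMounts:
    - name: cm-a
      mountPath: /etc/cm-a
    - name: cm-b
      mountPath: /etc/cm-b
  volumes:
  - name: cm-a
    projected:
      sources:
      - configMap:
          name: demo-cm
  - name: cm-b
    projected:
      sources:
      - configMap:
          name: demo-cm
EOF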
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:51:44.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-a265f08a-d27a-4742-8b8e-1a9cb402351d
STEP: Creating a pod to test consume secrets
Jan  6 13:51:44.623: INFO: Waiting up to 5m0s for pod "pod-secrets-56d70b8c-f587-4e6a-a505-9d24ec2ff9b9" in namespace "secrets-4819" to be "success or failure"
Jan  6 13:51:44.638: INFO: Pod "pod-secrets-56d70b8c-f587-4e6a-a505-9d24ec2ff9b9": Phase="Pending", Reason="", readiness=false. Elapsed: 15.02771ms
Jan  6 13:51:46.649: INFO: Pod "pod-secrets-56d70b8c-f587-4e6a-a505-9d24ec2ff9b9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02558317s
Jan  6 13:51:48.657: INFO: Pod "pod-secrets-56d70b8c-f587-4e6a-a505-9d24ec2ff9b9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033852715s
Jan  6 13:51:50.682: INFO: Pod "pod-secrets-56d70b8c-f587-4e6a-a505-9d24ec2ff9b9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059102576s
Jan  6 13:51:52.694: INFO: Pod "pod-secrets-56d70b8c-f587-4e6a-a505-9d24ec2ff9b9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070296316s
STEP: Saw pod success
Jan  6 13:51:52.694: INFO: Pod "pod-secrets-56d70b8c-f587-4e6a-a505-9d24ec2ff9b9" satisfied condition "success or failure"
Jan  6 13:51:52.701: INFO: Trying to get logs from node iruya-node pod pod-secrets-56d70b8c-f587-4e6a-a505-9d24ec2ff9b9 container secret-volume-test: 
STEP: delete the pod
Jan  6 13:51:52.833: INFO: Waiting for pod pod-secrets-56d70b8c-f587-4e6a-a505-9d24ec2ff9b9 to disappear
Jan  6 13:51:52.850: INFO: Pod pod-secrets-56d70b8c-f587-4e6a-a505-9d24ec2ff9b9 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:51:52.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4819" for this suite.
Jan  6 13:51:58.957: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:51:59.132: INFO: namespace secrets-4819 deletion completed in 6.266760019s

• [SLOW TEST:14.711 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
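For context, "with mappings" refers to the items list of a secret volume, which remaps a key to a chosen relative path (and optionally a per-file mode) instead of the key's own name. A sketch with illustrative names:

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-map-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox
    command: ["sh", "-c", "cat /etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
      items:
      - key: data-1
        path: new-path-data-1   # mapped path; without items the file would be named data-1
EOF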
SSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:51:59.133: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  6 13:51:59.194: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:52:16.179: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-4936" for this suite.
Jan  6 13:52:40.226: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:52:40.371: INFO: namespace init-container-4936 deletion completed in 24.170285995s

• [SLOW TEST:41.239 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
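For context, on a restartPolicy: Always pod the init containers must each run to completion, in order, before the regular containers start; the pod reports Pending until they do. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: busybox
    command: ["true"]
  - name: init2
    image: busybox
    command: ["true"]
  containers:
  - name: run1
    image: docker.io/library/nginx:1.14-alpine
EOF
kubectl get pod init-demo -o jsonpath='{.status.initContainerStatuses[*].state.terminated.reason}'   # Completed Completed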
SSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:52:40.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  6 13:52:40.465: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan  6 13:52:40.530: INFO: Number of nodes with available pods: 0
Jan  6 13:52:40.530: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan  6 13:52:40.615: INFO: Number of nodes with available pods: 0
Jan  6 13:52:40.615: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:41.628: INFO: Number of nodes with available pods: 0
Jan  6 13:52:41.628: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:42.624: INFO: Number of nodes with available pods: 0
Jan  6 13:52:42.624: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:43.632: INFO: Number of nodes with available pods: 0
Jan  6 13:52:43.632: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:44.624: INFO: Number of nodes with available pods: 0
Jan  6 13:52:44.624: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:45.626: INFO: Number of nodes with available pods: 0
Jan  6 13:52:45.626: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:46.625: INFO: Number of nodes with available pods: 0
Jan  6 13:52:46.625: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:47.624: INFO: Number of nodes with available pods: 1
Jan  6 13:52:47.624: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan  6 13:52:47.700: INFO: Number of nodes with available pods: 1
Jan  6 13:52:47.700: INFO: Number of running nodes: 0, number of available pods: 1
Jan  6 13:52:48.725: INFO: Number of nodes with available pods: 0
Jan  6 13:52:48.726: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan  6 13:52:48.807: INFO: Number of nodes with available pods: 0
Jan  6 13:52:48.808: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:49.814: INFO: Number of nodes with available pods: 0
Jan  6 13:52:49.814: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:50.815: INFO: Number of nodes with available pods: 0
Jan  6 13:52:50.815: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:51.821: INFO: Number of nodes with available pods: 0
Jan  6 13:52:51.821: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:52.817: INFO: Number of nodes with available pods: 0
Jan  6 13:52:52.817: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:53.820: INFO: Number of nodes with available pods: 0
Jan  6 13:52:53.820: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:54.817: INFO: Number of nodes with available pods: 0
Jan  6 13:52:54.817: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:55.819: INFO: Number of nodes with available pods: 0
Jan  6 13:52:55.819: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:56.822: INFO: Number of nodes with available pods: 0
Jan  6 13:52:56.823: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:57.821: INFO: Number of nodes with available pods: 0
Jan  6 13:52:57.821: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:58.816: INFO: Number of nodes with available pods: 0
Jan  6 13:52:58.816: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:52:59.823: INFO: Number of nodes with available pods: 0
Jan  6 13:52:59.823: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:53:00.817: INFO: Number of nodes with available pods: 0
Jan  6 13:53:00.817: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:53:01.815: INFO: Number of nodes with available pods: 0
Jan  6 13:53:01.815: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:53:02.816: INFO: Number of nodes with available pods: 0
Jan  6 13:53:02.816: INFO: Node iruya-node is running more than one daemon pod
Jan  6 13:53:03.817: INFO: Number of nodes with available pods: 1
Jan  6 13:53:03.817: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-425, will wait for the garbage collector to delete the pods
Jan  6 13:53:03.903: INFO: Deleting DaemonSet.extensions daemon-set took: 9.620114ms
Jan  6 13:53:04.204: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.619384ms
Jan  6 13:53:16.610: INFO: Number of nodes with available pods: 0
Jan  6 13:53:16.610: INFO: Number of running nodes: 0, number of available pods: 0
Jan  6 13:53:16.612: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-425/daemonsets","resourceVersion":"19527713"},"items":null}

Jan  6 13:53:16.614: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-425/pods","resourceVersion":"19527713"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:53:16.655: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-425" for this suite.
Jan  6 13:53:22.682: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:53:22.805: INFO: namespace daemonsets-425 deletion completed in 6.146050001s

• [SLOW TEST:42.434 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
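For context, everything in this test is label-driven: the DaemonSet carries a nodeSelector, so its pods follow node labels. Roughly, with an illustrative label key (the DaemonSet manifest itself is omitted):

kubectl label node iruya-node color=blue                 # selector matches: daemon pod is scheduled
kubectl label node iruya-node color=green --overwrite    # selector no longer matches: pod is removed
# Patching the DaemonSet's spec.template.spec.nodeSelector to color=green schedules it again,
# and switching spec.updateStrategy to RollingUpdate controls how later template changes roll out.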
SSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:53:22.806: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  6 13:53:23.030: INFO: Waiting up to 5m0s for pod "downward-api-a4e16603-ed7f-4d6b-a50b-c49ddb9b8330" in namespace "downward-api-7163" to be "success or failure"
Jan  6 13:53:23.056: INFO: Pod "downward-api-a4e16603-ed7f-4d6b-a50b-c49ddb9b8330": Phase="Pending", Reason="", readiness=false. Elapsed: 25.597872ms
Jan  6 13:53:25.068: INFO: Pod "downward-api-a4e16603-ed7f-4d6b-a50b-c49ddb9b8330": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037368108s
Jan  6 13:53:27.075: INFO: Pod "downward-api-a4e16603-ed7f-4d6b-a50b-c49ddb9b8330": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044770595s
Jan  6 13:53:29.088: INFO: Pod "downward-api-a4e16603-ed7f-4d6b-a50b-c49ddb9b8330": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057627745s
Jan  6 13:53:31.104: INFO: Pod "downward-api-a4e16603-ed7f-4d6b-a50b-c49ddb9b8330": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073899118s
Jan  6 13:53:33.112: INFO: Pod "downward-api-a4e16603-ed7f-4d6b-a50b-c49ddb9b8330": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.081764895s
STEP: Saw pod success
Jan  6 13:53:33.112: INFO: Pod "downward-api-a4e16603-ed7f-4d6b-a50b-c49ddb9b8330" satisfied condition "success or failure"
Jan  6 13:53:33.116: INFO: Trying to get logs from node iruya-node pod downward-api-a4e16603-ed7f-4d6b-a50b-c49ddb9b8330 container dapi-container: 
STEP: delete the pod
Jan  6 13:53:33.332: INFO: Waiting for pod downward-api-a4e16603-ed7f-4d6b-a50b-c49ddb9b8330 to disappear
Jan  6 13:53:33.339: INFO: Pod downward-api-a4e16603-ed7f-4d6b-a50b-c49ddb9b8330 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:53:33.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7163" for this suite.
Jan  6 13:53:39.372: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:53:39.525: INFO: namespace downward-api-7163 deletion completed in 6.17855253s

• [SLOW TEST:16.719 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
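For context, the downward API injects pod and node metadata through fieldRef; the host IP lives at field path status.hostIP. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
kubectl logs downward-demo   # prints the node's IP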
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:53:39.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan  6 13:53:57.700: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  6 13:53:57.709: INFO: Pod pod-with-prestop-http-hook still exists
Jan  6 13:53:59.709: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  6 13:53:59.715: INFO: Pod pod-with-prestop-http-hook still exists
Jan  6 13:54:01.709: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  6 13:54:01.724: INFO: Pod pod-with-prestop-http-hook still exists
Jan  6 13:54:03.709: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan  6 13:54:03.721: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:54:03.755: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-9772" for this suite.
Jan  6 13:54:25.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:54:25.988: INFO: namespace container-lifecycle-hook-9772 deletion completed in 22.227265311s

• [SLOW TEST:46.462 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
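For context, a preStop httpGet hook makes the kubelet issue the GET when pod deletion starts, before the container is signalled; the suite points the hook at a helper pod and then checks that the request arrived. A self-contained sketch of the wiring (here the hook targets the pod's own nginx on port 80; the e2e test instead sets host to its helper pod and path to /echo?msg=...):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      preStop:
        httpGet:
          path: /
          port: 80
          # host defaults to the pod's own IP
EOF
kubectl delete pod pod-with-prestop-http-hook   # deletion triggers the GET before SIGTERM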
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:54:25.989: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test hostPath mode
Jan  6 13:54:26.067: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-1744" to be "success or failure"
Jan  6 13:54:26.086: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 18.436683ms
Jan  6 13:54:28.096: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028609363s
Jan  6 13:54:30.112: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04433532s
Jan  6 13:54:32.120: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052521143s
Jan  6 13:54:34.148: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.080608039s
Jan  6 13:54:36.156: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.088241821s
STEP: Saw pod success
Jan  6 13:54:36.156: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan  6 13:54:36.158: INFO: Trying to get logs from node iruya-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan  6 13:54:36.415: INFO: Waiting for pod pod-host-path-test to disappear
Jan  6 13:54:36.428: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:54:36.428: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-1744" for this suite.
Jan  6 13:54:42.527: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:54:42.663: INFO: namespace hostpath-1744 deletion completed in 6.201711909s

• [SLOW TEST:16.674 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
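For context, the assertion is about the mode of the mounted hostPath directory as seen from inside the container. A minimal hostPath pod; the path and image are illustrative, and the suite's mounttest image is what actually prints the mode:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-host-path-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container-1
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/test-volume
      type: DirectoryOrCreate
EOF
kubectl logs pod-host-path-demo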
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:54:42.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Jan  6 13:54:42.792: INFO: Pod name pod-release: Found 0 pods out of 1
Jan  6 13:54:47.801: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:54:48.890: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4969" for this suite.
Jan  6 13:54:54.926: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:54:55.032: INFO: namespace replication-controller-4969 deletion completed in 6.133599104s

• [SLOW TEST:12.369 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
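For context, "released" means the ReplicationController stops owning a pod whose labels drift out of its selector: the orphaned pod keeps running, and the RC spawns a replacement to restore the count. Roughly, with a placeholder pod name:

kubectl get pods -l name=pod-release                          # the RC-managed pod
kubectl label pod <pod-name> name=not-matching --overwrite    # drift the label out of the selector
kubectl get pods                                              # old pod still runs; the RC created a replacement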
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:54:55.033: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1210
STEP: creating the pod
Jan  6 13:54:55.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6680'
Jan  6 13:54:55.614: INFO: stderr: ""
Jan  6 13:54:55.615: INFO: stdout: "pod/pause created\n"
Jan  6 13:54:55.615: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan  6 13:54:55.615: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-6680" to be "running and ready"
Jan  6 13:54:55.650: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 34.6632ms
Jan  6 13:54:57.666: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051573055s
Jan  6 13:54:59.721: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106479135s
Jan  6 13:55:01.735: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120549953s
Jan  6 13:55:03.751: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 8.135656217s
Jan  6 13:55:05.758: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 10.143302826s
Jan  6 13:55:05.758: INFO: Pod "pause" satisfied condition "running and ready"
Jan  6 13:55:05.758: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: adding the label testing-label with value testing-label-value to a pod
Jan  6 13:55:05.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-6680'
Jan  6 13:55:06.003: INFO: stderr: ""
Jan  6 13:55:06.003: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan  6 13:55:06.003: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6680'
Jan  6 13:55:06.104: INFO: stderr: ""
Jan  6 13:55:06.104: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan  6 13:55:06.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-6680'
Jan  6 13:55:06.210: INFO: stderr: ""
Jan  6 13:55:06.210: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan  6 13:55:06.211: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-6680'
Jan  6 13:55:06.337: INFO: stderr: ""
Jan  6 13:55:06.337: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          11s   \n"
[AfterEach] [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1217
STEP: using delete to clean up resources
Jan  6 13:55:06.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6680'
Jan  6 13:55:06.520: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  6 13:55:06.520: INFO: stdout: "pod \"pause\" force deleted\n"
Jan  6 13:55:06.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-6680'
Jan  6 13:55:06.660: INFO: stderr: "No resources found.\n"
Jan  6 13:55:06.660: INFO: stdout: ""
Jan  6 13:55:06.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-6680 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  6 13:55:06.793: INFO: stderr: ""
Jan  6 13:55:06.793: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:55:06.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6680" for this suite.
Jan  6 13:55:12.834: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:55:12.977: INFO: namespace kubectl-6680 deletion completed in 6.168522164s

• [SLOW TEST:17.944 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:55:12.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:55:21.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-666" for this suite.
Jan  6 13:55:27.256: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:55:27.446: INFO: namespace kubelet-test-666 deletion completed in 6.254947518s

• [SLOW TEST:14.469 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
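For context, a container whose command always fails ends up with state.terminated populated in the pod status, and the test asserts its reason field is set. With restartPolicy: Never this is easy to reproduce by hand:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: always-fails
spec:
  restartPolicy: Never
  containers:
  - name: bb
    image: busybox
    command: ["sh", "-c", "exit 1"]
EOF
kubectl get pod always-fails -o jsonpath='{.status.containerStatuses[0].state.terminated.reason}'   # Error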
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:55:27.447: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's command
Jan  6 13:55:27.539: INFO: Waiting up to 5m0s for pod "var-expansion-13ebc952-aaf9-4700-a9f8-b87a27b1ecdb" in namespace "var-expansion-145" to be "success or failure"
Jan  6 13:55:27.627: INFO: Pod "var-expansion-13ebc952-aaf9-4700-a9f8-b87a27b1ecdb": Phase="Pending", Reason="", readiness=false. Elapsed: 88.082804ms
Jan  6 13:55:29.642: INFO: Pod "var-expansion-13ebc952-aaf9-4700-a9f8-b87a27b1ecdb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102100232s
Jan  6 13:55:31.649: INFO: Pod "var-expansion-13ebc952-aaf9-4700-a9f8-b87a27b1ecdb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.109182573s
Jan  6 13:55:33.667: INFO: Pod "var-expansion-13ebc952-aaf9-4700-a9f8-b87a27b1ecdb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127344627s
Jan  6 13:55:35.677: INFO: Pod "var-expansion-13ebc952-aaf9-4700-a9f8-b87a27b1ecdb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.137453611s
STEP: Saw pod success
Jan  6 13:55:35.677: INFO: Pod "var-expansion-13ebc952-aaf9-4700-a9f8-b87a27b1ecdb" satisfied condition "success or failure"
Jan  6 13:55:35.698: INFO: Trying to get logs from node iruya-node pod var-expansion-13ebc952-aaf9-4700-a9f8-b87a27b1ecdb container dapi-container: 
STEP: delete the pod
Jan  6 13:55:35.762: INFO: Waiting for pod var-expansion-13ebc952-aaf9-4700-a9f8-b87a27b1ecdb to disappear
Jan  6 13:55:35.856: INFO: Pod var-expansion-13ebc952-aaf9-4700-a9f8-b87a27b1ecdb no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:55:35.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-145" for this suite.
Jan  6 13:55:41.896: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:55:42.040: INFO: namespace var-expansion-145 deletion completed in 6.175033266s

• [SLOW TEST:14.593 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
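For context, $(VAR) references in command and args are expanded by Kubernetes from the container's own env entries when the container starts; no shell is involved. A sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "test message"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]   # expanded by the kubelet, not a shell
EOF
kubectl logs var-expansion-demo   # test message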
SSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:55:42.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service multi-endpoint-test in namespace services-5088
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5088 to expose endpoints map[]
Jan  6 13:55:42.276: INFO: Get endpoints failed (69.03439ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Jan  6 13:55:43.300: INFO: successfully validated that service multi-endpoint-test in namespace services-5088 exposes endpoints map[] (1.092831798s elapsed)
STEP: Creating pod pod1 in namespace services-5088
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5088 to expose endpoints map[pod1:[100]]
Jan  6 13:55:47.605: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.278575278s elapsed, will retry)
Jan  6 13:55:49.645: INFO: successfully validated that service multi-endpoint-test in namespace services-5088 exposes endpoints map[pod1:[100]] (6.318780365s elapsed)
STEP: Creating pod pod2 in namespace services-5088
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5088 to expose endpoints map[pod1:[100] pod2:[101]]
Jan  6 13:55:54.376: INFO: Unexpected endpoints: found map[b07e7db9-77d1-4d69-9fee-76c7d02a5926:[100]], expected map[pod1:[100] pod2:[101]] (4.721491988s elapsed, will retry)
Jan  6 13:55:56.405: INFO: successfully validated that service multi-endpoint-test in namespace services-5088 exposes endpoints map[pod1:[100] pod2:[101]] (6.750813141s elapsed)
STEP: Deleting pod pod1 in namespace services-5088
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5088 to expose endpoints map[pod2:[101]]
Jan  6 13:55:57.484: INFO: successfully validated that service multi-endpoint-test in namespace services-5088 exposes endpoints map[pod2:[101]] (1.071118288s elapsed)
STEP: Deleting pod pod2 in namespace services-5088
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-5088 to expose endpoints map[]
Jan  6 13:55:58.548: INFO: successfully validated that service multi-endpoint-test in namespace services-5088 exposes endpoints map[] (1.056393461s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:55:59.742: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5088" for this suite.
Jan  6 13:56:21.903: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:56:22.050: INFO: namespace services-5088 deletion completed in 22.291702464s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:40.010 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
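For context, the [100] and [101] in the endpoint maps above are container target ports: the service publishes two named ports, each backed by a different pod, and the endpoints object tracks which pod IPs serve which port. A sketch of the service shape (selector and port names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: multi-endpoint-test
spec:
  selector:
    app: multi-endpoint-test
  ports:
  - name: portname1
    port: 80
    targetPort: 100
  - name: portname2
    port: 81
    targetPort: 101
EOF
kubectl get endpoints multi-endpoint-test -o yaml   # subsets show pod IPs per target port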
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:56:22.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-a7acc8b0-f494-4dd5-b894-4ff2b412a7be
STEP: Creating a pod to test consume secrets
Jan  6 13:56:22.206: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-017eba43-7a5e-47a7-83cb-f14ffeed4547" in namespace "projected-9788" to be "success or failure"
Jan  6 13:56:22.211: INFO: Pod "pod-projected-secrets-017eba43-7a5e-47a7-83cb-f14ffeed4547": Phase="Pending", Reason="", readiness=false. Elapsed: 4.835056ms
Jan  6 13:56:24.222: INFO: Pod "pod-projected-secrets-017eba43-7a5e-47a7-83cb-f14ffeed4547": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016139155s
Jan  6 13:56:26.236: INFO: Pod "pod-projected-secrets-017eba43-7a5e-47a7-83cb-f14ffeed4547": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029939823s
Jan  6 13:56:28.249: INFO: Pod "pod-projected-secrets-017eba43-7a5e-47a7-83cb-f14ffeed4547": Phase="Pending", Reason="", readiness=false. Elapsed: 6.043267584s
Jan  6 13:56:30.261: INFO: Pod "pod-projected-secrets-017eba43-7a5e-47a7-83cb-f14ffeed4547": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055748193s
Jan  6 13:56:32.276: INFO: Pod "pod-projected-secrets-017eba43-7a5e-47a7-83cb-f14ffeed4547": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.070008488s
STEP: Saw pod success
Jan  6 13:56:32.276: INFO: Pod "pod-projected-secrets-017eba43-7a5e-47a7-83cb-f14ffeed4547" satisfied condition "success or failure"
Jan  6 13:56:32.280: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-017eba43-7a5e-47a7-83cb-f14ffeed4547 container projected-secret-volume-test: 
STEP: delete the pod
Jan  6 13:56:32.441: INFO: Waiting for pod pod-projected-secrets-017eba43-7a5e-47a7-83cb-f14ffeed4547 to disappear
Jan  6 13:56:32.448: INFO: Pod pod-projected-secrets-017eba43-7a5e-47a7-83cb-f14ffeed4547 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 13:56:32.449: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9788" for this suite.
Jan  6 13:56:38.500: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 13:56:38.675: INFO: namespace projected-9788 deletion completed in 6.218479052s

• [SLOW TEST:16.625 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
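The "Waiting up to 5m0s ... to be \"success or failure\"" lines come from a fixed-interval poll on the pod phase. A sketch of that loop, assuming a hypothetical getPhase callback in place of a real API client:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForSuccessOrFailure polls getPhase every interval until the pod reaches
// Succeeded or Failed, or the timeout elapses, echoing elapsed time the way
// the e2e log does.
func waitForSuccessOrFailure(getPhase func() string, interval, timeout time.Duration) (string, error) {
	start := time.Now()
	for {
		phase := getPhase()
		fmt.Printf("Phase=%q. Elapsed: %s\n", phase, time.Since(start))
		if phase == "Succeeded" || phase == "Failed" {
			return phase, nil
		}
		if time.Since(start) > timeout {
			return phase, errors.New("timed out waiting for pod")
		}
		time.Sleep(interval)
	}
}

func main() {
	// Simulated phase sequence: two Pending polls, then Succeeded.
	phases := []string{"Pending", "Pending", "Succeeded"}
	i := 0
	getPhase := func() string { p := phases[i%len(phases)]; i++; return p }
	if phase, err := waitForSuccessOrFailure(getPhase, 10*time.Millisecond, time.Second); err == nil {
		fmt.Println("satisfied condition:", phase)
	}
}
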
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 13:56:38.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  6 13:59:39.977: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 13:59:40.022: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 13:59:42.023: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 13:59:42.039: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 13:59:44.023: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 13:59:44.051: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 13:59:46.023: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 13:59:46.055: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 13:59:48.022: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 13:59:48.032: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 13:59:50.023: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 13:59:50.054: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 13:59:52.022: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 13:59:52.032: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 13:59:54.022: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 13:59:54.034: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 13:59:56.022: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 13:59:56.029: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 13:59:58.022: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 13:59:58.034: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 14:00:00.022: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 14:00:00.037: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 14:00:02.022: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 14:00:02.042: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 14:00:04.022: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 14:00:04.030: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 14:00:06.022: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 14:00:06.031: INFO: Pod pod-with-poststart-exec-hook still exists
Jan  6 14:00:08.022: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan  6 14:00:08.062: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:00:08.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7082" for this suite.
Jan  6 14:00:32.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:00:32.315: INFO: namespace container-lifecycle-hook-7082 deletion completed in 24.242571062s

• [SLOW TEST:233.640 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
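For reference, the pod-with-poststart-exec-hook pod declares its hook on the container spec. A sketch against the v1.15-era k8s.io/api/core/v1 types this suite was built with, where the hook handler type is still named Handler (renamed LifecycleHandler in later releases); the image and command here are illustrative, not read from the test:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// A container with a PostStart exec hook, roughly the shape of what the
	// pod above declares.
	c := corev1.Container{
		Name:  "pod-with-poststart-exec-hook",
		Image: "busybox", // illustrative
		Lifecycle: &corev1.Lifecycle{
			PostStart: &corev1.Handler{
				Exec: &corev1.ExecAction{
					Command: []string{"sh", "-c", "echo poststart > /tmp/hook"}, // illustrative
				},
			},
		},
	}
	fmt.Println(c.Lifecycle.PostStart.Exec.Command)
}
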
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:00:32.317: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-map-a171bdd0-736a-4b15-8562-3b0cc203ec2c
STEP: Creating a pod to test consume configMaps
Jan  6 14:00:32.498: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-058f5037-806d-49da-bb51-6b692a9a31cd" in namespace "projected-16" to be "success or failure"
Jan  6 14:00:32.526: INFO: Pod "pod-projected-configmaps-058f5037-806d-49da-bb51-6b692a9a31cd": Phase="Pending", Reason="", readiness=false. Elapsed: 28.226168ms
Jan  6 14:00:34.543: INFO: Pod "pod-projected-configmaps-058f5037-806d-49da-bb51-6b692a9a31cd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044804865s
Jan  6 14:00:36.576: INFO: Pod "pod-projected-configmaps-058f5037-806d-49da-bb51-6b692a9a31cd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077414766s
Jan  6 14:00:38.597: INFO: Pod "pod-projected-configmaps-058f5037-806d-49da-bb51-6b692a9a31cd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09859537s
Jan  6 14:00:40.605: INFO: Pod "pod-projected-configmaps-058f5037-806d-49da-bb51-6b692a9a31cd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106511431s
STEP: Saw pod success
Jan  6 14:00:40.605: INFO: Pod "pod-projected-configmaps-058f5037-806d-49da-bb51-6b692a9a31cd" satisfied condition "success or failure"
Jan  6 14:00:40.609: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-058f5037-806d-49da-bb51-6b692a9a31cd container projected-configmap-volume-test: 
STEP: delete the pod
Jan  6 14:00:40.767: INFO: Waiting for pod pod-projected-configmaps-058f5037-806d-49da-bb51-6b692a9a31cd to disappear
Jan  6 14:00:40.780: INFO: Pod pod-projected-configmaps-058f5037-806d-49da-bb51-6b692a9a31cd no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:00:40.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-16" for this suite.
Jan  6 14:00:46.815: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:00:47.010: INFO: namespace projected-16 deletion completed in 6.218924732s

• [SLOW TEST:14.693 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
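The "with mappings" variant projects configMap keys onto explicit paths instead of using the key as the filename. A sketch of that Items list against the same v1.15-era corev1 types; the key and path values are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Project one configMap key to a nested path inside the volume, so the
	// container reads .../path/to/data-2 instead of .../data-1.
	src := corev1.VolumeProjection{
		ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{
				Name: "projected-configmap-test-volume-map-a171bdd0-736a-4b15-8562-3b0cc203ec2c",
			},
			Items: []corev1.KeyToPath{
				{Key: "data-1", Path: "path/to/data-2"}, // illustrative mapping
			},
		},
	}
	fmt.Println(src.ConfigMap.Items[0].Path)
}
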
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:00:47.011: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-e27784ff-9165-42a1-ad00-8aabd6bc385e
STEP: Creating configMap with name cm-test-opt-upd-51e3109d-9767-43e7-b6df-e793a44cb293
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-e27784ff-9165-42a1-ad00-8aabd6bc385e
STEP: Updating configmap cm-test-opt-upd-51e3109d-9767-43e7-b6df-e793a44cb293
STEP: Creating configMap with name cm-test-opt-create-c7641241-a3df-4ae4-8878-ce1640b1a9cc
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:02:21.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4370" for this suite.
Jan  6 14:02:43.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:02:43.516: INFO: namespace projected-4370 deletion completed in 22.21456981s

• [SLOW TEST:116.506 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
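"waiting to observe update in volume" reflects the kubelet's periodic sync of projected volumes: a created, updated, or deleted configMap only shows up in the mounted files after the next sync, so the test polls rather than reading once. A sketch of the file-content poll that implies; the path and value are illustrative:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForContent polls a projected file until it contains want or the
// timeout elapses.
func waitForContent(path, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		b, err := os.ReadFile(path)
		if err == nil && string(b) == want {
			return nil
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("timed out waiting for %s to contain %q", path, want)
}

func main() {
	if err := waitForContent("/tmp/projected/data-1", "value-2", 5*time.Second); err != nil {
		fmt.Println(err)
	}
}
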
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:02:43.517: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1516
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  6 14:02:43.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-rc --image=docker.io/library/nginx:1.14-alpine --generator=run/v1 --namespace=kubectl-5394'
Jan  6 14:02:46.375: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  6 14:02:46.375: INFO: stdout: "replicationcontroller/e2e-test-nginx-rc created\n"
STEP: verifying the rc e2e-test-nginx-rc was created
STEP: rolling-update to same image controller
Jan  6 14:02:46.450: INFO: scanned /root for discovery docs: 
Jan  6 14:02:46.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-nginx-rc --update-period=1s --image=docker.io/library/nginx:1.14-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5394'
Jan  6 14:03:06.718: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  6 14:03:06.718: INFO: stdout: "Created e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f\nScaling up e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f from 0 to 1, scaling down e2e-test-nginx-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f up to 1\nScaling e2e-test-nginx-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-nginx-rc\nRenaming e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f to e2e-test-nginx-rc\nreplicationcontroller/e2e-test-nginx-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-nginx-rc pods to come up.
Jan  6 14:03:06.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:03:06.869: INFO: stderr: ""
Jan  6 14:03:06.869: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:03:11.869: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:03:12.099: INFO: stderr: ""
Jan  6 14:03:12.099: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:03:17.100: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:03:17.277: INFO: stderr: ""
Jan  6 14:03:17.277: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:03:22.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:03:22.452: INFO: stderr: ""
Jan  6 14:03:22.452: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:03:27.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:03:27.640: INFO: stderr: ""
Jan  6 14:03:27.641: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:03:32.642: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:03:32.840: INFO: stderr: ""
Jan  6 14:03:32.840: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:03:37.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:03:38.003: INFO: stderr: ""
Jan  6 14:03:38.003: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:03:43.004: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:03:43.159: INFO: stderr: ""
Jan  6 14:03:43.160: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:03:48.160: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:03:48.344: INFO: stderr: ""
Jan  6 14:03:48.345: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:03:53.345: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:03:53.523: INFO: stderr: ""
Jan  6 14:03:53.523: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:03:58.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:03:58.698: INFO: stderr: ""
Jan  6 14:03:58.698: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:04:03.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:04:03.891: INFO: stderr: ""
Jan  6 14:04:03.891: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:04:08.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:04:09.070: INFO: stderr: ""
Jan  6 14:04:09.070: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:04:14.070: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:04:14.220: INFO: stderr: ""
Jan  6 14:04:14.220: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:04:19.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:04:19.442: INFO: stderr: ""
Jan  6 14:04:19.442: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:04:24.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:04:24.629: INFO: stderr: ""
Jan  6 14:04:24.629: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:04:29.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:04:29.831: INFO: stderr: ""
Jan  6 14:04:29.831: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:04:34.831: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:04:34.980: INFO: stderr: ""
Jan  6 14:04:34.980: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
STEP: Replicas for run=e2e-test-nginx-rc: expected=1 actual=2
Jan  6 14:04:39.980: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:04:40.143: INFO: stderr: ""
Jan  6 14:04:40.144: INFO: stdout: "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw "
Jan  6 14:04:40.144: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-nginx-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5394'
Jan  6 14:04:40.245: INFO: stderr: ""
Jan  6 14:04:40.245: INFO: stdout: "true"
Jan  6 14:04:40.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-nginx-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5394'
Jan  6 14:04:40.376: INFO: stderr: ""
Jan  6 14:04:40.376: INFO: stdout: "docker.io/library/nginx:1.14-alpine"
Jan  6 14:04:40.377: INFO: e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw is verified up and running
[AfterEach] [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1522
Jan  6 14:04:40.377: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-nginx-rc --namespace=kubectl-5394'
Jan  6 14:04:40.513: INFO: stderr: ""
Jan  6 14:04:40.513: INFO: stdout: "replicationcontroller \"e2e-test-nginx-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:04:40.513: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5394" for this suite.
Jan  6 14:05:02.582: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:05:02.714: INFO: namespace kubectl-5394 deletion completed in 22.156111409s

• [SLOW TEST:139.197 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
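The repeated "expected=1 actual=2" lines come from splitting the template output above into pod names and re-polling until the old controller's pod is reaped. A sketch of that count check on the stdout shown in the log:

package main

import (
	"fmt"
	"strings"
)

// runningReplicas splits kubectl's space-separated template output into pod
// names; the test re-polls until len(pods) matches the expected replica count.
func runningReplicas(stdout string) []string {
	return strings.Fields(stdout)
}

func main() {
	stdout := "e2e-test-nginx-rc-550e805c2f045638817203dcd63e376f-6dnrw e2e-test-nginx-rc-wkcdg "
	pods := runningReplicas(stdout)
	fmt.Printf("expected=1 actual=%d pods=%v\n", len(pods), pods)
}
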
SSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:05:02.714: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  6 14:05:02.825: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a7756041-7321-42e1-80f8-8ab1a54fde6a" in namespace "downward-api-6348" to be "success or failure"
Jan  6 14:05:02.834: INFO: Pod "downwardapi-volume-a7756041-7321-42e1-80f8-8ab1a54fde6a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.803258ms
Jan  6 14:05:04.899: INFO: Pod "downwardapi-volume-a7756041-7321-42e1-80f8-8ab1a54fde6a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074369175s
Jan  6 14:05:06.926: INFO: Pod "downwardapi-volume-a7756041-7321-42e1-80f8-8ab1a54fde6a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.101262623s
Jan  6 14:05:08.934: INFO: Pod "downwardapi-volume-a7756041-7321-42e1-80f8-8ab1a54fde6a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108559632s
Jan  6 14:05:10.946: INFO: Pod "downwardapi-volume-a7756041-7321-42e1-80f8-8ab1a54fde6a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.120714094s
STEP: Saw pod success
Jan  6 14:05:10.946: INFO: Pod "downwardapi-volume-a7756041-7321-42e1-80f8-8ab1a54fde6a" satisfied condition "success or failure"
Jan  6 14:05:10.950: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-a7756041-7321-42e1-80f8-8ab1a54fde6a container client-container: 
STEP: delete the pod
Jan  6 14:05:11.070: INFO: Waiting for pod downwardapi-volume-a7756041-7321-42e1-80f8-8ab1a54fde6a to disappear
Jan  6 14:05:11.075: INFO: Pod downwardapi-volume-a7756041-7321-42e1-80f8-8ab1a54fde6a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:05:11.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6348" for this suite.
Jan  6 14:05:17.149: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:05:17.303: INFO: namespace downward-api-6348 deletion completed in 6.222951247s

• [SLOW TEST:14.589 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
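The memory-limit file the container reads is populated from a resourceFieldRef, which reports limits.memory divided by a divisor. A small arithmetic sketch, assuming an illustrative 64Mi limit (not read from the test):

package main

import "fmt"

func main() {
	const limitBytes = 64 * 1024 * 1024 // assumed 64Mi limit
	const mi = 1024 * 1024

	fmt.Println(limitBytes) // what the container sees with divisor "1" (bytes)
	// With divisor "1Mi" the file holds whole mebibytes; 64Mi divides evenly,
	// so rounding direction doesn't matter for this example.
	fmt.Println((limitBytes + mi - 1) / mi) // 64
}
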
SSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:05:17.304: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:05:17.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-831" for this suite.
Jan  6 14:05:23.607: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:05:23.762: INFO: namespace services-831 deletion completed in 6.182952235s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:6.458 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
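This test finishes quickly because it essentially just inspects the built-in kubernetes service in the default namespace and asserts it serves https on port 443. A sketch of that assertion with a stand-in servicePort type:

package main

import "fmt"

// servicePort stands in for the fields of a Kubernetes ServicePort that the
// check cares about.
type servicePort struct {
	Name string
	Port int
}

func hasSecurePort(ports []servicePort) bool {
	for _, p := range ports {
		if p.Name == "https" && p.Port == 443 {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(hasSecurePort([]servicePort{{Name: "https", Port: 443}})) // true
}
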
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:05:23.763: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan  6 14:05:31.979: INFO: &Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:send-events-b8ece898-6d66-48c3-9e36-13247d282792,GenerateName:,Namespace:events-6906,SelfLink:/api/v1/namespaces/events-6906/pods/send-events-b8ece898-6d66-48c3-9e36-13247d282792,UID:fe5b0ba1-1b71-4266-9e36-b7e7dc8576c8,ResourceVersion:19529231,Generation:0,CreationTimestamp:2020-01-06 14:05:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: foo,time: 935674118,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-4sfsw {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-4sfsw,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{p gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 [] []  [{ 0 80 TCP }] [] [] {map[] map[]} [{default-token-4sfsw true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026d0cb0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026d0d70}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 14:05:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 14:05:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 14:05:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 14:05:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-06 14:05:23 +0000 UTC,ContainerStatuses:[{p {nil ContainerStateRunning{StartedAt:2020-01-06 14:05:30 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1 docker-pullable://gcr.io/kubernetes-e2e-test-images/serve-hostname@sha256:bab70473a6d8ef65a22625dc9a1b0f0452e811530fdbe77e4408523460177ff1 docker://073029b6fbe630dc754df1d187c6c0dd6588ea603844164a9a648735ecd4d8d6}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}

STEP: checking for scheduler event about the pod
Jan  6 14:05:33.985: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan  6 14:05:35.995: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:05:36.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6906" for this suite.
Jan  6 14:06:18.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:06:18.231: INFO: namespace events-6906 deletion completed in 42.171367882s

• [SLOW TEST:54.469 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
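The two "Saw ... event for our pod" lines correspond to filtering the pod's events by reporting component. A sketch with a stand-in event type; the component names follow the usual default-scheduler and kubelet sources:

package main

import "fmt"

// event stands in for the fields the check inspects on a Kubernetes Event.
type event struct {
	Source string // reporting component, e.g. "default-scheduler" or "kubelet"
	Reason string
}

// sawFrom reports whether any event was emitted by the given component.
func sawFrom(events []event, component string) bool {
	for _, e := range events {
		if e.Source == component {
			return true
		}
	}
	return false
}

func main() {
	events := []event{
		{Source: "default-scheduler", Reason: "Scheduled"},
		{Source: "kubelet", Reason: "Started"},
	}
	fmt.Println(sawFrom(events, "default-scheduler"), sawFrom(events, "kubelet"))
}
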
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:06:18.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-f93fa800-66bc-4149-aab0-842eac5c74b7
STEP: Creating a pod to test consume secrets
Jan  6 14:06:18.340: INFO: Waiting up to 5m0s for pod "pod-secrets-d6b5be3a-9aa5-4048-9c40-94f2cdc20e28" in namespace "secrets-187" to be "success or failure"
Jan  6 14:06:18.356: INFO: Pod "pod-secrets-d6b5be3a-9aa5-4048-9c40-94f2cdc20e28": Phase="Pending", Reason="", readiness=false. Elapsed: 15.252929ms
Jan  6 14:06:20.366: INFO: Pod "pod-secrets-d6b5be3a-9aa5-4048-9c40-94f2cdc20e28": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025070211s
Jan  6 14:06:22.372: INFO: Pod "pod-secrets-d6b5be3a-9aa5-4048-9c40-94f2cdc20e28": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031127942s
Jan  6 14:06:24.382: INFO: Pod "pod-secrets-d6b5be3a-9aa5-4048-9c40-94f2cdc20e28": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041992621s
Jan  6 14:06:26.441: INFO: Pod "pod-secrets-d6b5be3a-9aa5-4048-9c40-94f2cdc20e28": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.100828156s
STEP: Saw pod success
Jan  6 14:06:26.441: INFO: Pod "pod-secrets-d6b5be3a-9aa5-4048-9c40-94f2cdc20e28" satisfied condition "success or failure"
Jan  6 14:06:26.446: INFO: Trying to get logs from node iruya-node pod pod-secrets-d6b5be3a-9aa5-4048-9c40-94f2cdc20e28 container secret-volume-test: 
STEP: delete the pod
Jan  6 14:06:26.622: INFO: Waiting for pod pod-secrets-d6b5be3a-9aa5-4048-9c40-94f2cdc20e28 to disappear
Jan  6 14:06:26.633: INFO: Pod pod-secrets-d6b5be3a-9aa5-4048-9c40-94f2cdc20e28 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:06:26.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-187" for this suite.
Jan  6 14:06:32.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:06:32.757: INFO: namespace secrets-187 deletion completed in 6.117400474s

• [SLOW TEST:14.525 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
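The non-root defaultMode/fsGroup case hinges on permission bits: a group-readable mode plus an fsGroup that puts the container user in the file's group. A sketch of the mode reasoning; 0440 is illustrative, not read from the test:

package main

import (
	"fmt"
	"os"
)

func main() {
	// With defaultMode 0440 the secret file is readable by owner and group
	// only; a non-root user can still read it because fsGroup makes the
	// container user a member of the file's group.
	mode := os.FileMode(0440)
	fmt.Println(mode)           // -r--r-----
	fmt.Println(mode&0040 != 0) // group-readable: true
	fmt.Println(mode&0004 != 0) // world-readable: false
}
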
SSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:06:32.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on node default medium
Jan  6 14:06:32.851: INFO: Waiting up to 5m0s for pod "pod-21e479da-a97c-4002-a0fe-49a2b97b7e8d" in namespace "emptydir-1586" to be "success or failure"
Jan  6 14:06:32.896: INFO: Pod "pod-21e479da-a97c-4002-a0fe-49a2b97b7e8d": Phase="Pending", Reason="", readiness=false. Elapsed: 44.897744ms
Jan  6 14:06:34.917: INFO: Pod "pod-21e479da-a97c-4002-a0fe-49a2b97b7e8d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066493969s
Jan  6 14:06:36.929: INFO: Pod "pod-21e479da-a97c-4002-a0fe-49a2b97b7e8d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.078017786s
Jan  6 14:06:38.939: INFO: Pod "pod-21e479da-a97c-4002-a0fe-49a2b97b7e8d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088513236s
Jan  6 14:06:40.945: INFO: Pod "pod-21e479da-a97c-4002-a0fe-49a2b97b7e8d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094730325s
STEP: Saw pod success
Jan  6 14:06:40.945: INFO: Pod "pod-21e479da-a97c-4002-a0fe-49a2b97b7e8d" satisfied condition "success or failure"
Jan  6 14:06:40.952: INFO: Trying to get logs from node iruya-node pod pod-21e479da-a97c-4002-a0fe-49a2b97b7e8d container test-container: 
STEP: delete the pod
Jan  6 14:06:41.256: INFO: Waiting for pod pod-21e479da-a97c-4002-a0fe-49a2b97b7e8d to disappear
Jan  6 14:06:41.265: INFO: Pod pod-21e479da-a97c-4002-a0fe-49a2b97b7e8d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:06:41.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1586" for this suite.
Jan  6 14:06:47.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:06:47.464: INFO: namespace emptydir-1586 deletion completed in 6.192334705s

• [SLOW TEST:14.706 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
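The mode check itself runs inside the test container against the volume mount. The equivalent check in Go, with /tmp standing in for the mount path and 0777 as an illustrative expected mode (the exact expectation lives in the test source):

package main

import (
	"fmt"
	"os"
)

func main() {
	info, err := os.Stat("/tmp") // stand-in for the emptyDir mount path
	if err != nil {
		fmt.Println(err)
		return
	}
	const want = os.FileMode(0777) // illustrative expected permissions
	fmt.Printf("got %v, want %v\n", info.Mode().Perm(), want)
}
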
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:06:47.464: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-42024373-0878-42f1-8541-72ad8c78e2f1
STEP: Creating a pod to test consume configMaps
Jan  6 14:06:47.669: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4d2fcb5b-c1aa-43c7-bb56-f86d53c05a78" in namespace "projected-346" to be "success or failure"
Jan  6 14:06:47.690: INFO: Pod "pod-projected-configmaps-4d2fcb5b-c1aa-43c7-bb56-f86d53c05a78": Phase="Pending", Reason="", readiness=false. Elapsed: 21.474593ms
Jan  6 14:06:49.702: INFO: Pod "pod-projected-configmaps-4d2fcb5b-c1aa-43c7-bb56-f86d53c05a78": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032806906s
Jan  6 14:06:51.718: INFO: Pod "pod-projected-configmaps-4d2fcb5b-c1aa-43c7-bb56-f86d53c05a78": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049311988s
Jan  6 14:06:53.728: INFO: Pod "pod-projected-configmaps-4d2fcb5b-c1aa-43c7-bb56-f86d53c05a78": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059035255s
Jan  6 14:06:55.736: INFO: Pod "pod-projected-configmaps-4d2fcb5b-c1aa-43c7-bb56-f86d53c05a78": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06692703s
STEP: Saw pod success
Jan  6 14:06:55.736: INFO: Pod "pod-projected-configmaps-4d2fcb5b-c1aa-43c7-bb56-f86d53c05a78" satisfied condition "success or failure"
Jan  6 14:06:55.740: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-4d2fcb5b-c1aa-43c7-bb56-f86d53c05a78 container projected-configmap-volume-test: 
STEP: delete the pod
Jan  6 14:06:55.807: INFO: Waiting for pod pod-projected-configmaps-4d2fcb5b-c1aa-43c7-bb56-f86d53c05a78 to disappear
Jan  6 14:06:55.819: INFO: Pod pod-projected-configmaps-4d2fcb5b-c1aa-43c7-bb56-f86d53c05a78 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:06:55.819: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-346" for this suite.
Jan  6 14:07:01.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:07:02.184: INFO: namespace projected-346 deletion completed in 6.287855658s

• [SLOW TEST:14.720 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:07:02.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  6 14:07:02.385: INFO: Waiting up to 5m0s for pod "downwardapi-volume-81d450a0-87e0-4ee4-ba9f-0034d8195542" in namespace "projected-6408" to be "success or failure"
Jan  6 14:07:02.392: INFO: Pod "downwardapi-volume-81d450a0-87e0-4ee4-ba9f-0034d8195542": Phase="Pending", Reason="", readiness=false. Elapsed: 6.858754ms
Jan  6 14:07:04.403: INFO: Pod "downwardapi-volume-81d450a0-87e0-4ee4-ba9f-0034d8195542": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018164931s
Jan  6 14:07:06.425: INFO: Pod "downwardapi-volume-81d450a0-87e0-4ee4-ba9f-0034d8195542": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039948638s
Jan  6 14:07:08.449: INFO: Pod "downwardapi-volume-81d450a0-87e0-4ee4-ba9f-0034d8195542": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06362817s
Jan  6 14:07:10.462: INFO: Pod "downwardapi-volume-81d450a0-87e0-4ee4-ba9f-0034d8195542": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077025635s
STEP: Saw pod success
Jan  6 14:07:10.462: INFO: Pod "downwardapi-volume-81d450a0-87e0-4ee4-ba9f-0034d8195542" satisfied condition "success or failure"
Jan  6 14:07:10.469: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-81d450a0-87e0-4ee4-ba9f-0034d8195542 container client-container: 
STEP: delete the pod
Jan  6 14:07:10.673: INFO: Waiting for pod downwardapi-volume-81d450a0-87e0-4ee4-ba9f-0034d8195542 to disappear
Jan  6 14:07:10.686: INFO: Pod downwardapi-volume-81d450a0-87e0-4ee4-ba9f-0034d8195542 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:07:10.686: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6408" for this suite.
Jan  6 14:07:18.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:07:18.911: INFO: namespace projected-6408 deletion completed in 8.219182486s

• [SLOW TEST:16.727 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
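The cpu-request file is likewise populated via resourceFieldRef. A sketch using resource.Quantity from k8s.io/apimachinery, the type this arithmetic is done with; the 500m request is hypothetical:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	// With a 500m cpu request and divisor "1m", the downward API file
	// reads "500".
	q := resource.MustParse("500m")
	fmt.Println(q.MilliValue()) // 500
}
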
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:07:18.911: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan  6 14:07:19.096: INFO: Waiting up to 5m0s for pod "pod-a14314d1-af16-4ee6-ae32-b8565fb1c1bc" in namespace "emptydir-6916" to be "success or failure"
Jan  6 14:07:19.105: INFO: Pod "pod-a14314d1-af16-4ee6-ae32-b8565fb1c1bc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.962233ms
Jan  6 14:07:21.113: INFO: Pod "pod-a14314d1-af16-4ee6-ae32-b8565fb1c1bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016977977s
Jan  6 14:07:23.130: INFO: Pod "pod-a14314d1-af16-4ee6-ae32-b8565fb1c1bc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033766917s
Jan  6 14:07:25.153: INFO: Pod "pod-a14314d1-af16-4ee6-ae32-b8565fb1c1bc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056805584s
Jan  6 14:07:27.170: INFO: Pod "pod-a14314d1-af16-4ee6-ae32-b8565fb1c1bc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074002922s
STEP: Saw pod success
Jan  6 14:07:27.170: INFO: Pod "pod-a14314d1-af16-4ee6-ae32-b8565fb1c1bc" satisfied condition "success or failure"
Jan  6 14:07:27.177: INFO: Trying to get logs from node iruya-node pod pod-a14314d1-af16-4ee6-ae32-b8565fb1c1bc container test-container: 
STEP: delete the pod
Jan  6 14:07:27.410: INFO: Waiting for pod pod-a14314d1-af16-4ee6-ae32-b8565fb1c1bc to disappear
Jan  6 14:07:27.421: INFO: Pod pod-a14314d1-af16-4ee6-ae32-b8565fb1c1bc no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:07:27.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6916" for this suite.
Jan  6 14:07:33.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:07:33.685: INFO: namespace emptydir-6916 deletion completed in 6.256622753s

• [SLOW TEST:14.774 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:07:33.686: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:81
Jan  6 14:07:33.814: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan  6 14:07:33.827: INFO: Waiting for terminating namespaces to be deleted...
Jan  6 14:07:33.832: INFO: 
Logging pods the kubelet thinks are on node iruya-node before test
Jan  6 14:07:33.853: INFO: kube-proxy-976zl from kube-system started at 2019-08-04 09:01:39 +0000 UTC (1 container status recorded)
Jan  6 14:07:33.853: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  6 14:07:33.853: INFO: weave-net-rlp57 from kube-system started at 2019-10-12 11:56:39 +0000 UTC (2 container statuses recorded)
Jan  6 14:07:33.853: INFO: 	Container weave ready: true, restart count 0
Jan  6 14:07:33.853: INFO: 	Container weave-npc ready: true, restart count 0
Jan  6 14:07:33.853: INFO: 
Logging pods the kubelet thinks are on node iruya-server-sfge57q7djm7 before test
Jan  6 14:07:33.886: INFO: coredns-5c98db65d4-bm4gs from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  6 14:07:33.886: INFO: 	Container coredns ready: true, restart count 0
Jan  6 14:07:33.886: INFO: etcd-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:38 +0000 UTC (1 container status recorded)
Jan  6 14:07:33.886: INFO: 	Container etcd ready: true, restart count 0
Jan  6 14:07:33.886: INFO: weave-net-bzl4d from kube-system started at 2019-08-04 08:52:37 +0000 UTC (2 container statuses recorded)
Jan  6 14:07:33.886: INFO: 	Container weave ready: true, restart count 0
Jan  6 14:07:33.886: INFO: 	Container weave-npc ready: true, restart count 0
Jan  6 14:07:33.886: INFO: kube-controller-manager-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:42 +0000 UTC (1 container status recorded)
Jan  6 14:07:33.886: INFO: 	Container kube-controller-manager ready: true, restart count 18
Jan  6 14:07:33.886: INFO: kube-proxy-58v95 from kube-system started at 2019-08-04 08:52:37 +0000 UTC (1 container status recorded)
Jan  6 14:07:33.886: INFO: 	Container kube-proxy ready: true, restart count 0
Jan  6 14:07:33.886: INFO: kube-apiserver-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:39 +0000 UTC (1 container status recorded)
Jan  6 14:07:33.886: INFO: 	Container kube-apiserver ready: true, restart count 0
Jan  6 14:07:33.886: INFO: kube-scheduler-iruya-server-sfge57q7djm7 from kube-system started at 2019-08-04 08:51:43 +0000 UTC (1 container status recorded)
Jan  6 14:07:33.886: INFO: 	Container kube-scheduler ready: true, restart count 12
Jan  6 14:07:33.886: INFO: coredns-5c98db65d4-xx8w8 from kube-system started at 2019-08-04 08:53:12 +0000 UTC (1 container status recorded)
Jan  6 14:07:33.886: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e7515cec4616cc], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:07:34.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6126" for this suite.
Jan  6 14:07:41.004: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:07:41.133: INFO: namespace sched-pred-6126 deletion completed in 6.164547138s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:72

• [SLOW TEST:7.448 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
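The FailedScheduling event recorded above comes from a pod whose nodeSelector matches no node label. A minimal sketch to reproduce it by hand, outside the e2e framework (pod name, selector key/value, and image are illustrative, not taken from this run):

# No node carries the label below, so the pod stays Pending with a
# FailedScheduling event like the one in the log.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: restricted-pod
spec:
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
  nodeSelector:
    label: nonempty          # assumed key/value; any unmatched pair works
EOF
kubectl describe pod restricted-pod    # Events: 0/N nodes are available: ... didn't match node selector.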
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:07:41.134: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  6 14:07:41.207: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:07:49.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9727" for this suite.
Jan  6 14:08:37.676: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:08:37.826: INFO: namespace pods-9727 deletion completed in 48.174149826s

• [SLOW TEST:56.693 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
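The test above drives the pods/exec subresource over a websocket rather than kubectl's default SPDY transport. A rough way to poke the same endpoint by hand via kubectl proxy (pod name, namespace, and command are illustrative):

# kubectl proxy handles authentication; a websocket client can then dial
# the exec subresource directly (one ?command= parameter per argv element).
kubectl proxy --port=8001 &
# Example endpoint a websocket client would connect to:
#   ws://127.0.0.1:8001/api/v1/namespaces/default/pods/pod-exec-websocket/exec?command=echo&command=remote+execution&stdout=true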
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:08:37.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76
Jan  6 14:08:37.951: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Registering the sample API server.
Jan  6 14:08:38.499: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan  6 14:08:40.814: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  6 14:08:42.832: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  6 14:08:44.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  6 14:08:46.828: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  6 14:08:48.823: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713916518, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-7c4bdb86cc\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  6 14:08:56.063: INFO: Waited 5.218174603s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:08:56.661: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-5737" for this suite.
Jan  6 14:09:02.741: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:09:02.856: INFO: namespace aggregator-5737 deletion completed in 6.160761116s

• [SLOW TEST:25.029 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
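"Registering the sample API server" in the STEP above boils down to creating an APIService object that tells the aggregator to proxy a group/version to an in-cluster Service. A hedged sketch (the group, Service name, and namespace are assumptions; the sample-apiserver Deployment and etcd backing it are omitted):

cat <<EOF | kubectl apply -f -
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.wardle.k8s.io      # assumed sample group/version
spec:
  group: wardle.k8s.io
  version: v1alpha1
  groupPriorityMinimum: 2000
  versionPriority: 200
  insecureSkipTLSVerify: true       # acceptable for a sketch; real setups pin caBundle
  service:
    name: sample-api                # assumed Service fronting the sample-apiserver pods
    namespace: kube-system
EOF
kubectl get apiservice v1alpha1.wardle.k8s.io   # Available goes True once the backend answers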
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:09:02.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan  6 14:09:03.058: INFO: Number of nodes with available pods: 0
Jan  6 14:09:03.058: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:04.074: INFO: Number of nodes with available pods: 0
Jan  6 14:09:04.074: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:05.073: INFO: Number of nodes with available pods: 0
Jan  6 14:09:05.073: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:06.072: INFO: Number of nodes with available pods: 0
Jan  6 14:09:06.072: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:07.094: INFO: Number of nodes with available pods: 0
Jan  6 14:09:07.094: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:09.425: INFO: Number of nodes with available pods: 0
Jan  6 14:09:09.425: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:10.069: INFO: Number of nodes with available pods: 0
Jan  6 14:09:10.069: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:11.152: INFO: Number of nodes with available pods: 0
Jan  6 14:09:11.152: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:12.075: INFO: Number of nodes with available pods: 0
Jan  6 14:09:12.075: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:13.070: INFO: Number of nodes with available pods: 1
Jan  6 14:09:13.070: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:14.073: INFO: Number of nodes with available pods: 2
Jan  6 14:09:14.073: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan  6 14:09:14.147: INFO: Number of nodes with available pods: 1
Jan  6 14:09:14.147: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:15.161: INFO: Number of nodes with available pods: 1
Jan  6 14:09:15.161: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:16.161: INFO: Number of nodes with available pods: 1
Jan  6 14:09:16.161: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:17.166: INFO: Number of nodes with available pods: 1
Jan  6 14:09:17.166: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:18.155: INFO: Number of nodes with available pods: 1
Jan  6 14:09:18.155: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:19.174: INFO: Number of nodes with available pods: 1
Jan  6 14:09:19.174: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:20.168: INFO: Number of nodes with available pods: 1
Jan  6 14:09:20.168: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:21.160: INFO: Number of nodes with available pods: 1
Jan  6 14:09:21.160: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:22.167: INFO: Number of nodes with available pods: 1
Jan  6 14:09:22.167: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:23.160: INFO: Number of nodes with available pods: 1
Jan  6 14:09:23.160: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:24.158: INFO: Number of nodes with available pods: 1
Jan  6 14:09:24.158: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:25.159: INFO: Number of nodes with available pods: 1
Jan  6 14:09:25.159: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:26.163: INFO: Number of nodes with available pods: 1
Jan  6 14:09:26.163: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:27.163: INFO: Number of nodes with available pods: 1
Jan  6 14:09:27.163: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:28.165: INFO: Number of nodes with available pods: 1
Jan  6 14:09:28.165: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:29.170: INFO: Number of nodes with available pods: 1
Jan  6 14:09:29.170: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:30.164: INFO: Number of nodes with available pods: 1
Jan  6 14:09:30.164: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:31.163: INFO: Number of nodes with available pods: 1
Jan  6 14:09:31.163: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:32.175: INFO: Number of nodes with available pods: 1
Jan  6 14:09:32.175: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:33.177: INFO: Number of nodes with available pods: 1
Jan  6 14:09:33.177: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:34.161: INFO: Number of nodes with available pods: 1
Jan  6 14:09:34.161: INFO: Node iruya-node is running more than one daemon pod
Jan  6 14:09:35.161: INFO: Number of nodes with available pods: 2
Jan  6 14:09:35.161: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7153, will wait for the garbage collector to delete the pods
Jan  6 14:09:35.241: INFO: Deleting DaemonSet.extensions daemon-set took: 21.505155ms
Jan  6 14:09:35.542: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.516476ms
Jan  6 14:09:47.953: INFO: Number of nodes with available pods: 0
Jan  6 14:09:47.953: INFO: Number of running nodes: 0, number of available pods: 0
Jan  6 14:09:47.959: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7153/daemonsets","resourceVersion":"19529878"},"items":null}

Jan  6 14:09:47.963: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7153/pods","resourceVersion":"19529878"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:09:47.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-7153" for this suite.
Jan  6 14:09:54.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:09:54.103: INFO: namespace daemonsets-7153 deletion completed in 6.12268038s

• [SLOW TEST:51.245 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
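The run-and-stop flow above is the basic DaemonSet contract: one pod per schedulable node, and deleted pods are revived by the controller. A minimal sketch of the same shape (labels are illustrative; the image matches the one this suite uses elsewhere):

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels: {app: daemon-set}
  template:
    metadata:
      labels: {app: daemon-set}
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine
EOF
# Delete one daemon pod; the controller recreates it on the same node:
kubectl get pods -l app=daemon-set -o name | head -1 | xargs kubectl delete
kubectl get pods -l app=daemon-set -o wide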
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:09:54.103: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  6 14:10:02.331: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:10:02.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6085" for this suite.
Jan  6 14:10:08.598: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:10:08.704: INFO: namespace container-runtime-6085 deletion completed in 6.175448755s

• [SLOW TEST:14.601 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
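The assertion above ("Expected: &{OK} to match ...") checks that a termination message written to the file path wins even under FallbackToLogsOnError; the fallback to container logs applies only when the file is empty and the container failed. A minimal sketch (pod name and message are illustrative):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: termination-message-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
kubectl get pod termination-message-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # prints: OK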
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:10:08.704: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  6 14:10:08.839: INFO: Waiting up to 5m0s for pod "pod-d3aeb667-3349-4781-a574-8b9aa4a1f9a0" in namespace "emptydir-153" to be "success or failure"
Jan  6 14:10:08.848: INFO: Pod "pod-d3aeb667-3349-4781-a574-8b9aa4a1f9a0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.352188ms
Jan  6 14:10:10.856: INFO: Pod "pod-d3aeb667-3349-4781-a574-8b9aa4a1f9a0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016095334s
Jan  6 14:10:12.872: INFO: Pod "pod-d3aeb667-3349-4781-a574-8b9aa4a1f9a0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03302038s
Jan  6 14:10:14.888: INFO: Pod "pod-d3aeb667-3349-4781-a574-8b9aa4a1f9a0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048314249s
Jan  6 14:10:16.902: INFO: Pod "pod-d3aeb667-3349-4781-a574-8b9aa4a1f9a0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062780118s
STEP: Saw pod success
Jan  6 14:10:16.902: INFO: Pod "pod-d3aeb667-3349-4781-a574-8b9aa4a1f9a0" satisfied condition "success or failure"
Jan  6 14:10:16.909: INFO: Trying to get logs from node iruya-node pod pod-d3aeb667-3349-4781-a574-8b9aa4a1f9a0 container test-container: 
STEP: delete the pod
Jan  6 14:10:17.067: INFO: Waiting for pod pod-d3aeb667-3349-4781-a574-8b9aa4a1f9a0 to disappear
Jan  6 14:10:17.072: INFO: Pod pod-d3aeb667-3349-4781-a574-8b9aa4a1f9a0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:10:17.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-153" for this suite.
Jan  6 14:10:23.103: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:10:23.227: INFO: namespace emptydir-153 deletion completed in 6.149307086s

• [SLOW TEST:14.523 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
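The "(non-root,0666,default)" case above means: a non-root user writes a 0666-mode file into an emptyDir on the default (node-disk) medium, and the pod verifies the result. A sketch of the same shape (the UID and paths are assumptions):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0666-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001                  # assumed non-root UID
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /mnt/f && chmod 0666 /mnt/f && ls -l /mnt/f"]
    volumeMounts:
    - {name: scratch, mountPath: /mnt}
  volumes:
  - name: scratch
    emptyDir: {}                     # default medium; medium: Memory would use tmpfs instead
EOF
kubectl logs emptydir-0666-demo      # -rw-rw-rw- ... /mnt/f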
SS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:10:23.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-32bc3fe7-e8a5-4a87-afa4-8f66844260ed
STEP: Creating a pod to test consume configMaps
Jan  6 14:10:23.316: INFO: Waiting up to 5m0s for pod "pod-configmaps-4c94408d-2b28-453a-990f-a74d52222824" in namespace "configmap-9155" to be "success or failure"
Jan  6 14:10:23.323: INFO: Pod "pod-configmaps-4c94408d-2b28-453a-990f-a74d52222824": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130492ms
Jan  6 14:10:25.331: INFO: Pod "pod-configmaps-4c94408d-2b28-453a-990f-a74d52222824": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014096195s
Jan  6 14:10:27.354: INFO: Pod "pod-configmaps-4c94408d-2b28-453a-990f-a74d52222824": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036983981s
Jan  6 14:10:29.365: INFO: Pod "pod-configmaps-4c94408d-2b28-453a-990f-a74d52222824": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048803994s
Jan  6 14:10:31.380: INFO: Pod "pod-configmaps-4c94408d-2b28-453a-990f-a74d52222824": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.06307412s
STEP: Saw pod success
Jan  6 14:10:31.380: INFO: Pod "pod-configmaps-4c94408d-2b28-453a-990f-a74d52222824" satisfied condition "success or failure"
Jan  6 14:10:31.386: INFO: Trying to get logs from node iruya-node pod pod-configmaps-4c94408d-2b28-453a-990f-a74d52222824 container configmap-volume-test: 
STEP: delete the pod
Jan  6 14:10:31.490: INFO: Waiting for pod pod-configmaps-4c94408d-2b28-453a-990f-a74d52222824 to disappear
Jan  6 14:10:31.581: INFO: Pod pod-configmaps-4c94408d-2b28-453a-990f-a74d52222824 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:10:31.581: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9155" for this suite.
Jan  6 14:10:37.628: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:10:37.753: INFO: namespace configmap-9155 deletion completed in 6.163600577s

• [SLOW TEST:14.526 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
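"With mappings" above refers to the items list of a configMap volume, which remaps a key to a custom file path instead of exposing it under the key name. A minimal sketch (names and paths are illustrative):

kubectl create configmap cm-demo --from-literal=data-1=value-1
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/configmap-volume/path/to/data-1"]
    volumeMounts:
    - {name: cm, mountPath: /etc/configmap-volume}
  volumes:
  - name: cm
    configMap:
      name: cm-demo
      items:                                   # the mapping: key -> custom relative path
      - {key: data-1, path: path/to/data-1}
EOF
kubectl logs pod-configmaps-demo               # prints: value-1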
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:10:37.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the initial replication controller
Jan  6 14:10:37.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1526'
Jan  6 14:10:38.406: INFO: stderr: ""
Jan  6 14:10:38.406: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  6 14:10:38.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1526'
Jan  6 14:10:38.767: INFO: stderr: ""
Jan  6 14:10:38.767: INFO: stdout: "update-demo-nautilus-6m5zv update-demo-nautilus-jv7km "
Jan  6 14:10:38.768: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6m5zv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1526'
Jan  6 14:10:39.061: INFO: stderr: ""
Jan  6 14:10:39.061: INFO: stdout: ""
Jan  6 14:10:39.061: INFO: update-demo-nautilus-6m5zv is created but not running
Jan  6 14:10:44.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1526'
Jan  6 14:10:45.460: INFO: stderr: ""
Jan  6 14:10:45.460: INFO: stdout: "update-demo-nautilus-6m5zv update-demo-nautilus-jv7km "
Jan  6 14:10:45.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6m5zv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1526'
Jan  6 14:10:45.750: INFO: stderr: ""
Jan  6 14:10:45.750: INFO: stdout: ""
Jan  6 14:10:45.750: INFO: update-demo-nautilus-6m5zv is created but not running
Jan  6 14:10:50.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1526'
Jan  6 14:10:50.999: INFO: stderr: ""
Jan  6 14:10:50.999: INFO: stdout: "update-demo-nautilus-6m5zv update-demo-nautilus-jv7km "
Jan  6 14:10:50.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6m5zv -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1526'
Jan  6 14:10:51.079: INFO: stderr: ""
Jan  6 14:10:51.079: INFO: stdout: "true"
Jan  6 14:10:51.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-6m5zv -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1526'
Jan  6 14:10:51.184: INFO: stderr: ""
Jan  6 14:10:51.184: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  6 14:10:51.184: INFO: validating pod update-demo-nautilus-6m5zv
Jan  6 14:10:51.190: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  6 14:10:51.191: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  6 14:10:51.191: INFO: update-demo-nautilus-6m5zv is verified up and running
Jan  6 14:10:51.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jv7km -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1526'
Jan  6 14:10:51.278: INFO: stderr: ""
Jan  6 14:10:51.278: INFO: stdout: "true"
Jan  6 14:10:51.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jv7km -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1526'
Jan  6 14:10:51.450: INFO: stderr: ""
Jan  6 14:10:51.450: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  6 14:10:51.450: INFO: validating pod update-demo-nautilus-jv7km
Jan  6 14:10:51.473: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  6 14:10:51.473: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  6 14:10:51.473: INFO: update-demo-nautilus-jv7km is verified up and running
STEP: rolling-update to new replication controller
Jan  6 14:10:51.474: INFO: scanned /root for discovery docs: 
Jan  6 14:10:51.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-1526'
Jan  6 14:11:21.092: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan  6 14:11:21.092: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  6 14:11:21.093: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1526'
Jan  6 14:11:21.261: INFO: stderr: ""
Jan  6 14:11:21.261: INFO: stdout: "update-demo-kitten-l89df update-demo-kitten-ll2r8 update-demo-nautilus-jv7km "
STEP: Replicas for name=update-demo: expected=2 actual=3
Jan  6 14:11:26.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1526'
Jan  6 14:11:26.508: INFO: stderr: ""
Jan  6 14:11:26.508: INFO: stdout: "update-demo-kitten-l89df update-demo-kitten-ll2r8 update-demo-nautilus-jv7km "
STEP: Replicas for name=update-demo: expected=2 actual=3
Jan  6 14:11:31.509: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1526'
Jan  6 14:11:31.687: INFO: stderr: ""
Jan  6 14:11:31.687: INFO: stdout: "update-demo-kitten-l89df update-demo-kitten-ll2r8 "
Jan  6 14:11:31.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-l89df -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1526'
Jan  6 14:11:31.870: INFO: stderr: ""
Jan  6 14:11:31.870: INFO: stdout: "true"
Jan  6 14:11:31.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-l89df -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1526'
Jan  6 14:11:31.997: INFO: stderr: ""
Jan  6 14:11:31.997: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  6 14:11:31.997: INFO: validating pod update-demo-kitten-l89df
Jan  6 14:11:32.024: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  6 14:11:32.024: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan  6 14:11:32.024: INFO: update-demo-kitten-l89df is verified up and running
Jan  6 14:11:32.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ll2r8 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1526'
Jan  6 14:11:32.222: INFO: stderr: ""
Jan  6 14:11:32.222: INFO: stdout: "true"
Jan  6 14:11:32.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-ll2r8 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1526'
Jan  6 14:11:32.380: INFO: stderr: ""
Jan  6 14:11:32.380: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Jan  6 14:11:32.380: INFO: validating pod update-demo-kitten-ll2r8
Jan  6 14:11:32.390: INFO: got data: {
  "image": "kitten.jpg"
}

Jan  6 14:11:32.390: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Jan  6 14:11:32.390: INFO: update-demo-kitten-ll2r8 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:11:32.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1526" for this suite.
Jan  6 14:11:58.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:11:58.553: INFO: namespace kubectl-1526 deletion completed in 26.155998201s

• [SLOW TEST:80.800 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
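The stderr above notes that rolling-update was already deprecated in this release. An equivalent by-hand form uses --image in place of the RC spec the test piped via -f -, and the rollout-based replacement applies when the workload is a Deployment (the deployment name below is illustrative, not from this run):

# Deprecated RC form, mirroring the run above:
kubectl rolling-update update-demo-nautilus --update-period=1s \
  --image=gcr.io/kubernetes-e2e-test-images/kitten:1.0
# Modern equivalent for a Deployment:
kubectl set image deployment/update-demo update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo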
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:11:58.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: executing a command with run --rm and attach with stdin
Jan  6 14:11:58.685: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6769 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Jan  6 14:12:07.170: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0106 14:12:05.930195    3244 log.go:172] (0xc0009e8210) (0xc0009be280) Create stream\nI0106 14:12:05.932293    3244 log.go:172] (0xc0009e8210) (0xc0009be280) Stream added, broadcasting: 1\nI0106 14:12:05.957397    3244 log.go:172] (0xc0009e8210) Reply frame received for 1\nI0106 14:12:05.957549    3244 log.go:172] (0xc0009e8210) (0xc0006d40a0) Create stream\nI0106 14:12:05.957566    3244 log.go:172] (0xc0009e8210) (0xc0006d40a0) Stream added, broadcasting: 3\nI0106 14:12:05.961716    3244 log.go:172] (0xc0009e8210) Reply frame received for 3\nI0106 14:12:05.961951    3244 log.go:172] (0xc0009e8210) (0xc0009be000) Create stream\nI0106 14:12:05.961970    3244 log.go:172] (0xc0009e8210) (0xc0009be000) Stream added, broadcasting: 5\nI0106 14:12:05.964914    3244 log.go:172] (0xc0009e8210) Reply frame received for 5\nI0106 14:12:05.964981    3244 log.go:172] (0xc0009e8210) (0xc0001c4000) Create stream\nI0106 14:12:05.964995    3244 log.go:172] (0xc0009e8210) (0xc0001c4000) Stream added, broadcasting: 7\nI0106 14:12:05.966714    3244 log.go:172] (0xc0009e8210) Reply frame received for 7\nI0106 14:12:05.967440    3244 log.go:172] (0xc0006d40a0) (3) Writing data frame\nI0106 14:12:05.967956    3244 log.go:172] (0xc0006d40a0) (3) Writing data frame\nI0106 14:12:05.979386    3244 log.go:172] (0xc0009e8210) Data frame received for 5\nI0106 14:12:05.979411    3244 log.go:172] (0xc0009be000) (5) Data frame handling\nI0106 14:12:05.979423    3244 log.go:172] (0xc0009be000) (5) Data frame sent\nI0106 14:12:05.985607    3244 log.go:172] (0xc0009e8210) Data frame received for 5\nI0106 14:12:05.985631    3244 log.go:172] (0xc0009be000) (5) Data frame handling\nI0106 14:12:05.985647    3244 log.go:172] (0xc0009be000) (5) Data frame sent\nI0106 14:12:07.121501    3244 log.go:172] (0xc0009e8210) (0xc0006d40a0) Stream removed, broadcasting: 3\nI0106 14:12:07.121910    3244 log.go:172] (0xc0009e8210) (0xc0009be000) Stream removed, broadcasting: 5\nI0106 14:12:07.122265    3244 log.go:172] (0xc0009e8210) Data frame received for 1\nI0106 14:12:07.122301    3244 log.go:172] (0xc0009be280) (1) Data frame handling\nI0106 14:12:07.122337    3244 log.go:172] (0xc0009be280) (1) Data frame sent\nI0106 14:12:07.122467    3244 log.go:172] (0xc0009e8210) (0xc0001c4000) Stream removed, broadcasting: 7\nI0106 14:12:07.122877    3244 log.go:172] (0xc0009e8210) (0xc0009be280) Stream removed, broadcasting: 1\nI0106 14:12:07.123173    3244 log.go:172] (0xc0009e8210) Go away received\nI0106 14:12:07.123659    3244 log.go:172] (0xc0009e8210) (0xc0009be280) Stream removed, broadcasting: 1\nI0106 14:12:07.123688    3244 log.go:172] (0xc0009e8210) (0xc0006d40a0) Stream removed, broadcasting: 3\nI0106 14:12:07.123708    3244 log.go:172] (0xc0009e8210) (0xc0009be000) Stream removed, broadcasting: 5\nI0106 14:12:07.123726    3244 log.go:172] (0xc0009e8210) (0xc0001c4000) Stream removed, broadcasting: 7\n"
Jan  6 14:12:07.170: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:12:09.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6769" for this suite.
Jan  6 14:12:15.263: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:12:15.450: INFO: namespace kubectl-6769 deletion completed in 6.260960865s

• [SLOW TEST:16.896 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
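The full command line the test ran is quoted above; reproduced by hand it looks like this, with the same flags (including the job/v1 generator this release still shipped) and piped input standing in for the attached stdin:

echo abcd1234 | kubectl run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm=true --generator=job/v1 --restart=OnFailure \
  --attach=true --stdin -- sh -c "cat && echo 'stdin closed'"
# --rm deletes the job once the attached session ends, matching the
# "verifying the job ... was deleted" step:
kubectl get job e2e-test-rm-busybox-job        # expected: NotFound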
SSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:12:15.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name s-test-opt-del-87c30d79-d95d-4ecc-9032-dc89753feb36
STEP: Creating secret with name s-test-opt-upd-96fe5cb3-12e5-48ac-8107-7c2a3e2bb532
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-87c30d79-d95d-4ecc-9032-dc89753feb36
STEP: Updating secret s-test-opt-upd-96fe5cb3-12e5-48ac-8107-7c2a3e2bb532
STEP: Creating secret with name s-test-opt-create-e131e384-32c9-45e2-8721-250cfffa5aec
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:12:29.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-631" for this suite.
Jan  6 14:12:53.942: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:12:54.024: INFO: namespace secrets-631 deletion completed in 24.10535966s

• [SLOW TEST:38.574 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
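The optional-secret dance above relies on two properties of secret volumes: optional: true lets a pod start before the secret exists, and the kubelet periodically syncs updated secret content into running pods. A sketch of the same wiring (names are illustrative, not the generated ones above):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  containers:
  - name: watcher
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/secret-volume/data-1 2>/dev/null; echo; sleep 5; done"]
    volumeMounts:
    - {name: sec, mountPath: /etc/secret-volume}
  volumes:
  - name: sec
    secret:
      secretName: s-test-opt-create            # may not exist yet
      optional: true                           # pod starts anyway; volume is empty until then
EOF
kubectl create secret generic s-test-opt-create --from-literal=data-1=value-1
kubectl logs -f pod-secrets-demo               # value-1 appears once the kubelet syncs (~1 min)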
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:12:54.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:13:54.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8886" for this suite.
Jan  6 14:14:16.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:14:16.400: INFO: namespace container-probe-8886 deletion completed in 22.172971003s

• [SLOW TEST:82.375 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
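The minute this test spends above is it watching that a failing readiness probe keeps the pod at READY 0/1 without ever restarting the container; restarts are the liveness probe's job, not readiness's. A minimal sketch:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: readiness-fail-demo
spec:
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["sleep", "3600"]
    readinessProbe:
      exec: {command: ["/bin/false"]}          # always fails
      periodSeconds: 5
EOF
kubectl get pod readiness-fail-demo            # READY 0/1, RESTARTS 0, indefinitely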
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:14:16.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  6 14:14:16.542: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:14:29.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-631" for this suite.
Jan  6 14:14:36.023: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:14:36.145: INFO: namespace init-container-631 deletion completed in 6.156027329s

• [SLOW TEST:19.745 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
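On a restartPolicy: Never pod, a failing init container is terminal: the pod goes to Failed and the app containers never start, which is exactly what the test asserts. A minimal sketch:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Never
  initContainers:
  - name: init-fails
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]
  containers:
  - name: app
    image: docker.io/library/busybox:1.29
    command: ["echo", "never runs"]
EOF
kubectl get pod init-fail-demo -o jsonpath='{.status.phase}'   # Failed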
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:14:36.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  6 14:14:44.877: INFO: Successfully updated pod "labelsupdate9cbf1865-dc5d-4a1b-b527-aace2e5bdc6a"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:14:46.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2765" for this suite.
Jan  6 14:15:09.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:15:09.142: INFO: namespace projected-2765 deletion completed in 22.172565725s

• [SLOW TEST:32.997 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
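The "Successfully updated pod" line above is the test relabeling the pod and waiting for the projected downward API file to catch up; the kubelet rewrites the file in place when pod metadata changes. A sketch of the same wiring (names are illustrative):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: labelsupdate-demo
  labels: {key: value1}
spec:
  containers:
  - name: client
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; echo; sleep 5; done"]
    volumeMounts:
    - {name: podinfo, mountPath: /etc/podinfo}
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: labels
            fieldRef: {fieldPath: metadata.labels}
EOF
kubectl label pod labelsupdate-demo key=value2 --overwrite
kubectl logs -f labelsupdate-demo              # key="value2" shows up after the kubelet sync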
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:15:09.143: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test use defaults
Jan  6 14:15:09.250: INFO: Waiting up to 5m0s for pod "client-containers-70488896-f52f-422d-bdb9-709ec8b3f0e4" in namespace "containers-743" to be "success or failure"
Jan  6 14:15:09.275: INFO: Pod "client-containers-70488896-f52f-422d-bdb9-709ec8b3f0e4": Phase="Pending", Reason="", readiness=false. Elapsed: 24.292975ms
Jan  6 14:15:11.288: INFO: Pod "client-containers-70488896-f52f-422d-bdb9-709ec8b3f0e4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037406195s
Jan  6 14:15:13.298: INFO: Pod "client-containers-70488896-f52f-422d-bdb9-709ec8b3f0e4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047599052s
Jan  6 14:15:15.317: INFO: Pod "client-containers-70488896-f52f-422d-bdb9-709ec8b3f0e4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066362976s
Jan  6 14:15:17.323: INFO: Pod "client-containers-70488896-f52f-422d-bdb9-709ec8b3f0e4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072745906s
STEP: Saw pod success
Jan  6 14:15:17.323: INFO: Pod "client-containers-70488896-f52f-422d-bdb9-709ec8b3f0e4" satisfied condition "success or failure"
Jan  6 14:15:17.327: INFO: Trying to get logs from node iruya-node pod client-containers-70488896-f52f-422d-bdb9-709ec8b3f0e4 container test-container: 
STEP: delete the pod
Jan  6 14:15:17.404: INFO: Waiting for pod client-containers-70488896-f52f-422d-bdb9-709ec8b3f0e4 to disappear
Jan  6 14:15:17.412: INFO: Pod client-containers-70488896-f52f-422d-bdb9-709ec8b3f0e4 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:15:17.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-743" for this suite.
Jan  6 14:15:23.526: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:15:23.731: INFO: namespace containers-743 deletion completed in 6.30580776s

• [SLOW TEST:14.588 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
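The "use defaults" pod above sets neither command nor args, so the image's own ENTRYPOINT and CMD run. A minimal sketch using busybox, whose default CMD ("sh") simply exits when no terminal is attached:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29      # no command/args: image defaults apply
EOF
kubectl get pod client-containers-demo         # reaches Completed with nothing overridden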
SSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:15:23.731: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: getting the auto-created API token
Jan  6 14:15:24.402: INFO: created pod pod-service-account-defaultsa
Jan  6 14:15:24.402: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan  6 14:15:24.467: INFO: created pod pod-service-account-mountsa
Jan  6 14:15:24.467: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan  6 14:15:24.494: INFO: created pod pod-service-account-nomountsa
Jan  6 14:15:24.494: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan  6 14:15:24.505: INFO: created pod pod-service-account-defaultsa-mountspec
Jan  6 14:15:24.505: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan  6 14:15:24.529: INFO: created pod pod-service-account-mountsa-mountspec
Jan  6 14:15:24.529: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan  6 14:15:24.553: INFO: created pod pod-service-account-nomountsa-mountspec
Jan  6 14:15:24.553: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan  6 14:15:24.647: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan  6 14:15:24.647: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan  6 14:15:24.711: INFO: created pod pod-service-account-mountsa-nomountspec
Jan  6 14:15:24.711: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan  6 14:15:24.828: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan  6 14:15:24.828: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:15:24.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-7251" for this suite.
Jan  6 14:15:50.972: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:15:51.050: INFO: namespace svcaccounts-7251 deletion completed in 26.20720749s

• [SLOW TEST:27.318 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
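The mount:true/false matrix above follows one rule: a pod-level automountServiceAccountToken setting overrides the ServiceAccount-level one, and mounting is the default when both are unset. A hedged sketch of the overriding case (object names mirror the log; the busybox image is an assumption):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func main() {
	// ServiceAccount that opts out of token automount by default.
	sa := corev1.ServiceAccount{
		ObjectMeta:                   metav1.ObjectMeta{Name: "nomount-sa"},
		AutomountServiceAccountToken: boolPtr(false),
	}
	// The pod-level field wins, so the token volume IS mounted despite the
	// ServiceAccount opting out (the "nomountsa-mountspec" case above).
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountsa-mountspec"},
		Spec: corev1.PodSpec{
			ServiceAccountName:           sa.Name,
			AutomountServiceAccountToken: boolPtr(true),
			Containers: []corev1.Container{{Name: "token-test", Image: "busybox"}},
		},
	}
	for _, obj := range []interface{}{sa, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}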
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:15:51.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: validating cluster-info
Jan  6 14:15:51.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan  6 14:15:53.110: INFO: stderr: ""
Jan  6 14:15:53.110: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.57:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:15:53.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6328" for this suite.
Jan  6 14:15:59.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:15:59.267: INFO: namespace kubectl-6328 deletion completed in 6.148248706s

• [SLOW TEST:8.217 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
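The validation step just shells out to kubectl and checks the output. A rough Go equivalent (the kubeconfig path and the "Kubernetes master" phrasing match this v1.15 log; newer kubectl prints "Kubernetes control plane"):

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same invocation the test runs; adjust the kubeconfig path as needed.
	out, err := exec.Command("kubectl",
		"--kubeconfig", "/root/.kube/config", "cluster-info").CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl cluster-info failed: %v\n%s", err, out)
	}
	// The output is ANSI-colored, hence the \x1b escape codes captured in
	// the stdout line above; the assertion only needs the plain substring.
	if !strings.Contains(string(out), "Kubernetes master") {
		log.Fatal("master service not reported in cluster-info")
	}
	fmt.Print(string(out))
}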
SSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:15:59.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:15:59.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7511" for this suite.
Jan  6 14:16:05.554: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:16:05.667: INFO: namespace kubelet-test-7511 deletion completed in 6.202618691s

• [SLOW TEST:6.401 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
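The pod under test runs a command that always exits non-zero, so the kubelet keeps it in a crash loop; the assertion is only that such a pod still deletes cleanly. A minimal sketch of that kind of pod (image and command are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// /bin/false exits 1 immediately; with the default RestartPolicy Always
	// the container cycles through CrashLoopBackOff indefinitely.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "bin-false-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "bin-false",
				Image:   "busybox",
				Command: []string{"/bin/false"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}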
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:16:05.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  6 14:16:05.819: INFO: (0) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 17.374691ms)
Jan  6 14:16:05.828: INFO: (1) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.330102ms)
Jan  6 14:16:05.834: INFO: (2) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.838124ms)
Jan  6 14:16:05.842: INFO: (3) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.615571ms)
Jan  6 14:16:05.852: INFO: (4) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.942199ms)
Jan  6 14:16:05.880: INFO: (5) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 27.989211ms)
Jan  6 14:16:05.890: INFO: (6) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.521736ms)
Jan  6 14:16:05.904: INFO: (7) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 13.774594ms)
Jan  6 14:16:05.912: INFO: (8) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.387446ms)
Jan  6 14:16:05.918: INFO: (9) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.094381ms)
Jan  6 14:16:05.927: INFO: (10) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.907748ms)
Jan  6 14:16:05.933: INFO: (11) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.695095ms)
Jan  6 14:16:05.941: INFO: (12) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.642321ms)
Jan  6 14:16:05.948: INFO: (13) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.410375ms)
Jan  6 14:16:05.953: INFO: (14) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.289792ms)
Jan  6 14:16:05.959: INFO: (15) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.908448ms)
Jan  6 14:16:06.022: INFO: (16) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 63.250159ms)
Jan  6 14:16:06.029: INFO: (17) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.246533ms)
Jan  6 14:16:06.035: INFO: (18) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.701035ms)
Jan  6 14:16:06.042: INFO: (19) /api/v1/nodes/iruya-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.308586ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:16:06.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-8981" for this suite.
Jan  6 14:16:12.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:16:12.334: INFO: namespace proxy-8981 deletion completed in 6.285998289s

• [SLOW TEST:6.666 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
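Each numbered line above is one GET against the node proxy subresource: the API server forwards /api/v1/nodes/<name>:<port>/proxy/logs/ to the kubelet's /logs endpoint, with ":10250" pinning the explicit kubelet port. A sketch of the same request with client-go (v1.15-era signatures, kubeconfig path as in this run):

package main

import (
	"fmt"
	"log"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	body, err := cs.CoreV1().RESTClient().Get().
		Resource("nodes").
		Name("iruya-node:10250"). // node name with explicit kubelet port
		SubResource("proxy").
		Suffix("logs/").
		DoRaw() // client-go for k8s 1.15; newer releases take a context here
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body)) // the directory listing truncated in the log
}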
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:16:12.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:88
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating service endpoint-test2 in namespace services-5936
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5936 to expose endpoints map[]
Jan  6 14:16:12.564: INFO: Get endpoints failed (11.574124ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Jan  6 14:16:13.574: INFO: successfully validated that service endpoint-test2 in namespace services-5936 exposes endpoints map[] (1.021209383s elapsed)
STEP: Creating pod pod1 in namespace services-5936
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5936 to expose endpoints map[pod1:[80]]
Jan  6 14:16:17.756: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.162198573s elapsed, will retry)
Jan  6 14:16:20.795: INFO: successfully validated that service endpoint-test2 in namespace services-5936 exposes endpoints map[pod1:[80]] (7.200761797s elapsed)
STEP: Creating pod pod2 in namespace services-5936
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5936 to expose endpoints map[pod1:[80] pod2:[80]]
Jan  6 14:16:26.006: INFO: Unexpected endpoints: found map[40f87a57-42ad-4859-bba6-6e92663039d1:[80]], expected map[pod1:[80] pod2:[80]] (5.202875515s elapsed, will retry)
Jan  6 14:16:29.055: INFO: successfully validated that service endpoint-test2 in namespace services-5936 exposes endpoints map[pod1:[80] pod2:[80]] (8.251848852s elapsed)
STEP: Deleting pod pod1 in namespace services-5936
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5936 to expose endpoints map[pod2:[80]]
Jan  6 14:16:30.228: INFO: successfully validated that service endpoint-test2 in namespace services-5936 exposes endpoints map[pod2:[80]] (1.165420882s elapsed)
STEP: Deleting pod pod2 in namespace services-5936
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5936 to expose endpoints map[]
Jan  6 14:16:31.264: INFO: successfully validated that service endpoint-test2 in namespace services-5936 exposes endpoints map[] (1.025446611s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:16:31.986: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5936" for this suite.
Jan  6 14:16:38.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:16:38.167: INFO: namespace services-5936 deletion completed in 6.174734717s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:92

• [SLOW TEST:25.833 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
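The endpoints map in the log tracks pods whose labels match the service selector: pod1 appears as map[pod1:[80]], pod2 joins it, and deletions shrink the map back to map[]. A sketch of the service-plus-pod pair that produces the first state (labels and the pause image are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	labels := map[string]string{"name": "endpoint-test2"}
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "endpoint-test2"},
		Spec: corev1.ServiceSpec{
			Selector: labels, // any ready pod with these labels becomes an endpoint
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
			}},
		},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod1", Labels: labels},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Ports: []corev1.ContainerPort{{ContainerPort: 80}},
			}},
		},
	}
	for _, obj := range []interface{}{svc, pod} {
		out, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(out))
	}
}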
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:16:38.168: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-322e35c9-2adb-47c9-afbf-d1008dd22303 in namespace container-probe-7973
Jan  6 14:16:46.264: INFO: Started pod liveness-322e35c9-2adb-47c9-afbf-d1008dd22303 in namespace container-probe-7973
STEP: checking the pod's current state and verifying that restartCount is present
Jan  6 14:16:46.270: INFO: Initial restart count of pod liveness-322e35c9-2adb-47c9-afbf-d1008dd22303 is 0
Jan  6 14:17:07.384: INFO: Restart count of pod container-probe-7973/liveness-322e35c9-2adb-47c9-afbf-d1008dd22303 is now 1 (21.113449477s elapsed)
Jan  6 14:17:27.507: INFO: Restart count of pod container-probe-7973/liveness-322e35c9-2adb-47c9-afbf-d1008dd22303 is now 2 (41.236801493s elapsed)
Jan  6 14:17:47.617: INFO: Restart count of pod container-probe-7973/liveness-322e35c9-2adb-47c9-afbf-d1008dd22303 is now 3 (1m1.346429937s elapsed)
Jan  6 14:18:05.722: INFO: Restart count of pod container-probe-7973/liveness-322e35c9-2adb-47c9-afbf-d1008dd22303 is now 4 (1m19.452210256s elapsed)
Jan  6 14:19:08.150: INFO: Restart count of pod container-probe-7973/liveness-322e35c9-2adb-47c9-afbf-d1008dd22303 is now 5 (2m21.87972079s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:19:08.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7973" for this suite.
Jan  6 14:19:14.229: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:19:14.339: INFO: namespace container-probe-7973 deletion completed in 6.132504836s

• [SLOW TEST:156.172 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
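The restart count climbs monotonically because the liveness probe starts failing and the kubelet restarts the container on each failure. A sketch in the spirit of the classic liveness-exec pod (the exact command and probe settings the test uses may differ):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "busybox",
				// Healthy for 10s, then the probed file disappears and every
				// subsequent probe fails, so the kubelet restarts the container
				// over and over.
				Command: []string{"/bin/sh", "-c",
					"touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					// Field is named Handler in the v1.15-era API used here;
					// it was renamed ProbeHandler in newer k8s.io/api releases.
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
					FailureThreshold:    1,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}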
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:19:14.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  6 14:19:14.395: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan  6 14:19:14.454: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan  6 14:19:19.467: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  6 14:19:23.526: INFO: Creating deployment "test-rolling-update-deployment"
Jan  6 14:19:23.536: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan  6 14:19:23.552: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan  6 14:19:25.565: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected
Jan  6 14:19:25.569: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713917163, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713917163, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713917163, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713917163, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  6 14:19:27.577: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713917163, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713917163, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713917163, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713917163, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  6 14:19:29.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713917163, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713917163, loc:(*time.Location)(0x7ea48a0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63713917163, loc:(*time.Location)(0x7ea48a0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63713917163, loc:(*time.Location)(0x7ea48a0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-79f6b9d75c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan  6 14:19:31.596: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  6 14:19:31.620: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment,GenerateName:,Namespace:deployment-3084,SelfLink:/apis/apps/v1/namespaces/deployment-3084/deployments/test-rolling-update-deployment,UID:54168eec-0cbb-4478-981a-35884fe2da6b,ResourceVersion:19531334,Generation:1,CreationTimestamp:2020-01-06 14:19:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[{Available True 2020-01-06 14:19:23 +0000 UTC 2020-01-06 14:19:23 +0000 UTC MinimumReplicasAvailable Deployment has minimum availability.} {Progressing True 2020-01-06 14:19:31 +0000 UTC 2020-01-06 14:19:23 +0000 UTC NewReplicaSetAvailable ReplicaSet "test-rolling-update-deployment-79f6b9d75c" has successfully progressed.}],ReadyReplicas:1,CollisionCount:nil,},}

Jan  6 14:19:31.626: INFO: New ReplicaSet "test-rolling-update-deployment-79f6b9d75c" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c,GenerateName:,Namespace:deployment-3084,SelfLink:/apis/apps/v1/namespaces/deployment-3084/replicasets/test-rolling-update-deployment-79f6b9d75c,UID:263be0c8-094d-49ea-8b71-79cabadc4ee1,ResourceVersion:19531323,Generation:1,CreationTimestamp:2020-01-06 14:19:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305833,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 54168eec-0cbb-4478-981a-35884fe2da6b 0xc00172cfc7 0xc00172cfc8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  6 14:19:31.626: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan  6 14:19:31.626: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-controller,GenerateName:,Namespace:deployment-3084,SelfLink:/apis/apps/v1/namespaces/deployment-3084/replicasets/test-rolling-update-controller,UID:df03b07b-1a40-4f79-bc24-8c6531e52140,ResourceVersion:19531333,Generation:2,CreationTimestamp:2020-01-06 14:19:14 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{deployment.kubernetes.io/desired-replicas: 1,deployment.kubernetes.io/max-replicas: 2,deployment.kubernetes.io/revision: 3546343826724305832,},OwnerReferences:[{apps/v1 Deployment test-rolling-update-deployment 54168eec-0cbb-4478-981a-35884fe2da6b 0xc00172cee7 0xc00172cee8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*0,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[],},}
Jan  6 14:19:31.630: INFO: Pod "test-rolling-update-deployment-79f6b9d75c-7wgdb" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-rolling-update-deployment-79f6b9d75c-7wgdb,GenerateName:test-rolling-update-deployment-79f6b9d75c-,Namespace:deployment-3084,SelfLink:/api/v1/namespaces/deployment-3084/pods/test-rolling-update-deployment-79f6b9d75c-7wgdb,UID:513e149b-d57f-46be-86be-4248235efa06,ResourceVersion:19531322,Generation:0,CreationTimestamp:2020-01-06 14:19:23 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: sample-pod,pod-template-hash: 79f6b9d75c,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-rolling-update-deployment-79f6b9d75c 263be0c8-094d-49ea-8b71-79cabadc4ee1 0xc00172dad7 0xc00172dad8}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-48x8t {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-48x8t,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [{default-token-48x8t true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc00172dbd0} {node.kubernetes.io/unreachable Exists  NoExecute 0xc00172dbf0}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 14:19:23 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 14:19:30 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 14:19:30 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 14:19:23 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.2,StartTime:2020-01-06 14:19:23 +0000 UTC,ContainerStatuses:[{redis {nil ContainerStateRunning{StartedAt:2020-01-06 14:19:30 +0000 UTC,} nil} {nil nil nil} true 0 gcr.io/kubernetes-e2e-test-images/redis:1.0 docker-pullable://gcr.io/kubernetes-e2e-test-images/redis@sha256:af4748d1655c08dc54d4be5182135395db9ce87aba2d4699b26b14ae197c5830 docker://e93d5e363759fa9edf791607b04d10673af236b8c50e404b92ff756a352c10bd}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:19:31.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-3084" for this suite.
Jan  6 14:19:37.673: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:19:37.831: INFO: namespace deployment-3084 deletion completed in 6.194786444s

• [SLOW TEST:23.491 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
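For reference, the strategy block in the dump above (where "25%!,(MISSING)" is just a printf quirk in the object's String() output) boils down to a RollingUpdate with 25% maxUnavailable and 25% maxSurge. A sketch using the names and image from this run:

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "sample-pod"}
	maxUnavailable := intstr.FromString("25%")
	maxSurge := intstr.FromString("25%")
	dep := appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-rolling-update-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Strategy: appsv1.DeploymentStrategy{
				Type: appsv1.RollingUpdateDeploymentStrategyType,
				// New pods come up before all old ones go away, which is why
				// the status above briefly shows Replicas:2 with one of each.
				RollingUpdate: &appsv1.RollingUpdateDeployment{
					MaxUnavailable: &maxUnavailable,
					MaxSurge:       &maxSurge,
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "redis",
						Image: "gcr.io/kubernetes-e2e-test-images/redis:1.0",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(dep, "", "  ")
	fmt.Println(string(out))
}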
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:19:37.831: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name projected-configmap-test-volume-baccd43e-af20-4734-8bc6-c229408ed72e
STEP: Creating a pod to test consume configMaps
Jan  6 14:19:38.071: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4ad2d90d-98c0-4733-9b75-6189fd7cb23e" in namespace "projected-1536" to be "success or failure"
Jan  6 14:19:38.080: INFO: Pod "pod-projected-configmaps-4ad2d90d-98c0-4733-9b75-6189fd7cb23e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.575655ms
Jan  6 14:19:40.113: INFO: Pod "pod-projected-configmaps-4ad2d90d-98c0-4733-9b75-6189fd7cb23e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041906068s
Jan  6 14:19:42.171: INFO: Pod "pod-projected-configmaps-4ad2d90d-98c0-4733-9b75-6189fd7cb23e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09965705s
Jan  6 14:19:44.177: INFO: Pod "pod-projected-configmaps-4ad2d90d-98c0-4733-9b75-6189fd7cb23e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.105735385s
Jan  6 14:19:46.184: INFO: Pod "pod-projected-configmaps-4ad2d90d-98c0-4733-9b75-6189fd7cb23e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.113018235s
STEP: Saw pod success
Jan  6 14:19:46.185: INFO: Pod "pod-projected-configmaps-4ad2d90d-98c0-4733-9b75-6189fd7cb23e" satisfied condition "success or failure"
Jan  6 14:19:46.188: INFO: Trying to get logs from node iruya-node pod pod-projected-configmaps-4ad2d90d-98c0-4733-9b75-6189fd7cb23e container projected-configmap-volume-test: 
STEP: delete the pod
Jan  6 14:19:46.283: INFO: Waiting for pod pod-projected-configmaps-4ad2d90d-98c0-4733-9b75-6189fd7cb23e to disappear
Jan  6 14:19:46.295: INFO: Pod pod-projected-configmaps-4ad2d90d-98c0-4733-9b75-6189fd7cb23e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:19:46.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1536" for this suite.
Jan  6 14:19:52.353: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:19:52.478: INFO: namespace projected-1536 deletion completed in 6.170089642s

• [SLOW TEST:14.647 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
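A projected volume bundles one or more sources (configMaps, secrets, downwardAPI, serviceAccountToken) behind a single mount point. A sketch of a pod consuming a configMap through one (the key name data-1 and the busybox image are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-configmap-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-configmap-test-volume",
								},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-configmap-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-configmap-volume/data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-configmap-volume",
					MountPath: "/etc/projected-configmap-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}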
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:19:52.479: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name secret-emptykey-test-9cde6bb1-6e19-4d48-872b-8cc709be5e31
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:19:52.599: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7176" for this suite.
Jan  6 14:19:58.629: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:19:58.777: INFO: namespace secrets-7176 deletion completed in 6.173199542s

• [SLOW TEST:6.298 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
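The create fails API-server-side validation before any pod is involved, and the same key rules cover both Secret and ConfigMap data (the ConfigMap variant of this test appears further below). A sketch using the apimachinery validation helper, assuming IsConfigMapKey is the applicable check:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/validation"
)

func main() {
	// An empty key fails the shared data-key validation, which is why the
	// test never reaches a running workload at all.
	for _, key := range []string{"", "valid-key"} {
		if errs := validation.IsConfigMapKey(key); len(errs) > 0 {
			fmt.Printf("key %q rejected: %v\n", key, errs)
		} else {
			fmt.Printf("key %q accepted\n", key)
		}
	}
}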
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:19:58.778: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  6 14:19:58.854: INFO: Waiting up to 5m0s for pod "pod-b660f619-0912-4312-a5f1-c4b5255bc0c8" in namespace "emptydir-9768" to be "success or failure"
Jan  6 14:19:58.876: INFO: Pod "pod-b660f619-0912-4312-a5f1-c4b5255bc0c8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.666025ms
Jan  6 14:20:00.887: INFO: Pod "pod-b660f619-0912-4312-a5f1-c4b5255bc0c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032613552s
Jan  6 14:20:02.901: INFO: Pod "pod-b660f619-0912-4312-a5f1-c4b5255bc0c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04690808s
Jan  6 14:20:04.922: INFO: Pod "pod-b660f619-0912-4312-a5f1-c4b5255bc0c8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067745528s
Jan  6 14:20:06.936: INFO: Pod "pod-b660f619-0912-4312-a5f1-c4b5255bc0c8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082234712s
STEP: Saw pod success
Jan  6 14:20:06.936: INFO: Pod "pod-b660f619-0912-4312-a5f1-c4b5255bc0c8" satisfied condition "success or failure"
Jan  6 14:20:06.943: INFO: Trying to get logs from node iruya-node pod pod-b660f619-0912-4312-a5f1-c4b5255bc0c8 container test-container: 
STEP: delete the pod
Jan  6 14:20:07.069: INFO: Waiting for pod pod-b660f619-0912-4312-a5f1-c4b5255bc0c8 to disappear
Jan  6 14:20:07.080: INFO: Pod pod-b660f619-0912-4312-a5f1-c4b5255bc0c8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:20:07.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9768" for this suite.
Jan  6 14:20:13.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:20:13.261: INFO: namespace emptydir-9768 deletion completed in 6.175711288s

• [SLOW TEST:14.483 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
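The (root,0777,tmpfs) case maps to an emptyDir with medium Memory, written as root with mode 0777. A rough sketch that approximates the test's mounttest checks with a shell one-liner (image and command are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" backs the emptyDir with tmpfs instead
					// of node disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory,
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				Command: []string{"/bin/sh", "-c",
					"chmod 0777 /test-volume && touch /test-volume/f && stat -c %a /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}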
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:20:13.261: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4561
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan  6 14:20:13.416: INFO: Found 0 stateful pods, waiting for 3
Jan  6 14:20:23.428: INFO: Found 2 stateful pods, waiting for 3
Jan  6 14:20:33.428: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 14:20:33.428: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 14:20:33.428: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  6 14:20:43.427: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 14:20:43.427: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 14:20:43.427: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  6 14:20:43.460: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan  6 14:20:53.552: INFO: Updating stateful set ss2
Jan  6 14:20:53.616: INFO: Waiting for Pod statefulset-4561/ss2-2 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
STEP: Restoring Pods to the correct revision when they are deleted
Jan  6 14:21:04.217: INFO: Found 2 stateful pods, waiting for 3
Jan  6 14:21:14.254: INFO: Found 2 stateful pods, waiting for 3
Jan  6 14:21:24.232: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 14:21:24.232: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 14:21:24.232: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan  6 14:21:24.259: INFO: Updating stateful set ss2
Jan  6 14:21:24.309: INFO: Waiting for Pod statefulset-4561/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  6 14:21:34.324: INFO: Waiting for Pod statefulset-4561/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  6 14:21:44.343: INFO: Updating stateful set ss2
Jan  6 14:21:44.374: INFO: Waiting for StatefulSet statefulset-4561/ss2 to complete update
Jan  6 14:21:44.374: INFO: Waiting for Pod statefulset-4561/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  6 14:21:54.386: INFO: Waiting for StatefulSet statefulset-4561/ss2 to complete update
Jan  6 14:21:54.386: INFO: Waiting for Pod statefulset-4561/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  6 14:22:04.390: INFO: Deleting all statefulset in ns statefulset-4561
Jan  6 14:22:04.395: INFO: Scaling statefulset ss2 to 0
Jan  6 14:22:34.466: INFO: Waiting for statefulset status.replicas updated to 0
Jan  6 14:22:34.471: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:22:34.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4561" for this suite.
Jan  6 14:22:42.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:22:42.723: INFO: namespace statefulset-4561 deletion completed in 8.209081086s

• [SLOW TEST:149.462 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
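Canary and phased rolling updates hinge on the RollingUpdate partition: only pods with ordinal at or above the partition take the new template. A sketch matching the ss2 set in this run (the selector labels are assumptions):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"app": "ss2"}
	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss2"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    int32Ptr(3),
			ServiceName: "test",
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			UpdateStrategy: appsv1.StatefulSetUpdateStrategy{
				Type: appsv1.RollingUpdateStatefulSetStrategyType,
				// Partition=2 means only ordinals >= 2 (here: ss2-2) pick up a
				// template change; that's the canary. Lowering the partition
				// to 1 and then 0 phases the rollout across ss2-1 and ss2-0,
				// the progression visible in the revision waits above.
				RollingUpdate: &appsv1.RollingUpdateStatefulSetStrategy{
					Partition: int32Ptr(2),
				},
			},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "nginx",
						Image: "docker.io/library/nginx:1.15-alpine",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(out))
}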
SSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:22:42.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-202e8bf8-4084-4c0a-a407-b526042a7592
STEP: Creating a pod to test consume secrets
Jan  6 14:22:42.845: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d24b7737-fa0c-4a60-8790-403ed8074f3f" in namespace "projected-1146" to be "success or failure"
Jan  6 14:22:42.860: INFO: Pod "pod-projected-secrets-d24b7737-fa0c-4a60-8790-403ed8074f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 14.777616ms
Jan  6 14:22:44.874: INFO: Pod "pod-projected-secrets-d24b7737-fa0c-4a60-8790-403ed8074f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028813263s
Jan  6 14:22:46.884: INFO: Pod "pod-projected-secrets-d24b7737-fa0c-4a60-8790-403ed8074f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038536593s
Jan  6 14:22:48.897: INFO: Pod "pod-projected-secrets-d24b7737-fa0c-4a60-8790-403ed8074f3f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051033509s
Jan  6 14:22:50.905: INFO: Pod "pod-projected-secrets-d24b7737-fa0c-4a60-8790-403ed8074f3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05904056s
STEP: Saw pod success
Jan  6 14:22:50.905: INFO: Pod "pod-projected-secrets-d24b7737-fa0c-4a60-8790-403ed8074f3f" satisfied condition "success or failure"
Jan  6 14:22:50.911: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-d24b7737-fa0c-4a60-8790-403ed8074f3f container projected-secret-volume-test: 
STEP: delete the pod
Jan  6 14:22:50.990: INFO: Waiting for pod pod-projected-secrets-d24b7737-fa0c-4a60-8790-403ed8074f3f to disappear
Jan  6 14:22:50.996: INFO: Pod pod-projected-secrets-d24b7737-fa0c-4a60-8790-403ed8074f3f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:22:50.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1146" for this suite.
Jan  6 14:22:57.092: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:22:57.200: INFO: namespace projected-1146 deletion completed in 6.128915837s

• [SLOW TEST:14.477 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
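"Mappings and Item Mode" means a secret key is remapped to a different file path with an explicit file mode, rather than projected under its own name with the default mode. A sketch (the secret name comes from the log; the key, path, and 0400 mode are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	mode := int32Ptr(0400) // octal file mode for the projected file
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{
									Name: "projected-secret-test-map",
								},
								// The mapping: key -> new path, with pinned mode.
								Items: []corev1.KeyToPath{{
									Key:  "data-1",
									Path: "new-path-data-1",
									Mode: mode,
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-secret-volume/new-path-data-1"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}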
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:22:57.201: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  6 14:23:05.517: INFO: Waiting up to 5m0s for pod "client-envvars-a62b6f7b-6f7b-46a8-b93e-9c2d0d107909" in namespace "pods-6525" to be "success or failure"
Jan  6 14:23:05.551: INFO: Pod "client-envvars-a62b6f7b-6f7b-46a8-b93e-9c2d0d107909": Phase="Pending", Reason="", readiness=false. Elapsed: 33.937699ms
Jan  6 14:23:07.561: INFO: Pod "client-envvars-a62b6f7b-6f7b-46a8-b93e-9c2d0d107909": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044340694s
Jan  6 14:23:09.574: INFO: Pod "client-envvars-a62b6f7b-6f7b-46a8-b93e-9c2d0d107909": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05677769s
Jan  6 14:23:11.587: INFO: Pod "client-envvars-a62b6f7b-6f7b-46a8-b93e-9c2d0d107909": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069912876s
Jan  6 14:23:13.595: INFO: Pod "client-envvars-a62b6f7b-6f7b-46a8-b93e-9c2d0d107909": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.078281911s
STEP: Saw pod success
Jan  6 14:23:13.595: INFO: Pod "client-envvars-a62b6f7b-6f7b-46a8-b93e-9c2d0d107909" satisfied condition "success or failure"
Jan  6 14:23:13.601: INFO: Trying to get logs from node iruya-node pod client-envvars-a62b6f7b-6f7b-46a8-b93e-9c2d0d107909 container env3cont: 
STEP: delete the pod
Jan  6 14:23:13.705: INFO: Waiting for pod client-envvars-a62b6f7b-6f7b-46a8-b93e-9c2d0d107909 to disappear
Jan  6 14:23:13.811: INFO: Pod client-envvars-a62b6f7b-6f7b-46a8-b93e-9c2d0d107909 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:23:13.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6525" for this suite.
Jan  6 14:23:57.874: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:23:58.045: INFO: namespace pods-6525 deletion completed in 44.221217476s

• [SLOW TEST:60.844 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
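The env3cont container passes because, for every service that exists when a pod starts, the kubelet injects docker-link-style variables: <NAME>_SERVICE_HOST, <NAME>_SERVICE_PORT, and a family of <NAME>_PORT_* variables, with the service name upper-cased and dashes turned into underscores. A tiny program that, run inside such a pod, lists them:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Filter the pod's environment down to the service-injected variables
	// the test asserts on.
	for _, kv := range os.Environ() {
		if strings.Contains(kv, "_SERVICE_HOST=") ||
			strings.Contains(kv, "_SERVICE_PORT=") {
			fmt.Println(kv)
		}
	}
}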
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:23:58.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap that has name configmap-test-emptyKey-9402d86f-619e-4993-819f-96da8d2c144d
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:23:58.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3432" for this suite.
Jan  6 14:24:04.145: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:24:04.253: INFO: namespace configmap-3432 deletion completed in 6.129685224s

• [SLOW TEST:6.209 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
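The rejection above comes from API-server validation: an empty string is not a valid ConfigMap data key, so the Create call itself fails and no object is stored. A minimal client-go sketch (signatures are the pre-0.18, context-free ones matching this v1.15 cluster; the kubeconfig path and namespace are assumptions):

    // emptykey.go: creating a ConfigMap with an empty data key must fail.
    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: same kubeconfig path as the test run.
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        cm := &v1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
            Data:       map[string]string{"": "value"}, // empty key: invalid
        }
        if _, err := client.CoreV1().ConfigMaps("default").Create(cm); err != nil {
            fmt.Println("rejected as expected:", err)
        } else {
            fmt.Println("unexpectedly accepted")
        }
    }

------------------------------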
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:24:04.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan  6 14:24:04.359: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-882,SelfLink:/api/v1/namespaces/watch-882/configmaps/e2e-watch-test-watch-closed,UID:4b494ee4-3af9-4e71-b58d-9bd460ccd560,ResourceVersion:19532120,Generation:0,CreationTimestamp:2020-01-06 14:24:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  6 14:24:04.359: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-882,SelfLink:/api/v1/namespaces/watch-882/configmaps/e2e-watch-test-watch-closed,UID:4b494ee4-3af9-4e71-b58d-9bd460ccd560,ResourceVersion:19532121,Generation:0,CreationTimestamp:2020-01-06 14:24:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan  6 14:24:04.395: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-882,SelfLink:/api/v1/namespaces/watch-882/configmaps/e2e-watch-test-watch-closed,UID:4b494ee4-3af9-4e71-b58d-9bd460ccd560,ResourceVersion:19532122,Generation:0,CreationTimestamp:2020-01-06 14:24:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  6 14:24:04.395: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-watch-closed,GenerateName:,Namespace:watch-882,SelfLink:/api/v1/namespaces/watch-882/configmaps/e2e-watch-test-watch-closed,UID:4b494ee4-3af9-4e71-b58d-9bd460ccd560,ResourceVersion:19532123,Generation:0,CreationTimestamp:2020-01-06 14:24:04 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: watch-closed-and-restarted,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:24:04.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-882" for this suite.
Jan  6 14:24:10.416: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:24:10.524: INFO: namespace watch-882 deletion completed in 6.125341959s

• [SLOW TEST:6.270 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
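The restart step above works because a watch can be opened at an explicit resourceVersion: passing the version of the last event delivered before the first watch closed makes the API server replay every change after it (here the second MODIFIED, then the DELETED). A client-go sketch (pre-0.18 signatures; kubeconfig path assumed; the resourceVersion is the one from the MODIFIED event in the log above):

    // rewatch.go: resume a watch from the last observed resourceVersion.
    package main

    import (
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/fields"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // Version of the last event seen before the first watch closed.
        lastRV := "19532121"

        w, err := client.CoreV1().ConfigMaps("watch-882").Watch(metav1.ListOptions{
            FieldSelector:   fields.OneTermEqualSelector("metadata.name", "e2e-watch-test-watch-closed").String(),
            ResourceVersion: lastRV, // replay everything after this version
        })
        if err != nil {
            panic(err)
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            fmt.Println("Got:", ev.Type) // MODIFIED, then DELETED, as in the log
        }
    }

------------------------------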
SSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:24:10.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1292
STEP: creating an rc
Jan  6 14:24:10.628: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6801'
Jan  6 14:24:11.058: INFO: stderr: ""
Jan  6 14:24:11.058: INFO: stdout: "replicationcontroller/redis-master created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Waiting for Redis master to start.
Jan  6 14:24:12.071: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:24:12.071: INFO: Found 0 / 1
Jan  6 14:24:13.071: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:24:13.071: INFO: Found 0 / 1
Jan  6 14:24:14.087: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:24:14.087: INFO: Found 0 / 1
Jan  6 14:24:15.070: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:24:15.070: INFO: Found 0 / 1
Jan  6 14:24:16.071: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:24:16.071: INFO: Found 0 / 1
Jan  6 14:24:17.086: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:24:17.086: INFO: Found 0 / 1
Jan  6 14:24:18.065: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:24:18.065: INFO: Found 0 / 1
Jan  6 14:24:19.067: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:24:19.067: INFO: Found 1 / 1
Jan  6 14:24:19.067: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  6 14:24:19.072: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:24:19.072: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
STEP: checking for matching strings
Jan  6 14:24:19.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qlzfj redis-master --namespace=kubectl-6801'
Jan  6 14:24:19.291: INFO: stderr: ""
Jan  6 14:24:19.291: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 06 Jan 14:24:17.499 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Jan 14:24:17.499 # Server started, Redis version 3.2.12\n1:M 06 Jan 14:24:17.499 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 Jan 14:24:17.500 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log lines
Jan  6 14:24:19.291: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qlzfj redis-master --namespace=kubectl-6801 --tail=1'
Jan  6 14:24:19.441: INFO: stderr: ""
Jan  6 14:24:19.441: INFO: stdout: "1:M 06 Jan 14:24:17.500 * The server is now ready to accept connections on port 6379\n"
STEP: limiting log bytes
Jan  6 14:24:19.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qlzfj redis-master --namespace=kubectl-6801 --limit-bytes=1'
Jan  6 14:24:19.608: INFO: stderr: ""
Jan  6 14:24:19.608: INFO: stdout: " "
STEP: exposing timestamps
Jan  6 14:24:19.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qlzfj redis-master --namespace=kubectl-6801 --tail=1 --timestamps'
Jan  6 14:24:19.719: INFO: stderr: ""
Jan  6 14:24:19.719: INFO: stdout: "2020-01-06T14:24:17.501065811Z 1:M 06 Jan 14:24:17.500 * The server is now ready to accept connections on port 6379\n"
STEP: restricting to a time range
Jan  6 14:24:22.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qlzfj redis-master --namespace=kubectl-6801 --since=1s'
Jan  6 14:24:22.489: INFO: stderr: ""
Jan  6 14:24:22.489: INFO: stdout: ""
Jan  6 14:24:22.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-qlzfj redis-master --namespace=kubectl-6801 --since=24h'
Jan  6 14:24:22.693: INFO: stderr: ""
Jan  6 14:24:22.693: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 06 Jan 14:24:17.499 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Jan 14:24:17.499 # Server started, Redis version 3.2.12\n1:M 06 Jan 14:24:17.499 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 Jan 14:24:17.500 * The server is now ready to accept connections on port 6379\n"
[AfterEach] [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1298
STEP: using delete to clean up resources
Jan  6 14:24:22.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6801'
Jan  6 14:24:22.835: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  6 14:24:22.835: INFO: stdout: "replicationcontroller \"redis-master\" force deleted\n"
Jan  6 14:24:22.836: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=nginx --no-headers --namespace=kubectl-6801'
Jan  6 14:24:22.944: INFO: stderr: "No resources found.\n"
Jan  6 14:24:22.944: INFO: stdout: ""
Jan  6 14:24:22.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=nginx --namespace=kubectl-6801 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  6 14:24:23.095: INFO: stderr: ""
Jan  6 14:24:23.095: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:24:23.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6801" for this suite.
Jan  6 14:24:45.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:24:45.278: INFO: namespace kubectl-6801 deletion completed in 22.180127035s

• [SLOW TEST:34.754 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
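The kubectl flags exercised above (--tail, --limit-bytes, --timestamps, --since) map onto fields of PodLogOptions on the pod's log subresource. A client-go sketch of the same filters (pre-0.18 signatures; in practice you would set one or two of these at a time, not all four):

    // taillogs.go: programmatic equivalents of the kubectl logs flags above.
    package main

    import (
        "io"
        "os"

        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        tail, limit, since := int64(1), int64(1), int64(86400) // --tail=1, --limit-bytes=1, --since=24h
        opts := &v1.PodLogOptions{
            Container:    "redis-master",
            TailLines:    &tail,
            LimitBytes:   &limit,
            Timestamps:   true, // --timestamps
            SinceSeconds: &since,
        }
        stream, err := client.CoreV1().Pods("kubectl-6801").GetLogs("redis-master-qlzfj", opts).Stream()
        if err != nil {
            panic(err)
        }
        defer stream.Close()
        io.Copy(os.Stdout, stream)
    }

------------------------------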
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:24:45.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:24:53.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-9885" for this suite.
Jan  6 14:24:59.605: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:24:59.789: INFO: namespace emptydir-wrapper-9885 deletion completed in 6.213197629s

• [SLOW TEST:14.510 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
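The cleanup STEPs above are the tail end of a setup this log does not show: one pod mounting a secret volume and a configmap volume side by side, verifying the kubelet's wrapper volumes do not collide. A sketch of such a pod spec (object and volume names here are hypothetical, not the test's actual ones):

    // wrapper.go: one pod, a secret volume and a configmap volume side by side.
    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-vs-configmaps"}, // hypothetical name
            Spec: v1.PodSpec{
                Volumes: []v1.Volume{
                    {Name: "secret-vol", VolumeSource: v1.VolumeSource{
                        Secret: &v1.SecretVolumeSource{SecretName: "wrapper-secret"}}},
                    {Name: "cm-vol", VolumeSource: v1.VolumeSource{
                        ConfigMap: &v1.ConfigMapVolumeSource{
                            LocalObjectReference: v1.LocalObjectReference{Name: "wrapper-configmap"}}}},
                },
                Containers: []v1.Container{{
                    Name:  "probe",
                    Image: "busybox",
                    VolumeMounts: []v1.VolumeMount{
                        {Name: "secret-vol", MountPath: "/etc/secret-vol"},
                        {Name: "cm-vol", MountPath: "/etc/cm-vol"},
                    },
                }},
            },
        }
        fmt.Printf("%+v\n", pod.Spec.Volumes)
    }

------------------------------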
S
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:24:59.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan  6 14:25:18.071: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8415 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 14:25:18.071: INFO: >>> kubeConfig: /root/.kube/config
I0106 14:25:18.152026       8 log.go:172] (0xc002e87760) (0xc002051540) Create stream
I0106 14:25:18.152128       8 log.go:172] (0xc002e87760) (0xc002051540) Stream added, broadcasting: 1
I0106 14:25:18.161679       8 log.go:172] (0xc002e87760) Reply frame received for 1
I0106 14:25:18.161721       8 log.go:172] (0xc002e87760) (0xc002ad1f40) Create stream
I0106 14:25:18.161731       8 log.go:172] (0xc002e87760) (0xc002ad1f40) Stream added, broadcasting: 3
I0106 14:25:18.167273       8 log.go:172] (0xc002e87760) Reply frame received for 3
I0106 14:25:18.167315       8 log.go:172] (0xc002e87760) (0xc000524f00) Create stream
I0106 14:25:18.167330       8 log.go:172] (0xc002e87760) (0xc000524f00) Stream added, broadcasting: 5
I0106 14:25:18.172044       8 log.go:172] (0xc002e87760) Reply frame received for 5
I0106 14:25:18.297048       8 log.go:172] (0xc002e87760) Data frame received for 3
I0106 14:25:18.297112       8 log.go:172] (0xc002ad1f40) (3) Data frame handling
I0106 14:25:18.297132       8 log.go:172] (0xc002ad1f40) (3) Data frame sent
I0106 14:25:18.551287       8 log.go:172] (0xc002e87760) Data frame received for 1
I0106 14:25:18.551439       8 log.go:172] (0xc002e87760) (0xc002ad1f40) Stream removed, broadcasting: 3
I0106 14:25:18.551537       8 log.go:172] (0xc002051540) (1) Data frame handling
I0106 14:25:18.551574       8 log.go:172] (0xc002051540) (1) Data frame sent
I0106 14:25:18.551635       8 log.go:172] (0xc002e87760) (0xc000524f00) Stream removed, broadcasting: 5
I0106 14:25:18.551697       8 log.go:172] (0xc002e87760) (0xc002051540) Stream removed, broadcasting: 1
I0106 14:25:18.551771       8 log.go:172] (0xc002e87760) Go away received
I0106 14:25:18.552235       8 log.go:172] (0xc002e87760) (0xc002051540) Stream removed, broadcasting: 1
I0106 14:25:18.552255       8 log.go:172] (0xc002e87760) (0xc002ad1f40) Stream removed, broadcasting: 3
I0106 14:25:18.552265       8 log.go:172] (0xc002e87760) (0xc000524f00) Stream removed, broadcasting: 5
Jan  6 14:25:18.552: INFO: Exec stderr: ""
Jan  6 14:25:18.552: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8415 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 14:25:18.552: INFO: >>> kubeConfig: /root/.kube/config
I0106 14:25:18.690835       8 log.go:172] (0xc002d6d550) (0xc002a2a820) Create stream
I0106 14:25:18.691275       8 log.go:172] (0xc002d6d550) (0xc002a2a820) Stream added, broadcasting: 1
I0106 14:25:18.708048       8 log.go:172] (0xc002d6d550) Reply frame received for 1
I0106 14:25:18.708150       8 log.go:172] (0xc002d6d550) (0xc002051680) Create stream
I0106 14:25:18.708163       8 log.go:172] (0xc002d6d550) (0xc002051680) Stream added, broadcasting: 3
I0106 14:25:18.713808       8 log.go:172] (0xc002d6d550) Reply frame received for 3
I0106 14:25:18.713828       8 log.go:172] (0xc002d6d550) (0xc002a2a8c0) Create stream
I0106 14:25:18.713836       8 log.go:172] (0xc002d6d550) (0xc002a2a8c0) Stream added, broadcasting: 5
I0106 14:25:18.716871       8 log.go:172] (0xc002d6d550) Reply frame received for 5
I0106 14:25:18.876485       8 log.go:172] (0xc002d6d550) Data frame received for 3
I0106 14:25:18.876670       8 log.go:172] (0xc002051680) (3) Data frame handling
I0106 14:25:18.876701       8 log.go:172] (0xc002051680) (3) Data frame sent
I0106 14:25:19.045682       8 log.go:172] (0xc002d6d550) Data frame received for 1
I0106 14:25:19.045950       8 log.go:172] (0xc002a2a820) (1) Data frame handling
I0106 14:25:19.045974       8 log.go:172] (0xc002a2a820) (1) Data frame sent
I0106 14:25:19.046003       8 log.go:172] (0xc002d6d550) (0xc002a2a820) Stream removed, broadcasting: 1
I0106 14:25:19.046163       8 log.go:172] (0xc002d6d550) (0xc002051680) Stream removed, broadcasting: 3
I0106 14:25:19.046196       8 log.go:172] (0xc002d6d550) (0xc002a2a8c0) Stream removed, broadcasting: 5
I0106 14:25:19.046242       8 log.go:172] (0xc002d6d550) Go away received
I0106 14:25:19.046401       8 log.go:172] (0xc002d6d550) (0xc002a2a820) Stream removed, broadcasting: 1
I0106 14:25:19.046412       8 log.go:172] (0xc002d6d550) (0xc002051680) Stream removed, broadcasting: 3
I0106 14:25:19.046424       8 log.go:172] (0xc002d6d550) (0xc002a2a8c0) Stream removed, broadcasting: 5
Jan  6 14:25:19.046: INFO: Exec stderr: ""
Jan  6 14:25:19.046: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8415 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 14:25:19.046: INFO: >>> kubeConfig: /root/.kube/config
I0106 14:25:19.102654       8 log.go:172] (0xc002fde790) (0xc002051a40) Create stream
I0106 14:25:19.102703       8 log.go:172] (0xc002fde790) (0xc002051a40) Stream added, broadcasting: 1
I0106 14:25:19.111315       8 log.go:172] (0xc002fde790) Reply frame received for 1
I0106 14:25:19.111368       8 log.go:172] (0xc002fde790) (0xc002a2a960) Create stream
I0106 14:25:19.111376       8 log.go:172] (0xc002fde790) (0xc002a2a960) Stream added, broadcasting: 3
I0106 14:25:19.113127       8 log.go:172] (0xc002fde790) Reply frame received for 3
I0106 14:25:19.113163       8 log.go:172] (0xc002fde790) (0xc0015e4000) Create stream
I0106 14:25:19.113172       8 log.go:172] (0xc002fde790) (0xc0015e4000) Stream added, broadcasting: 5
I0106 14:25:19.114126       8 log.go:172] (0xc002fde790) Reply frame received for 5
I0106 14:25:19.201131       8 log.go:172] (0xc002fde790) Data frame received for 3
I0106 14:25:19.201156       8 log.go:172] (0xc002a2a960) (3) Data frame handling
I0106 14:25:19.201169       8 log.go:172] (0xc002a2a960) (3) Data frame sent
I0106 14:25:19.335474       8 log.go:172] (0xc002fde790) Data frame received for 1
I0106 14:25:19.335620       8 log.go:172] (0xc002fde790) (0xc0015e4000) Stream removed, broadcasting: 5
I0106 14:25:19.335902       8 log.go:172] (0xc002051a40) (1) Data frame handling
I0106 14:25:19.335942       8 log.go:172] (0xc002051a40) (1) Data frame sent
I0106 14:25:19.336017       8 log.go:172] (0xc002fde790) (0xc002a2a960) Stream removed, broadcasting: 3
I0106 14:25:19.336075       8 log.go:172] (0xc002fde790) (0xc002051a40) Stream removed, broadcasting: 1
I0106 14:25:19.336102       8 log.go:172] (0xc002fde790) Go away received
I0106 14:25:19.336465       8 log.go:172] (0xc002fde790) (0xc002051a40) Stream removed, broadcasting: 1
I0106 14:25:19.336476       8 log.go:172] (0xc002fde790) (0xc002a2a960) Stream removed, broadcasting: 3
I0106 14:25:19.336511       8 log.go:172] (0xc002fde790) (0xc0015e4000) Stream removed, broadcasting: 5
Jan  6 14:25:19.336: INFO: Exec stderr: ""
Jan  6 14:25:19.336: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8415 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 14:25:19.336: INFO: >>> kubeConfig: /root/.kube/config
I0106 14:25:19.397231       8 log.go:172] (0xc002d5c580) (0xc002a2b040) Create stream
I0106 14:25:19.397285       8 log.go:172] (0xc002d5c580) (0xc002a2b040) Stream added, broadcasting: 1
I0106 14:25:19.403755       8 log.go:172] (0xc002d5c580) Reply frame received for 1
I0106 14:25:19.403787       8 log.go:172] (0xc002d5c580) (0xc002051ae0) Create stream
I0106 14:25:19.403800       8 log.go:172] (0xc002d5c580) (0xc002051ae0) Stream added, broadcasting: 3
I0106 14:25:19.404875       8 log.go:172] (0xc002d5c580) Reply frame received for 3
I0106 14:25:19.404896       8 log.go:172] (0xc002d5c580) (0xc002a2b180) Create stream
I0106 14:25:19.404905       8 log.go:172] (0xc002d5c580) (0xc002a2b180) Stream added, broadcasting: 5
I0106 14:25:19.406931       8 log.go:172] (0xc002d5c580) Reply frame received for 5
I0106 14:25:19.489486       8 log.go:172] (0xc002d5c580) Data frame received for 3
I0106 14:25:19.489558       8 log.go:172] (0xc002051ae0) (3) Data frame handling
I0106 14:25:19.489589       8 log.go:172] (0xc002051ae0) (3) Data frame sent
I0106 14:25:19.573558       8 log.go:172] (0xc002d5c580) Data frame received for 1
I0106 14:25:19.573602       8 log.go:172] (0xc002d5c580) (0xc002051ae0) Stream removed, broadcasting: 3
I0106 14:25:19.573641       8 log.go:172] (0xc002a2b040) (1) Data frame handling
I0106 14:25:19.573664       8 log.go:172] (0xc002a2b040) (1) Data frame sent
I0106 14:25:19.573673       8 log.go:172] (0xc002d5c580) (0xc002a2b040) Stream removed, broadcasting: 1
I0106 14:25:19.575059       8 log.go:172] (0xc002d5c580) (0xc002a2b180) Stream removed, broadcasting: 5
I0106 14:25:19.575084       8 log.go:172] (0xc002d5c580) Go away received
I0106 14:25:19.575173       8 log.go:172] (0xc002d5c580) (0xc002a2b040) Stream removed, broadcasting: 1
I0106 14:25:19.575204       8 log.go:172] (0xc002d5c580) (0xc002051ae0) Stream removed, broadcasting: 3
I0106 14:25:19.575219       8 log.go:172] (0xc002d5c580) (0xc002a2b180) Stream removed, broadcasting: 5
Jan  6 14:25:19.575: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan  6 14:25:19.575: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8415 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 14:25:19.575: INFO: >>> kubeConfig: /root/.kube/config
I0106 14:25:19.631311       8 log.go:172] (0xc000d6d600) (0xc002fb0280) Create stream
I0106 14:25:19.631372       8 log.go:172] (0xc000d6d600) (0xc002fb0280) Stream added, broadcasting: 1
I0106 14:25:19.639939       8 log.go:172] (0xc000d6d600) Reply frame received for 1
I0106 14:25:19.639965       8 log.go:172] (0xc000d6d600) (0xc0005250e0) Create stream
I0106 14:25:19.639972       8 log.go:172] (0xc000d6d600) (0xc0005250e0) Stream added, broadcasting: 3
I0106 14:25:19.641761       8 log.go:172] (0xc000d6d600) Reply frame received for 3
I0106 14:25:19.641782       8 log.go:172] (0xc000d6d600) (0xc002fb0320) Create stream
I0106 14:25:19.641794       8 log.go:172] (0xc000d6d600) (0xc002fb0320) Stream added, broadcasting: 5
I0106 14:25:19.643042       8 log.go:172] (0xc000d6d600) Reply frame received for 5
I0106 14:25:19.748869       8 log.go:172] (0xc000d6d600) Data frame received for 3
I0106 14:25:19.749005       8 log.go:172] (0xc0005250e0) (3) Data frame handling
I0106 14:25:19.749035       8 log.go:172] (0xc0005250e0) (3) Data frame sent
I0106 14:25:19.869957       8 log.go:172] (0xc000d6d600) Data frame received for 1
I0106 14:25:19.870020       8 log.go:172] (0xc002fb0280) (1) Data frame handling
I0106 14:25:19.870156       8 log.go:172] (0xc000d6d600) (0xc0005250e0) Stream removed, broadcasting: 3
I0106 14:25:19.870219       8 log.go:172] (0xc002fb0280) (1) Data frame sent
I0106 14:25:19.870237       8 log.go:172] (0xc000d6d600) (0xc002fb0320) Stream removed, broadcasting: 5
I0106 14:25:19.870291       8 log.go:172] (0xc000d6d600) (0xc002fb0280) Stream removed, broadcasting: 1
I0106 14:25:19.870301       8 log.go:172] (0xc000d6d600) Go away received
I0106 14:25:19.870577       8 log.go:172] (0xc000d6d600) (0xc002fb0280) Stream removed, broadcasting: 1
I0106 14:25:19.870600       8 log.go:172] (0xc000d6d600) (0xc0005250e0) Stream removed, broadcasting: 3
I0106 14:25:19.870609       8 log.go:172] (0xc000d6d600) (0xc002fb0320) Stream removed, broadcasting: 5
Jan  6 14:25:19.870: INFO: Exec stderr: ""
Jan  6 14:25:19.870: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8415 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 14:25:19.870: INFO: >>> kubeConfig: /root/.kube/config
I0106 14:25:19.927636       8 log.go:172] (0xc002fdfc30) (0xc002051f40) Create stream
I0106 14:25:19.927725       8 log.go:172] (0xc002fdfc30) (0xc002051f40) Stream added, broadcasting: 1
I0106 14:25:19.934289       8 log.go:172] (0xc002fdfc30) Reply frame received for 1
I0106 14:25:19.934347       8 log.go:172] (0xc002fdfc30) (0xc002fb03c0) Create stream
I0106 14:25:19.934371       8 log.go:172] (0xc002fdfc30) (0xc002fb03c0) Stream added, broadcasting: 3
I0106 14:25:19.936415       8 log.go:172] (0xc002fdfc30) Reply frame received for 3
I0106 14:25:19.936467       8 log.go:172] (0xc002fdfc30) (0xc003036000) Create stream
I0106 14:25:19.936477       8 log.go:172] (0xc002fdfc30) (0xc003036000) Stream added, broadcasting: 5
I0106 14:25:19.937932       8 log.go:172] (0xc002fdfc30) Reply frame received for 5
I0106 14:25:20.031723       8 log.go:172] (0xc002fdfc30) Data frame received for 3
I0106 14:25:20.031769       8 log.go:172] (0xc002fb03c0) (3) Data frame handling
I0106 14:25:20.031787       8 log.go:172] (0xc002fb03c0) (3) Data frame sent
I0106 14:25:20.141300       8 log.go:172] (0xc002fdfc30) Data frame received for 1
I0106 14:25:20.141346       8 log.go:172] (0xc002051f40) (1) Data frame handling
I0106 14:25:20.141500       8 log.go:172] (0xc002051f40) (1) Data frame sent
I0106 14:25:20.142104       8 log.go:172] (0xc002fdfc30) (0xc002051f40) Stream removed, broadcasting: 1
I0106 14:25:20.143190       8 log.go:172] (0xc002fdfc30) (0xc002fb03c0) Stream removed, broadcasting: 3
I0106 14:25:20.143250       8 log.go:172] (0xc002fdfc30) (0xc003036000) Stream removed, broadcasting: 5
I0106 14:25:20.143269       8 log.go:172] (0xc002fdfc30) Go away received
I0106 14:25:20.143311       8 log.go:172] (0xc002fdfc30) (0xc002051f40) Stream removed, broadcasting: 1
I0106 14:25:20.143323       8 log.go:172] (0xc002fdfc30) (0xc002fb03c0) Stream removed, broadcasting: 3
I0106 14:25:20.143331       8 log.go:172] (0xc002fdfc30) (0xc003036000) Stream removed, broadcasting: 5
Jan  6 14:25:20.143: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan  6 14:25:20.143: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8415 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 14:25:20.143: INFO: >>> kubeConfig: /root/.kube/config
I0106 14:25:20.216219       8 log.go:172] (0xc002edc0b0) (0xc0015e4320) Create stream
I0106 14:25:20.216254       8 log.go:172] (0xc002edc0b0) (0xc0015e4320) Stream added, broadcasting: 1
I0106 14:25:20.222329       8 log.go:172] (0xc002edc0b0) Reply frame received for 1
I0106 14:25:20.222348       8 log.go:172] (0xc002edc0b0) (0xc000525180) Create stream
I0106 14:25:20.222354       8 log.go:172] (0xc002edc0b0) (0xc000525180) Stream added, broadcasting: 3
I0106 14:25:20.223657       8 log.go:172] (0xc002edc0b0) Reply frame received for 3
I0106 14:25:20.223718       8 log.go:172] (0xc002edc0b0) (0xc003036140) Create stream
I0106 14:25:20.223728       8 log.go:172] (0xc002edc0b0) (0xc003036140) Stream added, broadcasting: 5
I0106 14:25:20.224684       8 log.go:172] (0xc002edc0b0) Reply frame received for 5
I0106 14:25:20.306241       8 log.go:172] (0xc002edc0b0) Data frame received for 3
I0106 14:25:20.306322       8 log.go:172] (0xc000525180) (3) Data frame handling
I0106 14:25:20.306372       8 log.go:172] (0xc000525180) (3) Data frame sent
I0106 14:25:20.417826       8 log.go:172] (0xc002edc0b0) Data frame received for 1
I0106 14:25:20.417952       8 log.go:172] (0xc002edc0b0) (0xc000525180) Stream removed, broadcasting: 3
I0106 14:25:20.418038       8 log.go:172] (0xc0015e4320) (1) Data frame handling
I0106 14:25:20.418066       8 log.go:172] (0xc0015e4320) (1) Data frame sent
I0106 14:25:20.418129       8 log.go:172] (0xc002edc0b0) (0xc003036140) Stream removed, broadcasting: 5
I0106 14:25:20.418172       8 log.go:172] (0xc002edc0b0) (0xc0015e4320) Stream removed, broadcasting: 1
I0106 14:25:20.418205       8 log.go:172] (0xc002edc0b0) Go away received
I0106 14:25:20.418485       8 log.go:172] (0xc002edc0b0) (0xc0015e4320) Stream removed, broadcasting: 1
I0106 14:25:20.418512       8 log.go:172] (0xc002edc0b0) (0xc000525180) Stream removed, broadcasting: 3
I0106 14:25:20.418521       8 log.go:172] (0xc002edc0b0) (0xc003036140) Stream removed, broadcasting: 5
Jan  6 14:25:20.418: INFO: Exec stderr: ""
Jan  6 14:25:20.418: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8415 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 14:25:20.418: INFO: >>> kubeConfig: /root/.kube/config
I0106 14:25:20.495924       8 log.go:172] (0xc002ac5760) (0xc000525680) Create stream
I0106 14:25:20.496108       8 log.go:172] (0xc002ac5760) (0xc000525680) Stream added, broadcasting: 1
I0106 14:25:20.504766       8 log.go:172] (0xc002ac5760) Reply frame received for 1
I0106 14:25:20.504815       8 log.go:172] (0xc002ac5760) (0xc0015e4460) Create stream
I0106 14:25:20.504827       8 log.go:172] (0xc002ac5760) (0xc0015e4460) Stream added, broadcasting: 3
I0106 14:25:20.507636       8 log.go:172] (0xc002ac5760) Reply frame received for 3
I0106 14:25:20.507655       8 log.go:172] (0xc002ac5760) (0xc0015e4500) Create stream
I0106 14:25:20.507663       8 log.go:172] (0xc002ac5760) (0xc0015e4500) Stream added, broadcasting: 5
I0106 14:25:20.510188       8 log.go:172] (0xc002ac5760) Reply frame received for 5
I0106 14:25:20.670657       8 log.go:172] (0xc002ac5760) Data frame received for 3
I0106 14:25:20.670721       8 log.go:172] (0xc0015e4460) (3) Data frame handling
I0106 14:25:20.670736       8 log.go:172] (0xc0015e4460) (3) Data frame sent
I0106 14:25:20.784514       8 log.go:172] (0xc002ac5760) Data frame received for 1
I0106 14:25:20.784573       8 log.go:172] (0xc000525680) (1) Data frame handling
I0106 14:25:20.784609       8 log.go:172] (0xc000525680) (1) Data frame sent
I0106 14:25:20.784890       8 log.go:172] (0xc002ac5760) (0xc000525680) Stream removed, broadcasting: 1
I0106 14:25:20.784942       8 log.go:172] (0xc002ac5760) (0xc0015e4460) Stream removed, broadcasting: 3
I0106 14:25:20.784989       8 log.go:172] (0xc002ac5760) (0xc0015e4500) Stream removed, broadcasting: 5
I0106 14:25:20.785066       8 log.go:172] (0xc002ac5760) Go away received
I0106 14:25:20.785230       8 log.go:172] (0xc002ac5760) (0xc000525680) Stream removed, broadcasting: 1
I0106 14:25:20.785242       8 log.go:172] (0xc002ac5760) (0xc0015e4460) Stream removed, broadcasting: 3
I0106 14:25:20.785250       8 log.go:172] (0xc002ac5760) (0xc0015e4500) Stream removed, broadcasting: 5
Jan  6 14:25:20.785: INFO: Exec stderr: ""
Jan  6 14:25:20.785: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-8415 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 14:25:20.785: INFO: >>> kubeConfig: /root/.kube/config
I0106 14:25:20.839566       8 log.go:172] (0xc002edcfd0) (0xc0015e4960) Create stream
I0106 14:25:20.839599       8 log.go:172] (0xc002edcfd0) (0xc0015e4960) Stream added, broadcasting: 1
I0106 14:25:20.844048       8 log.go:172] (0xc002edcfd0) Reply frame received for 1
I0106 14:25:20.844071       8 log.go:172] (0xc002edcfd0) (0xc0030361e0) Create stream
I0106 14:25:20.844081       8 log.go:172] (0xc002edcfd0) (0xc0030361e0) Stream added, broadcasting: 3
I0106 14:25:20.846091       8 log.go:172] (0xc002edcfd0) Reply frame received for 3
I0106 14:25:20.846106       8 log.go:172] (0xc002edcfd0) (0xc003036280) Create stream
I0106 14:25:20.846114       8 log.go:172] (0xc002edcfd0) (0xc003036280) Stream added, broadcasting: 5
I0106 14:25:20.848293       8 log.go:172] (0xc002edcfd0) Reply frame received for 5
I0106 14:25:20.933727       8 log.go:172] (0xc002edcfd0) Data frame received for 3
I0106 14:25:20.933769       8 log.go:172] (0xc0030361e0) (3) Data frame handling
I0106 14:25:20.933786       8 log.go:172] (0xc0030361e0) (3) Data frame sent
I0106 14:25:21.035552       8 log.go:172] (0xc002edcfd0) (0xc0030361e0) Stream removed, broadcasting: 3
I0106 14:25:21.035623       8 log.go:172] (0xc002edcfd0) Data frame received for 1
I0106 14:25:21.035647       8 log.go:172] (0xc002edcfd0) (0xc003036280) Stream removed, broadcasting: 5
I0106 14:25:21.035683       8 log.go:172] (0xc0015e4960) (1) Data frame handling
I0106 14:25:21.035700       8 log.go:172] (0xc0015e4960) (1) Data frame sent
I0106 14:25:21.035719       8 log.go:172] (0xc002edcfd0) (0xc0015e4960) Stream removed, broadcasting: 1
I0106 14:25:21.035734       8 log.go:172] (0xc002edcfd0) Go away received
I0106 14:25:21.036059       8 log.go:172] (0xc002edcfd0) (0xc0015e4960) Stream removed, broadcasting: 1
I0106 14:25:21.036068       8 log.go:172] (0xc002edcfd0) (0xc0030361e0) Stream removed, broadcasting: 3
I0106 14:25:21.036074       8 log.go:172] (0xc002edcfd0) (0xc003036280) Stream removed, broadcasting: 5
Jan  6 14:25:21.036: INFO: Exec stderr: ""
Jan  6 14:25:21.036: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-8415 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 14:25:21.036: INFO: >>> kubeConfig: /root/.kube/config
I0106 14:25:21.098241       8 log.go:172] (0xc002edd8c0) (0xc0015e4dc0) Create stream
I0106 14:25:21.098275       8 log.go:172] (0xc002edd8c0) (0xc0015e4dc0) Stream added, broadcasting: 1
I0106 14:25:21.108818       8 log.go:172] (0xc002edd8c0) Reply frame received for 1
I0106 14:25:21.108907       8 log.go:172] (0xc002edd8c0) (0xc000b061e0) Create stream
I0106 14:25:21.108921       8 log.go:172] (0xc002edd8c0) (0xc000b061e0) Stream added, broadcasting: 3
I0106 14:25:21.110742       8 log.go:172] (0xc002edd8c0) Reply frame received for 3
I0106 14:25:21.110770       8 log.go:172] (0xc002edd8c0) (0xc0020500a0) Create stream
I0106 14:25:21.110779       8 log.go:172] (0xc002edd8c0) (0xc0020500a0) Stream added, broadcasting: 5
I0106 14:25:21.115041       8 log.go:172] (0xc002edd8c0) Reply frame received for 5
I0106 14:25:21.185435       8 log.go:172] (0xc002edd8c0) Data frame received for 3
I0106 14:25:21.185478       8 log.go:172] (0xc000b061e0) (3) Data frame handling
I0106 14:25:21.185501       8 log.go:172] (0xc000b061e0) (3) Data frame sent
I0106 14:25:21.281741       8 log.go:172] (0xc002edd8c0) Data frame received for 1
I0106 14:25:21.281920       8 log.go:172] (0xc002edd8c0) (0xc000b061e0) Stream removed, broadcasting: 3
I0106 14:25:21.282035       8 log.go:172] (0xc0015e4dc0) (1) Data frame handling
I0106 14:25:21.282052       8 log.go:172] (0xc0015e4dc0) (1) Data frame sent
I0106 14:25:21.282060       8 log.go:172] (0xc002edd8c0) (0xc0015e4dc0) Stream removed, broadcasting: 1
I0106 14:25:21.282222       8 log.go:172] (0xc002edd8c0) (0xc0020500a0) Stream removed, broadcasting: 5
I0106 14:25:21.282279       8 log.go:172] (0xc002edd8c0) Go away received
I0106 14:25:21.282401       8 log.go:172] (0xc002edd8c0) (0xc0015e4dc0) Stream removed, broadcasting: 1
I0106 14:25:21.282413       8 log.go:172] (0xc002edd8c0) (0xc000b061e0) Stream removed, broadcasting: 3
I0106 14:25:21.282432       8 log.go:172] (0xc002edd8c0) (0xc0020500a0) Stream removed, broadcasting: 5
Jan  6 14:25:21.282: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:25:21.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-8415" for this suite.
Jan  6 14:26:07.707: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:26:07.844: INFO: namespace e2e-kubelet-etc-hosts-8415 deletion completed in 46.546316768s

• [SLOW TEST:68.055 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
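The ExecWithOptions / "Create stream" / "Reply frame" lines above are an SPDY exec session against the pod's exec subresource: one stream carries the final status (1 in the frames above), another stdout (3), another stderr (5). A client-go sketch of the same call (pre-0.18 signatures; kubeconfig path assumed):

    // etchosts.go: run `cat /etc/hosts` inside a container over SPDY.
    package main

    import (
        "bytes"
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/remotecommand"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(config)

        // POST to the pod's exec subresource, as in ExecWithOptions above.
        req := client.CoreV1().RESTClient().Post().
            Resource("pods").Namespace("e2e-kubelet-etc-hosts-8415").
            Name("test-pod").SubResource("exec").
            VersionedParams(&v1.PodExecOptions{
                Container: "busybox-1",
                Command:   []string{"cat", "/etc/hosts"},
                Stdout:    true,
                Stderr:    true,
            }, scheme.ParameterCodec)

        exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
        if err != nil {
            panic(err)
        }
        var stdout, stderr bytes.Buffer
        // This single call is the whole stream dance logged above.
        if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
            panic(err)
        }
        fmt.Print(stdout.String())
    }

------------------------------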
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:26:07.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating the pod
Jan  6 14:26:16.565: INFO: Successfully updated pod "labelsupdate372a154a-25b6-405e-afbc-6839f21c2484"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:26:18.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9636" for this suite.
Jan  6 14:26:41.008: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:26:41.135: INFO: namespace downward-api-9636 deletion completed in 22.16492448s

• [SLOW TEST:33.290 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
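Behind "Creating the pod" and "Successfully updated pod" above: a downwardAPI volume projects metadata.labels into a file, and the kubelet rewrites that file after the pod's labels are patched, which is what the test then reads back. A sketch of the relevant spec (names and the polling command are assumptions):

    // labelsvol.go: project pod labels into a file via a downwardAPI volume.
    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        pod := &v1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "labelsupdate",                      // hypothetical name
                Labels: map[string]string{"key": "value1"},  // later patched; file follows
            },
            Spec: v1.PodSpec{
                Volumes: []v1.Volume{{
                    Name: "podinfo",
                    VolumeSource: v1.VolumeSource{
                        DownwardAPI: &v1.DownwardAPIVolumeSource{
                            Items: []v1.DownwardAPIVolumeFile{{
                                Path:     "labels",
                                FieldRef: &v1.ObjectFieldSelector{FieldPath: "metadata.labels"},
                            }},
                        },
                    },
                }},
                Containers: []v1.Container{{
                    Name:         "client-container",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
                    VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
        fmt.Println(pod.Name)
    }

------------------------------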
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:26:41.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be submitted and removed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:26:41.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3481" for this suite.
Jan  6 14:27:03.338: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:27:03.459: INFO: namespace pods-3481 deletion completed in 22.137589825s

• [SLOW TEST:22.323 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should be submitted and removed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
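The "verifying QOS class" step checks status.qosClass, which the API server derives from the resource spec: requests equal to limits on every container yields Guaranteed (requests below limits: Burstable; none set: BestEffort). A sketch of a Guaranteed-class container (values illustrative):

    // qos.go: resource spec shape that yields QOSClass Guaranteed.
    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        res := v1.ResourceList{
            v1.ResourceCPU:    resource.MustParse("100m"),
            v1.ResourceMemory: resource.MustParse("100Mi"),
        }
        c := v1.Container{
            Name:  "agnhost", // hypothetical container name
            Image: "busybox",
            Resources: v1.ResourceRequirements{
                Requests: res,
                Limits:   res, // requests == limits on all containers => Guaranteed
            },
        }
        fmt.Println(c.Resources.Limits.Cpu().String()) // 100m
    }

------------------------------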
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:27:03.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-secret-s55v
STEP: Creating a pod to test atomic-volume-subpath
Jan  6 14:27:03.562: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-s55v" in namespace "subpath-8231" to be "success or failure"
Jan  6 14:27:03.636: INFO: Pod "pod-subpath-test-secret-s55v": Phase="Pending", Reason="", readiness=false. Elapsed: 73.934777ms
Jan  6 14:27:05.643: INFO: Pod "pod-subpath-test-secret-s55v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081446618s
Jan  6 14:27:07.649: INFO: Pod "pod-subpath-test-secret-s55v": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087697716s
Jan  6 14:27:09.660: INFO: Pod "pod-subpath-test-secret-s55v": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098411349s
Jan  6 14:27:11.725: INFO: Pod "pod-subpath-test-secret-s55v": Phase="Running", Reason="", readiness=true. Elapsed: 8.163251995s
Jan  6 14:27:13.733: INFO: Pod "pod-subpath-test-secret-s55v": Phase="Running", Reason="", readiness=true. Elapsed: 10.171125549s
Jan  6 14:27:15.743: INFO: Pod "pod-subpath-test-secret-s55v": Phase="Running", Reason="", readiness=true. Elapsed: 12.18100019s
Jan  6 14:27:17.750: INFO: Pod "pod-subpath-test-secret-s55v": Phase="Running", Reason="", readiness=true. Elapsed: 14.188180137s
Jan  6 14:27:19.758: INFO: Pod "pod-subpath-test-secret-s55v": Phase="Running", Reason="", readiness=true. Elapsed: 16.196216187s
Jan  6 14:27:21.771: INFO: Pod "pod-subpath-test-secret-s55v": Phase="Running", Reason="", readiness=true. Elapsed: 18.209458966s
Jan  6 14:27:23.799: INFO: Pod "pod-subpath-test-secret-s55v": Phase="Running", Reason="", readiness=true. Elapsed: 20.237372641s
Jan  6 14:27:25.810: INFO: Pod "pod-subpath-test-secret-s55v": Phase="Running", Reason="", readiness=true. Elapsed: 22.248223266s
Jan  6 14:27:27.821: INFO: Pod "pod-subpath-test-secret-s55v": Phase="Running", Reason="", readiness=true. Elapsed: 24.259283238s
Jan  6 14:27:29.830: INFO: Pod "pod-subpath-test-secret-s55v": Phase="Running", Reason="", readiness=true. Elapsed: 26.268177063s
Jan  6 14:27:31.837: INFO: Pod "pod-subpath-test-secret-s55v": Phase="Running", Reason="", readiness=true. Elapsed: 28.275684297s
Jan  6 14:27:33.855: INFO: Pod "pod-subpath-test-secret-s55v": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.293066284s
STEP: Saw pod success
Jan  6 14:27:33.855: INFO: Pod "pod-subpath-test-secret-s55v" satisfied condition "success or failure"
Jan  6 14:27:33.865: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-secret-s55v container test-container-subpath-secret-s55v: 
STEP: delete the pod
Jan  6 14:27:34.037: INFO: Waiting for pod pod-subpath-test-secret-s55v to disappear
Jan  6 14:27:34.043: INFO: Pod pod-subpath-test-secret-s55v no longer exists
STEP: Deleting pod pod-subpath-test-secret-s55v
Jan  6 14:27:34.043: INFO: Deleting pod "pod-subpath-test-secret-s55v" in namespace "subpath-8231"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:27:34.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-8231" for this suite.
Jan  6 14:27:40.078: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:27:40.226: INFO: namespace subpath-8231 deletion completed in 6.174638578s

• [SLOW TEST:36.766 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
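The pod-subpath-test-secret-* pod above mounts a single key of a secret volume via subPath, and the test container reads that file repeatedly while the pod runs. A sketch of the mount shape (volume and path names are hypothetical):

    // subpath.go: mount one key of a secret volume via subPath.
    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        mount := v1.VolumeMount{
            Name:      "test-volume",                   // a secret-backed volume
            MountPath: "/test-volume/subpath-secret",   // the file the container reads
            SubPath:   "subpath-secret",                // one key of the secret volume
        }
        fmt.Printf("%+v\n", mount)
    }

------------------------------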
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:27:40.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  6 14:28:08.409: INFO: Container started at 2020-01-06 14:27:46 +0000 UTC, pod became ready at 2020-01-06 14:28:07 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:28:08.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8250" for this suite.
Jan  6 14:28:30.451: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:28:30.631: INFO: namespace container-probe-8250 deletion completed in 22.214629961s

• [SLOW TEST:50.404 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
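The single INFO line above is the whole assertion: the container started at 14:27:46 but was not marked Ready until 14:28:07, because the kubelet runs no readiness probe before initialDelaySeconds elapses, and the restart count stayed at zero throughout. A sketch of such a probe (the delay value and command are assumptions, chosen to be consistent with the ~21s gap observed):

    // readiness.go: a readiness probe gated by an initial delay.
    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
    )

    func main() {
        probe := &v1.Probe{
            Handler: v1.Handler{
                Exec: &v1.ExecAction{Command: []string{"/bin/true"}}, // assumed command
            },
            InitialDelaySeconds: 20, // no probe (so no Ready) before this elapses
            PeriodSeconds:       5,
            FailureThreshold:    3,
        }
        fmt.Printf("ready no earlier than %ds after start\n", probe.InitialDelaySeconds)
    }

------------------------------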
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:28:30.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  6 14:28:30.743: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1c27cc2a-36ca-48de-8e27-b49cb3cf8d85" in namespace "downward-api-4449" to be "success or failure"
Jan  6 14:28:30.770: INFO: Pod "downwardapi-volume-1c27cc2a-36ca-48de-8e27-b49cb3cf8d85": Phase="Pending", Reason="", readiness=false. Elapsed: 26.570742ms
Jan  6 14:28:32.777: INFO: Pod "downwardapi-volume-1c27cc2a-36ca-48de-8e27-b49cb3cf8d85": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033544937s
Jan  6 14:28:34.784: INFO: Pod "downwardapi-volume-1c27cc2a-36ca-48de-8e27-b49cb3cf8d85": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040775316s
Jan  6 14:28:36.801: INFO: Pod "downwardapi-volume-1c27cc2a-36ca-48de-8e27-b49cb3cf8d85": Phase="Pending", Reason="", readiness=false. Elapsed: 6.057679221s
Jan  6 14:28:38.807: INFO: Pod "downwardapi-volume-1c27cc2a-36ca-48de-8e27-b49cb3cf8d85": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0633565s
STEP: Saw pod success
Jan  6 14:28:38.807: INFO: Pod "downwardapi-volume-1c27cc2a-36ca-48de-8e27-b49cb3cf8d85" satisfied condition "success or failure"
Jan  6 14:28:38.811: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1c27cc2a-36ca-48de-8e27-b49cb3cf8d85 container client-container: 
STEP: delete the pod
Jan  6 14:28:38.982: INFO: Waiting for pod downwardapi-volume-1c27cc2a-36ca-48de-8e27-b49cb3cf8d85 to disappear
Jan  6 14:28:38.990: INFO: Pod downwardapi-volume-1c27cc2a-36ca-48de-8e27-b49cb3cf8d85 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:28:38.990: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4449" for this suite.
Jan  6 14:28:45.016: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:28:45.105: INFO: namespace downward-api-4449 deletion completed in 6.105228244s

• [SLOW TEST:14.474 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
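The downwardAPI volume above projects limits.cpu into a file; since the test container declares no CPU limit, the kubelet substitutes the node's allocatable CPU, which is what the test expects to read back. A sketch of the volume item (container name is an assumption; the divisor reports the value in millicores):

    // cpulimit.go: project a container's effective CPU limit into a file.
    package main

    import (
        "fmt"

        v1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
    )

    func main() {
        item := v1.DownwardAPIVolumeFile{
            Path: "cpu_limit",
            ResourceFieldRef: &v1.ResourceFieldSelector{
                ContainerName: "client-container", // assumed name
                Resource:      "limits.cpu",       // falls back to node allocatable if unset
                Divisor:       resource.MustParse("1m"),
            },
        }
        fmt.Printf("%+v\n", item)
    }

------------------------------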
SSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:28:45.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  6 14:28:45.258: INFO: Waiting up to 5m0s for pod "downward-api-c06686de-4204-421a-8cbd-cb0dd2096223" in namespace "downward-api-3234" to be "success or failure"
Jan  6 14:28:45.411: INFO: Pod "downward-api-c06686de-4204-421a-8cbd-cb0dd2096223": Phase="Pending", Reason="", readiness=false. Elapsed: 152.798257ms
Jan  6 14:28:47.422: INFO: Pod "downward-api-c06686de-4204-421a-8cbd-cb0dd2096223": Phase="Pending", Reason="", readiness=false. Elapsed: 2.164148376s
Jan  6 14:28:49.436: INFO: Pod "downward-api-c06686de-4204-421a-8cbd-cb0dd2096223": Phase="Pending", Reason="", readiness=false. Elapsed: 4.178204921s
Jan  6 14:28:51.448: INFO: Pod "downward-api-c06686de-4204-421a-8cbd-cb0dd2096223": Phase="Pending", Reason="", readiness=false. Elapsed: 6.189371029s
Jan  6 14:28:53.459: INFO: Pod "downward-api-c06686de-4204-421a-8cbd-cb0dd2096223": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.200738817s
STEP: Saw pod success
Jan  6 14:28:53.459: INFO: Pod "downward-api-c06686de-4204-421a-8cbd-cb0dd2096223" satisfied condition "success or failure"
Jan  6 14:28:53.464: INFO: Trying to get logs from node iruya-node pod downward-api-c06686de-4204-421a-8cbd-cb0dd2096223 container dapi-container: 
STEP: delete the pod
Jan  6 14:28:53.529: INFO: Waiting for pod downward-api-c06686de-4204-421a-8cbd-cb0dd2096223 to disappear
Jan  6 14:28:53.538: INFO: Pod downward-api-c06686de-4204-421a-8cbd-cb0dd2096223 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:28:53.538: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3234" for this suite.
Jan  6 14:28:59.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:28:59.771: INFO: namespace downward-api-3234 deletion completed in 6.225528629s

• [SLOW TEST:14.665 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
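
The spec above injects the pod's own UID into the container environment through the downward API. A minimal sketch of an equivalent pod (busybox image and names are placeholders, not what the suite generates):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-uid-demo         # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox                    # stand-in for the suite's test image
    command: ["sh", "-c", "echo POD_UID=$POD_UID"]
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid     # the UID the test then greps for in the pod logs
EOF

(The quoted 'EOF' keeps the outer shell from expanding $POD_UID; it is resolved inside the container.)
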
SSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:28:59.771: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:72
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  6 14:28:59.952: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan  6 14:29:04.960: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan  6 14:29:06.978: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:66
Jan  6 14:29:07.018: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-deployment,GenerateName:,Namespace:deployment-8662,SelfLink:/apis/apps/v1/namespaces/deployment-8662/deployments/test-cleanup-deployment,UID:af0455ea-0685-4089-9e4f-c528cc799a9f,ResourceVersion:19532809,Generation:1,CreationTimestamp:2020-01-06 14:29:06 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:DeploymentSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{redis gcr.io/kubernetes-e2e-test-images/redis:1.0 [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[],ReadyReplicas:0,CollisionCount:nil,},}

Jan  6 14:29:07.022: INFO: New ReplicaSet of Deployment "test-cleanup-deployment" is nil.
Jan  6 14:29:07.022: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment":
Jan  6 14:29:07.022: INFO: &ReplicaSet{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller,GenerateName:,Namespace:deployment-8662,SelfLink:/apis/apps/v1/namespaces/deployment-8662/replicasets/test-cleanup-controller,UID:78596f4f-5fdb-47c5-b256-7e04ba6583df,ResourceVersion:19532810,Generation:1,CreationTimestamp:2020-01-06 14:28:59 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 Deployment test-cleanup-deployment af0455ea-0685-4089-9e4f-c528cc799a9f 0xc002347e2f 0xc002347e60}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:ReplicaSetSpec{Replicas:*1,Selector:&k8s_io_apimachinery_pkg_apis_meta_v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: nginx,},MatchExpressions:[],},Template:k8s_io_api_core_v1.PodTemplateSpec{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:,GenerateName:,Namespace:,SelfLink:,UID:,ResourceVersion:,Generation:0,CreationTimestamp:0001-01-01 00:00:00 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:,DeprecatedServiceAccount:,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[],HostAliases:[],PriorityClassName:,Priority:nil,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:nil,PreemptionPolicy:nil,},},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[],},}
Jan  6 14:29:07.055: INFO: Pod "test-cleanup-controller-qvpkv" is available:
&Pod{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:test-cleanup-controller-qvpkv,GenerateName:test-cleanup-controller-,Namespace:deployment-8662,SelfLink:/api/v1/namespaces/deployment-8662/pods/test-cleanup-controller-qvpkv,UID:ea532506-1389-4a65-915d-7230d9aa1e5e,ResourceVersion:19532807,Generation:0,CreationTimestamp:2020-01-06 14:28:59 +0000 UTC,DeletionTimestamp:<nil>,DeletionGracePeriodSeconds:nil,Labels:map[string]string{name: cleanup-pod,pod: nginx,},Annotations:map[string]string{},OwnerReferences:[{apps/v1 ReplicaSet test-cleanup-controller 78596f4f-5fdb-47c5-b256-7e04ba6583df 0xc0026dd337 0xc0026dd338}],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Spec:PodSpec{Volumes:[{default-token-mcwcc {nil nil nil nil nil SecretVolumeSource{SecretName:default-token-mcwcc,Items:[],DefaultMode:*420,Optional:nil,} nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil nil}}],Containers:[{nginx docker.io/library/nginx:1.14-alpine [] []  [] [] [] {map[] map[]} [{default-token-mcwcc true /var/run/secrets/kubernetes.io/serviceaccount   }] [] nil nil nil /dev/termination-log File IfNotPresent nil false false false}],RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:iruya-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[],WindowsOptions:nil,},ImagePullSecrets:[],Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[],AutomountServiceAccountToken:nil,Tolerations:[{node.kubernetes.io/not-ready Exists  NoExecute 0xc0026dd570} {node.kubernetes.io/unreachable Exists  NoExecute 0xc0026dd590}],HostAliases:[],PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[],RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,},Status:PodStatus{Phase:Running,Conditions:[{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 14:29:00 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 14:29:06 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 14:29:06 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-06 14:28:59 +0000 UTC  }],Message:,Reason:,HostIP:10.96.3.65,PodIP:10.44.0.1,StartTime:2020-01-06 14:29:00 +0000 UTC,ContainerStatuses:[{nginx {nil ContainerStateRunning{StartedAt:2020-01-06 14:29:05 +0000 UTC,} nil} {nil nil nil} true 0 nginx:1.14-alpine docker-pullable://nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 docker://dfe8892cb0c21e6315cdbad7c9140879531897b145ece834a66cb8035fb691dc}],QOSClass:BestEffort,InitContainerStatuses:[],NominatedNodeName:,},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:29:07.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8662" for this suite.
Jan  6 14:29:13.200: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:29:13.264: INFO: namespace deployment-8662 deletion completed in 6.09890363s

• [SLOW TEST:13.493 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
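
The struct dump above shows RevisionHistoryLimit:*0 on test-cleanup-deployment; with a zero history limit, the Deployment controller garbage-collects superseded ReplicaSets as soon as a rollout completes, which is the behavior this spec asserts. A rough hand-driven equivalent (names are placeholders; the image in the dump is reused, and the new tag in the set-image step is purely illustrative, since any spec change forces a new ReplicaSet):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cleanup-demo                  # placeholder name
spec:
  replicas: 1
  revisionHistoryLimit: 0             # keep no old ReplicaSets around
  selector:
    matchLabels:
      name: cleanup-pod
  template:
    metadata:
      labels:
        name: cleanup-pod
    spec:
      containers:
      - name: redis
        image: gcr.io/kubernetes-e2e-test-images/redis:1.0
EOF
kubectl set image deployment/cleanup-demo redis=redis:5.0-alpine   # illustrative tag; triggers a rollout
kubectl get rs -l name=cleanup-pod                                 # only the current ReplicaSet should remain
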
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:29:13.264: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-0d4827e8-ee66-4b29-afdb-ff55c577e81d
STEP: Creating a pod to test consume configMaps
Jan  6 14:29:13.530: INFO: Waiting up to 5m0s for pod "pod-configmaps-0d76717f-4725-4912-afed-6248349cb6be" in namespace "configmap-1555" to be "success or failure"
Jan  6 14:29:13.649: INFO: Pod "pod-configmaps-0d76717f-4725-4912-afed-6248349cb6be": Phase="Pending", Reason="", readiness=false. Elapsed: 118.498253ms
Jan  6 14:29:15.694: INFO: Pod "pod-configmaps-0d76717f-4725-4912-afed-6248349cb6be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.163722149s
Jan  6 14:29:17.707: INFO: Pod "pod-configmaps-0d76717f-4725-4912-afed-6248349cb6be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.176540122s
Jan  6 14:29:19.714: INFO: Pod "pod-configmaps-0d76717f-4725-4912-afed-6248349cb6be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183805764s
Jan  6 14:29:21.722: INFO: Pod "pod-configmaps-0d76717f-4725-4912-afed-6248349cb6be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.192066729s
Jan  6 14:29:23.734: INFO: Pod "pod-configmaps-0d76717f-4725-4912-afed-6248349cb6be": Phase="Pending", Reason="", readiness=false. Elapsed: 10.203463358s
Jan  6 14:29:25.743: INFO: Pod "pod-configmaps-0d76717f-4725-4912-afed-6248349cb6be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.212653325s
STEP: Saw pod success
Jan  6 14:29:25.743: INFO: Pod "pod-configmaps-0d76717f-4725-4912-afed-6248349cb6be" satisfied condition "success or failure"
Jan  6 14:29:25.747: INFO: Trying to get logs from node iruya-node pod pod-configmaps-0d76717f-4725-4912-afed-6248349cb6be container configmap-volume-test: 
STEP: delete the pod
Jan  6 14:29:25.873: INFO: Waiting for pod pod-configmaps-0d76717f-4725-4912-afed-6248349cb6be to disappear
Jan  6 14:29:25.883: INFO: Pod pod-configmaps-0d76717f-4725-4912-afed-6248349cb6be no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:29:25.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1555" for this suite.
Jan  6 14:29:31.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:29:32.059: INFO: namespace configmap-1555 deletion completed in 6.169013191s

• [SLOW TEST:18.795 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
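
The spec above mounts a ConfigMap as a volume and reads the key's contents from inside the container. A minimal sketch of the same shape (the ConfigMap key, names, and busybox image are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-volume-demo         # placeholder name
data:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo           # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox                    # stand-in for the suite's test image
    command: ["sh", "-c", "cat /etc/configmap-volume/data-1"]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-volume-demo     # each key becomes a file in the mount
EOF
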
SSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:29:32.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  6 14:29:32.179: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan  6 14:29:35.635: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:29:36.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-557" for this suite.
Jan  6 14:29:42.969: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:29:43.060: INFO: namespace replication-controller-557 deletion completed in 6.144266114s

• [SLOW TEST:11.000 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
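
The quota-versus-RC interaction above can be reproduced directly: a pods quota of two, a ReplicationController asking for three replicas, a ReplicaFailure condition surfacing in status, and the condition clearing after a scale-down. A sketch reusing the names from the run (the selector labels are assumptions; the nginx image appears elsewhere in this suite):

kubectl create quota condition-test --hard=pods=2
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                         # one more than the quota allows
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: nginx
        image: docker.io/library/nginx:1.14-alpine
EOF
kubectl get rc condition-test -o jsonpath='{.status.conditions}'   # expect a ReplicaFailure condition
kubectl scale rc condition-test --replicas=2                       # condition clears once within quota
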
S
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:29:43.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan  6 14:29:43.191: INFO: namespace kubectl-7810
Jan  6 14:29:43.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7810'
Jan  6 14:29:47.313: INFO: stderr: ""
Jan  6 14:29:47.313: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  6 14:29:48.918: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:29:48.918: INFO: Found 0 / 1
Jan  6 14:29:49.322: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:29:49.322: INFO: Found 0 / 1
Jan  6 14:29:50.333: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:29:50.333: INFO: Found 0 / 1
Jan  6 14:29:51.324: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:29:51.324: INFO: Found 0 / 1
Jan  6 14:29:52.325: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:29:52.325: INFO: Found 0 / 1
Jan  6 14:29:53.322: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:29:53.322: INFO: Found 0 / 1
Jan  6 14:29:54.329: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:29:54.329: INFO: Found 0 / 1
Jan  6 14:29:55.329: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:29:55.329: INFO: Found 1 / 1
Jan  6 14:29:55.329: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan  6 14:29:55.333: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 14:29:55.333: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  6 14:29:55.333: INFO: wait on redis-master startup in kubectl-7810 
Jan  6 14:29:55.334: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-tqfl5 redis-master --namespace=kubectl-7810'
Jan  6 14:29:55.568: INFO: stderr: ""
Jan  6 14:29:55.569: INFO: stdout: "                _._                                                  \n           _.-``__ ''-._                                             \n      _.-``    `.  `_.  ''-._           Redis 3.2.12 (35a5711f/0) 64 bit\n  .-`` .-```.  ```\\/    _.,_ ''-._                                   \n (    '      ,       .-`  | `,    )     Running in standalone mode\n |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379\n |    `-._   `._    /     _.-'    |     PID: 1\n  `-._    `-._  `-./  _.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |           http://redis.io        \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n |`-._`-._    `-.__.-'    _.-'_.-'|                                  \n |    `-._`-._        _.-'_.-'    |                                  \n  `-._    `-._`-.__.-'_.-'    _.-'                                   \n      `-._    `-.__.-'    _.-'                                       \n          `-._        _.-'                                           \n              `-.__.-'                                               \n\n1:M 06 Jan 14:29:54.753 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 06 Jan 14:29:54.753 # Server started, Redis version 3.2.12\n1:M 06 Jan 14:29:54.754 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 06 Jan 14:29:54.754 * The server is now ready to accept connections on port 6379\n"
STEP: exposing RC
Jan  6 14:29:55.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7810'
Jan  6 14:29:55.904: INFO: stderr: ""
Jan  6 14:29:55.904: INFO: stdout: "service/rm2 exposed\n"
Jan  6 14:29:55.918: INFO: Service rm2 in namespace kubectl-7810 found.
STEP: exposing service
Jan  6 14:29:57.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7810'
Jan  6 14:29:58.309: INFO: stderr: ""
Jan  6 14:29:58.309: INFO: stdout: "service/rm3 exposed\n"
Jan  6 14:29:58.336: INFO: Service rm3 in namespace kubectl-7810 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:30:00.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7810" for this suite.
Jan  6 14:30:22.428: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:30:22.522: INFO: namespace kubectl-7810 deletion completed in 22.161112996s

• [SLOW TEST:39.462 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
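
The two expose invocations above are the point of this spec: kubectl expose accepts either a replication controller or an existing service as its source and derives the new service's selector and target ports from it. Cleaned up, with the same names and ports as the run (the --kubeconfig flag omitted):

kubectl expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7810
kubectl expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7810
kubectl get services rm2 rm3 --namespace=kubectl-7810
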
SSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:30:22.523: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan  6 14:30:22.660: INFO: Waiting up to 5m0s for pod "pod-3fac88ae-3cc4-423b-93b9-a1ecb6bbd25f" in namespace "emptydir-771" to be "success or failure"
Jan  6 14:30:22.701: INFO: Pod "pod-3fac88ae-3cc4-423b-93b9-a1ecb6bbd25f": Phase="Pending", Reason="", readiness=false. Elapsed: 40.765682ms
Jan  6 14:30:24.708: INFO: Pod "pod-3fac88ae-3cc4-423b-93b9-a1ecb6bbd25f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048013952s
Jan  6 14:30:26.721: INFO: Pod "pod-3fac88ae-3cc4-423b-93b9-a1ecb6bbd25f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0603795s
Jan  6 14:30:28.735: INFO: Pod "pod-3fac88ae-3cc4-423b-93b9-a1ecb6bbd25f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075057752s
Jan  6 14:30:30.747: INFO: Pod "pod-3fac88ae-3cc4-423b-93b9-a1ecb6bbd25f": Phase="Running", Reason="", readiness=true. Elapsed: 8.086338586s
Jan  6 14:30:32.753: INFO: Pod "pod-3fac88ae-3cc4-423b-93b9-a1ecb6bbd25f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.092265687s
STEP: Saw pod success
Jan  6 14:30:32.753: INFO: Pod "pod-3fac88ae-3cc4-423b-93b9-a1ecb6bbd25f" satisfied condition "success or failure"
Jan  6 14:30:32.756: INFO: Trying to get logs from node iruya-node pod pod-3fac88ae-3cc4-423b-93b9-a1ecb6bbd25f container test-container: 
STEP: delete the pod
Jan  6 14:30:32.913: INFO: Waiting for pod pod-3fac88ae-3cc4-423b-93b9-a1ecb6bbd25f to disappear
Jan  6 14:30:32.920: INFO: Pod pod-3fac88ae-3cc4-423b-93b9-a1ecb6bbd25f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:30:32.921: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-771" for this suite.
Jan  6 14:30:38.953: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:30:39.099: INFO: namespace emptydir-771 deletion completed in 6.172907062s

• [SLOW TEST:16.576 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
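
The emptydir spec writes a file with the expected 0644 mode onto the default medium (node disk) and reads it back. A minimal sketch of the same arrangement (busybox image and names are placeholders):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo                 # placeholder name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox                    # stand-in for the suite's test image
    command: ["sh", "-c", "echo hello > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # default medium; 'medium: Memory' would use tmpfs instead
EOF
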
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:30:39.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-9jg89 in namespace proxy-6385
I0106 14:30:39.303045       8 runners.go:180] Created replication controller with name: proxy-service-9jg89, namespace: proxy-6385, replica count: 1
I0106 14:30:40.354674       8 runners.go:180] proxy-service-9jg89 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0106 14:30:41.355409       8 runners.go:180] proxy-service-9jg89 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0106 14:30:42.356275       8 runners.go:180] proxy-service-9jg89 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0106 14:30:43.356829       8 runners.go:180] proxy-service-9jg89 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0106 14:30:44.357423       8 runners.go:180] proxy-service-9jg89 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0106 14:30:45.358409       8 runners.go:180] proxy-service-9jg89 Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0106 14:30:46.359151       8 runners.go:180] proxy-service-9jg89 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0106 14:30:47.359754       8 runners.go:180] proxy-service-9jg89 Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0106 14:30:48.360456       8 runners.go:180] proxy-service-9jg89 Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  6 14:30:48.369: INFO: setup took 9.152038998s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan  6 14:30:48.402: INFO: (0) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2/proxy/: test (200; 32.318821ms)
Jan  6 14:30:48.402: INFO: (0) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 32.692155ms)
Jan  6 14:30:48.402: INFO: (0) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 32.640208ms)
Jan  6 14:30:48.402: INFO: (0) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname1/proxy/: foo (200; 32.400027ms)
Jan  6 14:30:48.402: INFO: (0) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 32.199496ms)
Jan  6 14:30:48.402: INFO: (0) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname1/proxy/: foo (200; 32.741041ms)
Jan  6 14:30:48.402: INFO: (0) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 32.398589ms)
Jan  6 14:30:48.402: INFO: (0) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:1080/proxy/: test<... (200; 32.43545ms)
Jan  6 14:30:48.403: INFO: (0) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname2/proxy/: bar (200; 34.051585ms)
Jan  6 14:30:48.403: INFO: (0) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname2/proxy/: bar (200; 34.229991ms)
Jan  6 14:30:48.404: INFO: (0) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:1080/proxy/: ... (200; 34.472062ms)
Jan  6 14:30:48.416: INFO: (0) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:462/proxy/: tls qux (200; 47.247745ms)
Jan  6 14:30:48.418: INFO: (0) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname1/proxy/: tls baz (200; 48.307453ms)
Jan  6 14:30:48.419: INFO: (0) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname2/proxy/: tls qux (200; 49.915432ms)
Jan  6 14:30:48.419: INFO: (0) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:460/proxy/: tls baz (200; 49.7652ms)
Jan  6 14:30:48.422: INFO: (0) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: ... (200; 15.289715ms)
Jan  6 14:30:48.438: INFO: (1) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:462/proxy/: tls qux (200; 15.895922ms)
Jan  6 14:30:48.438: INFO: (1) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2/proxy/: test (200; 15.755576ms)
Jan  6 14:30:48.438: INFO: (1) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:1080/proxy/: test<... (200; 15.946817ms)
Jan  6 14:30:48.438: INFO: (1) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 15.981551ms)
Jan  6 14:30:48.438: INFO: (1) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:460/proxy/: tls baz (200; 16.120592ms)
Jan  6 14:30:48.443: INFO: (1) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname2/proxy/: tls qux (200; 21.437717ms)
Jan  6 14:30:48.444: INFO: (1) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname1/proxy/: foo (200; 21.897625ms)
Jan  6 14:30:48.444: INFO: (1) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname2/proxy/: bar (200; 21.747601ms)
Jan  6 14:30:48.444: INFO: (1) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname2/proxy/: bar (200; 21.851065ms)
Jan  6 14:30:48.445: INFO: (1) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname1/proxy/: foo (200; 22.857464ms)
Jan  6 14:30:48.445: INFO: (1) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname1/proxy/: tls baz (200; 23.017397ms)
Jan  6 14:30:48.464: INFO: (2) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname2/proxy/: bar (200; 18.587935ms)
Jan  6 14:30:48.464: INFO: (2) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname1/proxy/: tls baz (200; 18.516902ms)
Jan  6 14:30:48.465: INFO: (2) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname2/proxy/: tls qux (200; 18.417922ms)
Jan  6 14:30:48.465: INFO: (2) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 19.09155ms)
Jan  6 14:30:48.465: INFO: (2) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname1/proxy/: foo (200; 19.233049ms)
Jan  6 14:30:48.465: INFO: (2) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:1080/proxy/: ... (200; 19.826587ms)
Jan  6 14:30:48.465: INFO: (2) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:462/proxy/: tls qux (200; 19.3572ms)
Jan  6 14:30:48.477: INFO: (2) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 31.186733ms)
Jan  6 14:30:48.478: INFO: (2) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:1080/proxy/: test<... (200; 31.627876ms)
Jan  6 14:30:48.478: INFO: (2) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname2/proxy/: bar (200; 32.553804ms)
Jan  6 14:30:48.478: INFO: (2) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2/proxy/: test (200; 32.106825ms)
Jan  6 14:30:48.479: INFO: (2) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 32.420883ms)
Jan  6 14:30:48.479: INFO: (2) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:460/proxy/: tls baz (200; 32.680919ms)
Jan  6 14:30:48.479: INFO: (2) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: test<... (200; 17.44203ms)
Jan  6 14:30:48.499: INFO: (3) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:462/proxy/: tls qux (200; 18.517207ms)
Jan  6 14:30:48.499: INFO: (3) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: ... (200; 19.033496ms)
Jan  6 14:30:48.500: INFO: (3) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname2/proxy/: bar (200; 19.363425ms)
Jan  6 14:30:48.500: INFO: (3) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2/proxy/: test (200; 19.512757ms)
Jan  6 14:30:48.500: INFO: (3) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname2/proxy/: bar (200; 19.878544ms)
Jan  6 14:30:48.503: INFO: (3) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname1/proxy/: foo (200; 22.807105ms)
Jan  6 14:30:48.504: INFO: (3) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname2/proxy/: tls qux (200; 23.440389ms)
Jan  6 14:30:48.506: INFO: (3) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname1/proxy/: tls baz (200; 25.487687ms)
Jan  6 14:30:48.507: INFO: (3) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname1/proxy/: foo (200; 27.134215ms)
Jan  6 14:30:48.532: INFO: (4) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:1080/proxy/: test<... (200; 24.392901ms)
Jan  6 14:30:48.533: INFO: (4) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 24.90159ms)
Jan  6 14:30:48.535: INFO: (4) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2/proxy/: test (200; 27.422478ms)
Jan  6 14:30:48.535: INFO: (4) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:460/proxy/: tls baz (200; 27.824637ms)
Jan  6 14:30:48.535: INFO: (4) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:1080/proxy/: ... (200; 27.452929ms)
Jan  6 14:30:48.535: INFO: (4) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 27.339201ms)
Jan  6 14:30:48.535: INFO: (4) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: test (200; 22.709155ms)
Jan  6 14:30:48.562: INFO: (5) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 23.222279ms)
Jan  6 14:30:48.562: INFO: (5) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 23.042516ms)
Jan  6 14:30:48.562: INFO: (5) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:1080/proxy/: test<... (200; 23.206475ms)
Jan  6 14:30:48.563: INFO: (5) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 24.053245ms)
Jan  6 14:30:48.564: INFO: (5) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname1/proxy/: foo (200; 25.008195ms)
Jan  6 14:30:48.564: INFO: (5) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname2/proxy/: tls qux (200; 25.636339ms)
Jan  6 14:30:48.565: INFO: (5) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:462/proxy/: tls qux (200; 26.586063ms)
Jan  6 14:30:48.567: INFO: (5) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname1/proxy/: foo (200; 27.805997ms)
Jan  6 14:30:48.567: INFO: (5) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:1080/proxy/: ... (200; 28.169951ms)
Jan  6 14:30:48.568: INFO: (5) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname1/proxy/: tls baz (200; 29.126683ms)
Jan  6 14:30:48.568: INFO: (5) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname2/proxy/: bar (200; 29.49782ms)
Jan  6 14:30:48.568: INFO: (5) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:460/proxy/: tls baz (200; 29.192538ms)
Jan  6 14:30:48.568: INFO: (5) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 29.5174ms)
Jan  6 14:30:48.569: INFO: (5) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname2/proxy/: bar (200; 29.905744ms)
Jan  6 14:30:48.571: INFO: (5) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: test (200; 12.627014ms)
Jan  6 14:30:48.584: INFO: (6) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: ... (200; 12.431284ms)
Jan  6 14:30:48.585: INFO: (6) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:1080/proxy/: test<... (200; 13.159635ms)
Jan  6 14:30:48.585: INFO: (6) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 13.196571ms)
Jan  6 14:30:48.622: INFO: (6) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname1/proxy/: foo (200; 50.262137ms)
Jan  6 14:30:48.622: INFO: (6) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname1/proxy/: tls baz (200; 50.613524ms)
Jan  6 14:30:48.623: INFO: (6) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname2/proxy/: bar (200; 50.595732ms)
Jan  6 14:30:48.623: INFO: (6) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:460/proxy/: tls baz (200; 51.105995ms)
Jan  6 14:30:48.623: INFO: (6) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 50.991743ms)
Jan  6 14:30:48.623: INFO: (6) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname2/proxy/: bar (200; 51.399327ms)
Jan  6 14:30:48.624: INFO: (6) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname1/proxy/: foo (200; 51.655297ms)
Jan  6 14:30:48.645: INFO: (6) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname2/proxy/: tls qux (200; 73.130988ms)
Jan  6 14:30:48.664: INFO: (7) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:1080/proxy/: test<... (200; 18.287854ms)
Jan  6 14:30:48.665: INFO: (7) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:1080/proxy/: ... (200; 19.223193ms)
Jan  6 14:30:48.665: INFO: (7) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 19.361541ms)
Jan  6 14:30:48.665: INFO: (7) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: test (200; 21.751663ms)
Jan  6 14:30:48.668: INFO: (7) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:462/proxy/: tls qux (200; 22.045914ms)
Jan  6 14:30:48.668: INFO: (7) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 22.554097ms)
Jan  6 14:30:48.668: INFO: (7) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 22.781778ms)
Jan  6 14:30:48.668: INFO: (7) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname2/proxy/: bar (200; 23.188657ms)
Jan  6 14:30:48.669: INFO: (7) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 23.639578ms)
Jan  6 14:30:48.677: INFO: (8) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:1080/proxy/: ... (200; 7.68453ms)
Jan  6 14:30:48.677: INFO: (8) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2/proxy/: test (200; 7.778218ms)
Jan  6 14:30:48.689: INFO: (8) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:1080/proxy/: test<... (200; 19.462873ms)
Jan  6 14:30:48.689: INFO: (8) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 19.718552ms)
Jan  6 14:30:48.690: INFO: (8) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: ... (200; 4.640843ms)
Jan  6 14:30:48.700: INFO: (9) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 4.781197ms)
Jan  6 14:30:48.700: INFO: (9) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:462/proxy/: tls qux (200; 4.729158ms)
Jan  6 14:30:48.701: INFO: (9) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 5.934311ms)
Jan  6 14:30:48.701: INFO: (9) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 6.552993ms)
Jan  6 14:30:48.702: INFO: (9) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:1080/proxy/: test<... (200; 6.553676ms)
Jan  6 14:30:48.702: INFO: (9) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: test (200; 13.099221ms)
Jan  6 14:30:48.708: INFO: (9) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname2/proxy/: tls qux (200; 13.144728ms)
Jan  6 14:30:48.708: INFO: (9) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname2/proxy/: bar (200; 13.178801ms)
Jan  6 14:30:48.709: INFO: (9) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname1/proxy/: foo (200; 14.615799ms)
Jan  6 14:30:48.765: INFO: (10) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname1/proxy/: foo (200; 55.417753ms)
Jan  6 14:30:48.766: INFO: (10) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 56.374735ms)
Jan  6 14:30:48.767: INFO: (10) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:1080/proxy/: ... (200; 56.717487ms)
Jan  6 14:30:48.767: INFO: (10) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2/proxy/: test (200; 56.908457ms)
Jan  6 14:30:48.767: INFO: (10) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 56.596315ms)
Jan  6 14:30:48.768: INFO: (10) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 57.932818ms)
Jan  6 14:30:48.768: INFO: (10) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:1080/proxy/: test<... (200; 58.329212ms)
Jan  6 14:30:48.768: INFO: (10) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname2/proxy/: bar (200; 58.302244ms)
Jan  6 14:30:48.768: INFO: (10) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname1/proxy/: foo (200; 58.411836ms)
Jan  6 14:30:48.768: INFO: (10) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:460/proxy/: tls baz (200; 58.552916ms)
Jan  6 14:30:48.768: INFO: (10) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname1/proxy/: tls baz (200; 58.271615ms)
Jan  6 14:30:48.768: INFO: (10) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 58.570939ms)
Jan  6 14:30:48.768: INFO: (10) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname2/proxy/: bar (200; 58.481912ms)
Jan  6 14:30:48.769: INFO: (10) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: test (200; 10.416037ms)
Jan  6 14:30:48.781: INFO: (11) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: ... (200; 11.706556ms)
Jan  6 14:30:48.782: INFO: (11) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname2/proxy/: bar (200; 12.001303ms)
Jan  6 14:30:48.782: INFO: (11) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 12.562544ms)
Jan  6 14:30:48.782: INFO: (11) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname1/proxy/: foo (200; 12.626168ms)
Jan  6 14:30:48.783: INFO: (11) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname2/proxy/: tls qux (200; 12.625139ms)
Jan  6 14:30:48.783: INFO: (11) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:462/proxy/: tls qux (200; 12.833295ms)
Jan  6 14:30:48.785: INFO: (11) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:460/proxy/: tls baz (200; 14.829713ms)
Jan  6 14:30:48.785: INFO: (11) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 14.961204ms)
Jan  6 14:30:48.785: INFO: (11) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 14.81716ms)
Jan  6 14:30:48.785: INFO: (11) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:1080/proxy/: test<... (200; 14.855007ms)
Jan  6 14:30:48.785: INFO: (11) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 14.991306ms)
Jan  6 14:30:48.785: INFO: (11) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname1/proxy/: tls baz (200; 14.825574ms)
Jan  6 14:30:48.794: INFO: (12) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 8.804511ms)
Jan  6 14:30:48.796: INFO: (12) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:462/proxy/: tls qux (200; 10.502902ms)
Jan  6 14:30:48.796: INFO: (12) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2/proxy/: test (200; 10.715346ms)
Jan  6 14:30:48.796: INFO: (12) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:1080/proxy/: ... (200; 10.842415ms)
Jan  6 14:30:48.796: INFO: (12) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 10.668682ms)
Jan  6 14:30:48.796: INFO: (12) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 10.868002ms)
Jan  6 14:30:48.796: INFO: (12) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:1080/proxy/: test<... (200; 10.698567ms)
Jan  6 14:30:48.796: INFO: (12) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: test<... (200; 8.980088ms)
Jan  6 14:30:48.811: INFO: (13) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname2/proxy/: bar (200; 12.716836ms)
Jan  6 14:30:48.812: INFO: (13) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname2/proxy/: bar (200; 13.85242ms)
Jan  6 14:30:48.815: INFO: (13) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:462/proxy/: tls qux (200; 15.873938ms)
Jan  6 14:30:48.815: INFO: (13) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname1/proxy/: tls baz (200; 15.972294ms)
Jan  6 14:30:48.815: INFO: (13) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2/proxy/: test (200; 15.932204ms)
Jan  6 14:30:48.815: INFO: (13) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 15.962065ms)
Jan  6 14:30:48.815: INFO: (13) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname1/proxy/: foo (200; 16.063962ms)
Jan  6 14:30:48.815: INFO: (13) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: ... (200; 16.0399ms)
Jan  6 14:30:48.815: INFO: (13) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 16.150245ms)
Jan  6 14:30:48.815: INFO: (13) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname2/proxy/: tls qux (200; 15.995764ms)
Jan  6 14:30:48.815: INFO: (13) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 16.394375ms)
Jan  6 14:30:48.829: INFO: (14) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 13.418489ms)
Jan  6 14:30:48.829: INFO: (14) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 13.148074ms)
Jan  6 14:30:48.829: INFO: (14) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2/proxy/: test (200; 13.136007ms)
Jan  6 14:30:48.829: INFO: (14) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 13.185939ms)
Jan  6 14:30:48.829: INFO: (14) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:1080/proxy/: test<... (200; 13.37283ms)
Jan  6 14:30:48.830: INFO: (14) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:1080/proxy/: ... (200; 14.921306ms)
Jan  6 14:30:48.831: INFO: (14) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:460/proxy/: tls baz (200; 15.932582ms)
Jan  6 14:30:48.831: INFO: (14) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:462/proxy/: tls qux (200; 15.862032ms)
Jan  6 14:30:48.831: INFO: (14) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: ... (200; 15.951673ms)
Jan  6 14:30:48.853: INFO: (15) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname2/proxy/: tls qux (200; 16.02237ms)
Jan  6 14:30:48.853: INFO: (15) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2/proxy/: test (200; 16.0074ms)
Jan  6 14:30:48.853: INFO: (15) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:462/proxy/: tls qux (200; 16.080977ms)
Jan  6 14:30:48.853: INFO: (15) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:460/proxy/: tls baz (200; 16.192222ms)
Jan  6 14:30:48.853: INFO: (15) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname1/proxy/: foo (200; 16.19867ms)
Jan  6 14:30:48.853: INFO: (15) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:1080/proxy/: test<... (200; 16.069548ms)
Jan  6 14:30:48.853: INFO: (15) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname2/proxy/: bar (200; 16.343924ms)
Jan  6 14:30:48.853: INFO: (15) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: test<... (200; 11.983184ms)
Jan  6 14:30:48.870: INFO: (16) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname1/proxy/: tls baz (200; 12.022428ms)
Jan  6 14:30:48.870: INFO: (16) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 12.611756ms)
Jan  6 14:30:48.870: INFO: (16) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname2/proxy/: bar (200; 12.676267ms)
Jan  6 14:30:48.871: INFO: (16) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:460/proxy/: tls baz (200; 12.750453ms)
Jan  6 14:30:48.871: INFO: (16) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname2/proxy/: bar (200; 12.93044ms)
Jan  6 14:30:48.871: INFO: (16) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 12.820645ms)
Jan  6 14:30:48.871: INFO: (16) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2/proxy/: test (200; 12.778378ms)
Jan  6 14:30:48.871: INFO: (16) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:1080/proxy/: ... (200; 12.867292ms)
Jan  6 14:30:48.871: INFO: (16) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname1/proxy/: foo (200; 12.923632ms)
Jan  6 14:30:48.871: INFO: (16) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: ... (200; 6.974542ms)
Jan  6 14:30:48.879: INFO: (17) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 7.916759ms)
Jan  6 14:30:48.880: INFO: (17) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:1080/proxy/: test<... (200; 8.119801ms)
Jan  6 14:30:48.880: INFO: (17) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 8.467258ms)
Jan  6 14:30:48.881: INFO: (17) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2/proxy/: test (200; 9.398897ms)
Jan  6 14:30:48.881: INFO: (17) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 9.483823ms)
Jan  6 14:30:48.881: INFO: (17) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname1/proxy/: tls baz (200; 9.659156ms)
Jan  6 14:30:48.881: INFO: (17) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname2/proxy/: bar (200; 9.985256ms)
Jan  6 14:30:48.882: INFO: (17) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 10.01682ms)
Jan  6 14:30:48.883: INFO: (17) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname2/proxy/: tls qux (200; 11.075509ms)
Jan  6 14:30:48.883: INFO: (17) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:462/proxy/: tls qux (200; 11.087908ms)
Jan  6 14:30:48.885: INFO: (17) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname1/proxy/: foo (200; 14.038502ms)
Jan  6 14:30:48.886: INFO: (17) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname1/proxy/: foo (200; 14.654009ms)
Jan  6 14:30:48.887: INFO: (17) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname2/proxy/: bar (200; 15.879398ms)
Jan  6 14:30:48.901: INFO: (18) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 13.683761ms)
Jan  6 14:30:48.905: INFO: (18) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:1080/proxy/: test<... (200; 17.278593ms)
Jan  6 14:30:48.913: INFO: (18) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:1080/proxy/: ... (200; 25.345445ms)
Jan  6 14:30:48.913: INFO: (18) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 25.517704ms)
Jan  6 14:30:48.913: INFO: (18) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname2/proxy/: bar (200; 25.565135ms)
Jan  6 14:30:48.913: INFO: (18) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: test (200; 30.145772ms)
Jan  6 14:30:48.918: INFO: (18) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 30.379138ms)
Jan  6 14:30:48.918: INFO: (18) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname1/proxy/: tls baz (200; 30.832005ms)
Jan  6 14:30:48.920: INFO: (18) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:462/proxy/: tls qux (200; 32.873272ms)
Jan  6 14:30:48.934: INFO: (19) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:443/proxy/: test<... (200; 14.795603ms)
Jan  6 14:30:48.937: INFO: (19) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname2/proxy/: tls qux (200; 16.421882ms)
Jan  6 14:30:48.937: INFO: (19) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 16.738186ms)
Jan  6 14:30:48.937: INFO: (19) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:1080/proxy/: ... (200; 16.526614ms)
Jan  6 14:30:48.937: INFO: (19) /api/v1/namespaces/proxy-6385/pods/https:proxy-service-9jg89-6tdg2:462/proxy/: tls qux (200; 16.532836ms)
Jan  6 14:30:48.937: INFO: (19) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 16.597699ms)
Jan  6 14:30:48.937: INFO: (19) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname1/proxy/: foo (200; 16.909705ms)
Jan  6 14:30:48.937: INFO: (19) /api/v1/namespaces/proxy-6385/services/https:proxy-service-9jg89:tlsportname1/proxy/: tls baz (200; 16.669786ms)
Jan  6 14:30:48.937: INFO: (19) /api/v1/namespaces/proxy-6385/pods/proxy-service-9jg89-6tdg2/proxy/: test (200; 16.622125ms)
Jan  6 14:30:48.937: INFO: (19) /api/v1/namespaces/proxy-6385/services/proxy-service-9jg89:portname2/proxy/: bar (200; 16.738308ms)
Jan  6 14:30:48.937: INFO: (19) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:160/proxy/: foo (200; 16.747667ms)
Jan  6 14:30:48.937: INFO: (19) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname1/proxy/: foo (200; 16.771597ms)
Jan  6 14:30:48.938: INFO: (19) /api/v1/namespaces/proxy-6385/pods/http:proxy-service-9jg89-6tdg2:162/proxy/: bar (200; 16.994219ms)
Jan  6 14:30:48.938: INFO: (19) /api/v1/namespaces/proxy-6385/services/http:proxy-service-9jg89:portname2/proxy/: bar (200; 17.651942ms)
STEP: deleting ReplicationController proxy-service-9jg89 in namespace proxy-6385, will wait for the garbage collector to delete the pods
Jan  6 14:30:49.003: INFO: Deleting ReplicationController proxy-service-9jg89 took: 10.227781ms
Jan  6 14:30:49.304: INFO: Terminating ReplicationController proxy-service-9jg89 pods took: 301.197669ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:30:54.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6385" for this suite.
Jan  6 14:31:00.746: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:31:00.923: INFO: namespace proxy-6385 deletion completed in 6.208333011s

• [SLOW TEST:21.824 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
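The iterations above exercise the apiserver proxy subresource: each numbered pass GETs /api/v1/namespaces/proxy-6385/{pods,services}/<scheme:name:port>/proxy/ and checks the echoed body ("foo", "bar", "tls baz", "tls qux", ...) and the latency. A minimal client-go sketch of one such request follows; it is a reconstruction, not the suite's code, and uses the v0.15.x method set matching this v1.15 run (newer client-go adds a context argument to DoRaw):

  package main

  import (
      "fmt"

      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      // Same kubeconfig the suite loads.
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      // GET .../services/http:proxy-service-9jg89:portname1/proxy/ ;
      // the test expects the backend to answer "foo" on this named port.
      body, err := cs.CoreV1().RESTClient().Get().
          Namespace("proxy-6385").
          Resource("services").
          Name("http:proxy-service-9jg89:portname1"). // scheme:service:portName
          SubResource("proxy").
          DoRaw() // client-go v0.15.x; later releases take a context here
      if err != nil {
          panic(err)
      }
      fmt.Printf("%s\n", body)
  }

Swapping Resource("services") for Resource("pods") and the name for proxy-service-9jg89-6tdg2:160 reproduces the pod-level lines in the same log.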
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:31:00.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-map-f813426e-aec3-4485-b9e0-84fbec98f069
STEP: Creating a pod to test consume secrets
Jan  6 14:31:01.104: INFO: Waiting up to 5m0s for pod "pod-secrets-b85cb5df-3cbd-4a18-b436-bebcb71fa2cf" in namespace "secrets-7361" to be "success or failure"
Jan  6 14:31:01.112: INFO: Pod "pod-secrets-b85cb5df-3cbd-4a18-b436-bebcb71fa2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.529699ms
Jan  6 14:31:03.120: INFO: Pod "pod-secrets-b85cb5df-3cbd-4a18-b436-bebcb71fa2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016163918s
Jan  6 14:31:05.136: INFO: Pod "pod-secrets-b85cb5df-3cbd-4a18-b436-bebcb71fa2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032257492s
Jan  6 14:31:07.145: INFO: Pod "pod-secrets-b85cb5df-3cbd-4a18-b436-bebcb71fa2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041302762s
Jan  6 14:31:09.154: INFO: Pod "pod-secrets-b85cb5df-3cbd-4a18-b436-bebcb71fa2cf": Phase="Pending", Reason="", readiness=false. Elapsed: 8.050136585s
Jan  6 14:31:11.165: INFO: Pod "pod-secrets-b85cb5df-3cbd-4a18-b436-bebcb71fa2cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061124651s
STEP: Saw pod success
Jan  6 14:31:11.165: INFO: Pod "pod-secrets-b85cb5df-3cbd-4a18-b436-bebcb71fa2cf" satisfied condition "success or failure"
Jan  6 14:31:11.184: INFO: Trying to get logs from node iruya-node pod pod-secrets-b85cb5df-3cbd-4a18-b436-bebcb71fa2cf container secret-volume-test: 
STEP: delete the pod
Jan  6 14:31:11.363: INFO: Waiting for pod pod-secrets-b85cb5df-3cbd-4a18-b436-bebcb71fa2cf to disappear
Jan  6 14:31:11.369: INFO: Pod pod-secrets-b85cb5df-3cbd-4a18-b436-bebcb71fa2cf no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:31:11.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7361" for this suite.
Jan  6 14:31:17.403: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:31:17.521: INFO: namespace secrets-7361 deletion completed in 6.144274587s

• [SLOW TEST:16.598 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
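The spec above boils down to: create a Secret, mount it with an items: mapping and a per-file mode, and have the container verify the mapped path and permissions. A sketch of that pod, assuming client-go v0.15.x; the secret name, key, paths, namespace, and busybox image are illustrative stand-ins for the generated names in the log:

  package main

  import (
      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      mode := int32(0400) // per-item file mode; this is what "Item Mode set" checks
      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
          Spec: corev1.PodSpec{
              RestartPolicy: corev1.RestartPolicyNever,
              Volumes: []corev1.Volume{{
                  Name: "secret-volume",
                  VolumeSource: corev1.VolumeSource{
                      Secret: &corev1.SecretVolumeSource{
                          SecretName: "secret-test-demo", // assumed to exist already
                          Items: []corev1.KeyToPath{{
                              Key:  "data-1",          // key inside the Secret
                              Path: "new-path-data-1", // mapped file name in the volume
                              Mode: &mode,
                          }},
                      },
                  },
              }},
              Containers: []corev1.Container{{
                  Name:         "secret-volume-test",
                  Image:        "busybox",
                  Command:      []string{"sh", "-c", "ls -l /etc/secret-volume"},
                  VolumeMounts: []corev1.VolumeMount{{Name: "secret-volume", MountPath: "/etc/secret-volume"}},
              }},
          },
      }
      if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil { // v0.15.x signature
          panic(err)
      }
  }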
SSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:31:17.522: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-7121
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a new StatefulSet
Jan  6 14:31:17.762: INFO: Found 0 stateful pods, waiting for 3
Jan  6 14:31:27.775: INFO: Found 2 stateful pods, waiting for 3
Jan  6 14:31:37.785: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 14:31:37.785: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 14:31:37.785: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  6 14:31:47.772: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 14:31:47.772: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 14:31:47.772: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 14:31:47.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7121 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  6 14:31:48.293: INFO: stderr: "I0106 14:31:48.112402    3591 log.go:172] (0xc00013edc0) (0xc000654780) Create stream\nI0106 14:31:48.112699    3591 log.go:172] (0xc00013edc0) (0xc000654780) Stream added, broadcasting: 1\nI0106 14:31:48.116029    3591 log.go:172] (0xc00013edc0) Reply frame received for 1\nI0106 14:31:48.116062    3591 log.go:172] (0xc00013edc0) (0xc0005a0000) Create stream\nI0106 14:31:48.116069    3591 log.go:172] (0xc00013edc0) (0xc0005a0000) Stream added, broadcasting: 3\nI0106 14:31:48.117123    3591 log.go:172] (0xc00013edc0) Reply frame received for 3\nI0106 14:31:48.117149    3591 log.go:172] (0xc00013edc0) (0xc0007a6000) Create stream\nI0106 14:31:48.117163    3591 log.go:172] (0xc00013edc0) (0xc0007a6000) Stream added, broadcasting: 5\nI0106 14:31:48.118970    3591 log.go:172] (0xc00013edc0) Reply frame received for 5\nI0106 14:31:48.191024    3591 log.go:172] (0xc00013edc0) Data frame received for 5\nI0106 14:31:48.191087    3591 log.go:172] (0xc0007a6000) (5) Data frame handling\nI0106 14:31:48.191105    3591 log.go:172] (0xc0007a6000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0106 14:31:48.214932    3591 log.go:172] (0xc00013edc0) Data frame received for 3\nI0106 14:31:48.214956    3591 log.go:172] (0xc0005a0000) (3) Data frame handling\nI0106 14:31:48.214967    3591 log.go:172] (0xc0005a0000) (3) Data frame sent\nI0106 14:31:48.287175    3591 log.go:172] (0xc00013edc0) (0xc0005a0000) Stream removed, broadcasting: 3\nI0106 14:31:48.287320    3591 log.go:172] (0xc00013edc0) Data frame received for 1\nI0106 14:31:48.287360    3591 log.go:172] (0xc000654780) (1) Data frame handling\nI0106 14:31:48.287387    3591 log.go:172] (0xc000654780) (1) Data frame sent\nI0106 14:31:48.287406    3591 log.go:172] (0xc00013edc0) (0xc000654780) Stream removed, broadcasting: 1\nI0106 14:31:48.287459    3591 log.go:172] (0xc00013edc0) (0xc0007a6000) Stream removed, broadcasting: 5\nI0106 14:31:48.287547    3591 log.go:172] (0xc00013edc0) Go away received\nI0106 14:31:48.288220    3591 log.go:172] (0xc00013edc0) (0xc000654780) Stream removed, broadcasting: 1\nI0106 14:31:48.288243    3591 log.go:172] (0xc00013edc0) (0xc0005a0000) Stream removed, broadcasting: 3\nI0106 14:31:48.288258    3591 log.go:172] (0xc00013edc0) (0xc0007a6000) Stream removed, broadcasting: 5\n"
Jan  6 14:31:48.293: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  6 14:31:48.293: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/nginx:1.14-alpine to docker.io/library/nginx:1.15-alpine
Jan  6 14:31:58.351: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan  6 14:32:08.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7121 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 14:32:08.774: INFO: stderr: "I0106 14:32:08.607756    3611 log.go:172] (0xc0006c0a50) (0xc000356780) Create stream\nI0106 14:32:08.608055    3611 log.go:172] (0xc0006c0a50) (0xc000356780) Stream added, broadcasting: 1\nI0106 14:32:08.611788    3611 log.go:172] (0xc0006c0a50) Reply frame received for 1\nI0106 14:32:08.611890    3611 log.go:172] (0xc0006c0a50) (0xc000998000) Create stream\nI0106 14:32:08.611916    3611 log.go:172] (0xc0006c0a50) (0xc000998000) Stream added, broadcasting: 3\nI0106 14:32:08.612772    3611 log.go:172] (0xc0006c0a50) Reply frame received for 3\nI0106 14:32:08.612797    3611 log.go:172] (0xc0006c0a50) (0xc000794000) Create stream\nI0106 14:32:08.612809    3611 log.go:172] (0xc0006c0a50) (0xc000794000) Stream added, broadcasting: 5\nI0106 14:32:08.614322    3611 log.go:172] (0xc0006c0a50) Reply frame received for 5\nI0106 14:32:08.695831    3611 log.go:172] (0xc0006c0a50) Data frame received for 5\nI0106 14:32:08.695947    3611 log.go:172] (0xc000794000) (5) Data frame handling\nI0106 14:32:08.695968    3611 log.go:172] (0xc000794000) (5) Data frame sent\nI0106 14:32:08.695979    3611 log.go:172] (0xc0006c0a50) Data frame received for 3\nI0106 14:32:08.695997    3611 log.go:172] (0xc000998000) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0106 14:32:08.696007    3611 log.go:172] (0xc000998000) (3) Data frame sent\nI0106 14:32:08.767654    3611 log.go:172] (0xc0006c0a50) (0xc000998000) Stream removed, broadcasting: 3\nI0106 14:32:08.767765    3611 log.go:172] (0xc0006c0a50) Data frame received for 1\nI0106 14:32:08.767791    3611 log.go:172] (0xc000356780) (1) Data frame handling\nI0106 14:32:08.767805    3611 log.go:172] (0xc000356780) (1) Data frame sent\nI0106 14:32:08.767855    3611 log.go:172] (0xc0006c0a50) (0xc000356780) Stream removed, broadcasting: 1\nI0106 14:32:08.768097    3611 log.go:172] (0xc0006c0a50) (0xc000794000) Stream removed, broadcasting: 5\nI0106 14:32:08.768118    3611 log.go:172] (0xc0006c0a50) Go away received\nI0106 14:32:08.768557    3611 log.go:172] (0xc0006c0a50) (0xc000356780) Stream removed, broadcasting: 1\nI0106 14:32:08.768573    3611 log.go:172] (0xc0006c0a50) (0xc000998000) Stream removed, broadcasting: 3\nI0106 14:32:08.768581    3611 log.go:172] (0xc0006c0a50) (0xc000794000) Stream removed, broadcasting: 5\n"
Jan  6 14:32:08.774: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  6 14:32:08.774: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  6 14:32:18.831: INFO: Waiting for StatefulSet statefulset-7121/ss2 to complete update
Jan  6 14:32:18.831: INFO: Waiting for Pod statefulset-7121/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  6 14:32:18.831: INFO: Waiting for Pod statefulset-7121/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  6 14:32:28.898: INFO: Waiting for StatefulSet statefulset-7121/ss2 to complete update
Jan  6 14:32:28.899: INFO: Waiting for Pod statefulset-7121/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  6 14:32:28.899: INFO: Waiting for Pod statefulset-7121/ss2-1 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  6 14:32:39.436: INFO: Waiting for StatefulSet statefulset-7121/ss2 to complete update
Jan  6 14:32:39.436: INFO: Waiting for Pod statefulset-7121/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  6 14:32:48.846: INFO: Waiting for StatefulSet statefulset-7121/ss2 to complete update
Jan  6 14:32:48.846: INFO: Waiting for Pod statefulset-7121/ss2-0 to have revision ss2-6c5cd755cd update revision ss2-7c9b54fd4c
Jan  6 14:32:58.848: INFO: Waiting for StatefulSet statefulset-7121/ss2 to complete update
STEP: Rolling back to a previous revision
Jan  6 14:33:08.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7121 ss2-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  6 14:33:09.406: INFO: stderr: "I0106 14:33:09.090232    3632 log.go:172] (0xc0009b66e0) (0xc000718aa0) Create stream\nI0106 14:33:09.090647    3632 log.go:172] (0xc0009b66e0) (0xc000718aa0) Stream added, broadcasting: 1\nI0106 14:33:09.095360    3632 log.go:172] (0xc0009b66e0) Reply frame received for 1\nI0106 14:33:09.095452    3632 log.go:172] (0xc0009b66e0) (0xc000382140) Create stream\nI0106 14:33:09.095480    3632 log.go:172] (0xc0009b66e0) (0xc000382140) Stream added, broadcasting: 3\nI0106 14:33:09.096709    3632 log.go:172] (0xc0009b66e0) Reply frame received for 3\nI0106 14:33:09.096743    3632 log.go:172] (0xc0009b66e0) (0xc000718b40) Create stream\nI0106 14:33:09.096755    3632 log.go:172] (0xc0009b66e0) (0xc000718b40) Stream added, broadcasting: 5\nI0106 14:33:09.097784    3632 log.go:172] (0xc0009b66e0) Reply frame received for 5\nI0106 14:33:09.246053    3632 log.go:172] (0xc0009b66e0) Data frame received for 5\nI0106 14:33:09.246503    3632 log.go:172] (0xc000718b40) (5) Data frame handling\nI0106 14:33:09.246639    3632 log.go:172] (0xc000718b40) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0106 14:33:09.305070    3632 log.go:172] (0xc0009b66e0) Data frame received for 3\nI0106 14:33:09.305125    3632 log.go:172] (0xc000382140) (3) Data frame handling\nI0106 14:33:09.305180    3632 log.go:172] (0xc000382140) (3) Data frame sent\nI0106 14:33:09.392005    3632 log.go:172] (0xc0009b66e0) Data frame received for 1\nI0106 14:33:09.392154    3632 log.go:172] (0xc0009b66e0) (0xc000382140) Stream removed, broadcasting: 3\nI0106 14:33:09.392461    3632 log.go:172] (0xc000718aa0) (1) Data frame handling\nI0106 14:33:09.392528    3632 log.go:172] (0xc000718aa0) (1) Data frame sent\nI0106 14:33:09.392780    3632 log.go:172] (0xc0009b66e0) (0xc000718b40) Stream removed, broadcasting: 5\nI0106 14:33:09.392955    3632 log.go:172] (0xc0009b66e0) (0xc000718aa0) Stream removed, broadcasting: 1\nI0106 14:33:09.393060    3632 log.go:172] (0xc0009b66e0) Go away received\nI0106 14:33:09.394823    3632 log.go:172] (0xc0009b66e0) (0xc000718aa0) Stream removed, broadcasting: 1\nI0106 14:33:09.394865    3632 log.go:172] (0xc0009b66e0) (0xc000382140) Stream removed, broadcasting: 3\nI0106 14:33:09.394889    3632 log.go:172] (0xc0009b66e0) (0xc000718b40) Stream removed, broadcasting: 5\n"
Jan  6 14:33:09.406: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  6 14:33:09.406: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss2-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  6 14:33:19.470: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan  6 14:33:29.544: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7121 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 14:33:30.164: INFO: stderr: "I0106 14:33:29.862239    3656 log.go:172] (0xc000a46370) (0xc0009186e0) Create stream\nI0106 14:33:29.863130    3656 log.go:172] (0xc000a46370) (0xc0009186e0) Stream added, broadcasting: 1\nI0106 14:33:29.870695    3656 log.go:172] (0xc000a46370) Reply frame received for 1\nI0106 14:33:29.870962    3656 log.go:172] (0xc000a46370) (0xc00067e1e0) Create stream\nI0106 14:33:29.870983    3656 log.go:172] (0xc000a46370) (0xc00067e1e0) Stream added, broadcasting: 3\nI0106 14:33:29.875622    3656 log.go:172] (0xc000a46370) Reply frame received for 3\nI0106 14:33:29.876173    3656 log.go:172] (0xc000a46370) (0xc000918780) Create stream\nI0106 14:33:29.876242    3656 log.go:172] (0xc000a46370) (0xc000918780) Stream added, broadcasting: 5\nI0106 14:33:29.882478    3656 log.go:172] (0xc000a46370) Reply frame received for 5\nI0106 14:33:30.032617    3656 log.go:172] (0xc000a46370) Data frame received for 5\nI0106 14:33:30.032819    3656 log.go:172] (0xc000918780) (5) Data frame handling\nI0106 14:33:30.032890    3656 log.go:172] (0xc000918780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0106 14:33:30.033283    3656 log.go:172] (0xc000a46370) Data frame received for 3\nI0106 14:33:30.033298    3656 log.go:172] (0xc00067e1e0) (3) Data frame handling\nI0106 14:33:30.033313    3656 log.go:172] (0xc00067e1e0) (3) Data frame sent\nI0106 14:33:30.148914    3656 log.go:172] (0xc000a46370) Data frame received for 1\nI0106 14:33:30.149104    3656 log.go:172] (0xc000a46370) (0xc00067e1e0) Stream removed, broadcasting: 3\nI0106 14:33:30.149291    3656 log.go:172] (0xc000a46370) (0xc000918780) Stream removed, broadcasting: 5\nI0106 14:33:30.149678    3656 log.go:172] (0xc0009186e0) (1) Data frame handling\nI0106 14:33:30.149981    3656 log.go:172] (0xc0009186e0) (1) Data frame sent\nI0106 14:33:30.150058    3656 log.go:172] (0xc000a46370) (0xc0009186e0) Stream removed, broadcasting: 1\nI0106 14:33:30.150170    3656 log.go:172] (0xc000a46370) Go away received\nI0106 14:33:30.152201    3656 log.go:172] (0xc000a46370) (0xc0009186e0) Stream removed, broadcasting: 1\nI0106 14:33:30.152391    3656 log.go:172] (0xc000a46370) (0xc00067e1e0) Stream removed, broadcasting: 3\nI0106 14:33:30.152423    3656 log.go:172] (0xc000a46370) (0xc000918780) Stream removed, broadcasting: 5\n"
Jan  6 14:33:30.164: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  6 14:33:30.164: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss2-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  6 14:33:40.217: INFO: Waiting for StatefulSet statefulset-7121/ss2 to complete update
Jan  6 14:33:40.217: INFO: Waiting for Pod statefulset-7121/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  6 14:33:40.217: INFO: Waiting for Pod statefulset-7121/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  6 14:33:40.217: INFO: Waiting for Pod statefulset-7121/ss2-2 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  6 14:33:50.262: INFO: Waiting for StatefulSet statefulset-7121/ss2 to complete update
Jan  6 14:33:50.262: INFO: Waiting for Pod statefulset-7121/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  6 14:33:50.262: INFO: Waiting for Pod statefulset-7121/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  6 14:34:00.235: INFO: Waiting for StatefulSet statefulset-7121/ss2 to complete update
Jan  6 14:34:00.235: INFO: Waiting for Pod statefulset-7121/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  6 14:34:00.235: INFO: Waiting for Pod statefulset-7121/ss2-1 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  6 14:34:10.234: INFO: Waiting for StatefulSet statefulset-7121/ss2 to complete update
Jan  6 14:34:10.234: INFO: Waiting for Pod statefulset-7121/ss2-0 to have revision ss2-7c9b54fd4c update revision ss2-6c5cd755cd
Jan  6 14:34:20.244: INFO: Waiting for StatefulSet statefulset-7121/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  6 14:34:30.237: INFO: Deleting all statefulset in ns statefulset-7121
Jan  6 14:34:30.241: INFO: Scaling statefulset ss2 to 0
Jan  6 14:35:10.274: INFO: Waiting for statefulset status.replicas updated to 0
Jan  6 14:35:10.279: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:35:10.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7121" for this suite.
Jan  6 14:35:18.387: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:35:18.466: INFO: namespace statefulset-7121 deletion completed in 8.109307639s

• [SLOW TEST:240.944 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
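Mechanically, the rollout and rollback above are both just template updates: changing the pod template image creates a new ControllerRevision (ss2-6c5cd755cd to ss2-7c9b54fd4c in the log), and "rolling back" reapplies the old image, which the controller walks out in reverse ordinal order under the default RollingUpdate strategy. A sketch of both patches with client-go v0.15.x; the container name "nginx" inside the template is an assumption:

  package main

  import (
      "k8s.io/apimachinery/pkg/types"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      sts := cs.AppsV1().StatefulSets("statefulset-7121")

      // Roll forward: strategic merge patch matches the container by name
      // and swaps its image, creating a new revision.
      forward := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"docker.io/library/nginx:1.15-alpine"}]}}}}`)
      if _, err := sts.Patch("ss2", types.StrategicMergePatchType, forward); err != nil {
          panic(err)
      }

      // Roll back: reapply the previous image; pods are replaced in
      // reverse ordinal order until all report the old revision again.
      back := []byte(`{"spec":{"template":{"spec":{"containers":[{"name":"nginx","image":"docker.io/library/nginx:1.14-alpine"}]}}}}`)
      if _, err := sts.Patch("ss2", types.StrategicMergePatchType, back); err != nil {
          panic(err)
      }
  }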
[sig-cli] Kubectl client [k8s.io] Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:35:18.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1612
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  6 14:35:18.578: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/nginx:1.14-alpine --namespace=kubectl-8136'
Jan  6 14:35:18.722: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  6 14:35:18.722: INFO: stdout: "job.batch/e2e-test-nginx-job created\n"
STEP: verifying the job e2e-test-nginx-job was created
[AfterEach] [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1617
Jan  6 14:35:18.751: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-nginx-job --namespace=kubectl-8136'
Jan  6 14:35:19.026: INFO: stderr: ""
Jan  6 14:35:19.026: INFO: stdout: "job.batch \"e2e-test-nginx-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:35:19.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8136" for this suite.
Jan  6 14:35:25.116: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:35:25.241: INFO: namespace kubectl-8136 deletion completed in 6.207205393s

• [SLOW TEST:6.775 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
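The stderr above notes that kubectl run --generator=job/v1 is deprecated; at the API level that command amounts to a batch/v1 Job whose pod template uses RestartPolicy OnFailure. A sketch of the equivalent object (job name, image, and namespace copied from the log; the rest is illustrative), again with v0.15.x signatures:

  package main

  import (
      batchv1 "k8s.io/api/batch/v1"
      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      job := &batchv1.Job{
          ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-nginx-job"},
          Spec: batchv1.JobSpec{
              Template: corev1.PodTemplateSpec{
                  Spec: corev1.PodSpec{
                      // What --restart=OnFailure selects: failed containers
                      // are retried in place rather than the pod recreated.
                      RestartPolicy: corev1.RestartPolicyOnFailure,
                      Containers: []corev1.Container{{
                          Name:  "e2e-test-nginx-job",
                          Image: "docker.io/library/nginx:1.14-alpine",
                      }},
                  },
              },
          },
      }
      if _, err := cs.BatchV1().Jobs("kubectl-8136").Create(job); err != nil {
          panic(err)
      }
  }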
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:35:25.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-151cb600-f837-4867-90e9-b5d6d3bda267
STEP: Creating a pod to test consume configMaps
Jan  6 14:35:25.415: INFO: Waiting up to 5m0s for pod "pod-configmaps-1e8fac22-44e1-4564-9305-447f8c9d9dcf" in namespace "configmap-6731" to be "success or failure"
Jan  6 14:35:25.434: INFO: Pod "pod-configmaps-1e8fac22-44e1-4564-9305-447f8c9d9dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 18.37755ms
Jan  6 14:35:27.447: INFO: Pod "pod-configmaps-1e8fac22-44e1-4564-9305-447f8c9d9dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031260996s
Jan  6 14:35:29.464: INFO: Pod "pod-configmaps-1e8fac22-44e1-4564-9305-447f8c9d9dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049056981s
Jan  6 14:35:31.481: INFO: Pod "pod-configmaps-1e8fac22-44e1-4564-9305-447f8c9d9dcf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065593269s
Jan  6 14:35:33.498: INFO: Pod "pod-configmaps-1e8fac22-44e1-4564-9305-447f8c9d9dcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082397262s
STEP: Saw pod success
Jan  6 14:35:33.498: INFO: Pod "pod-configmaps-1e8fac22-44e1-4564-9305-447f8c9d9dcf" satisfied condition "success or failure"
Jan  6 14:35:33.501: INFO: Trying to get logs from node iruya-node pod pod-configmaps-1e8fac22-44e1-4564-9305-447f8c9d9dcf container configmap-volume-test: 
STEP: delete the pod
Jan  6 14:35:33.691: INFO: Waiting for pod pod-configmaps-1e8fac22-44e1-4564-9305-447f8c9d9dcf to disappear
Jan  6 14:35:33.704: INFO: Pod pod-configmaps-1e8fac22-44e1-4564-9305-447f8c9d9dcf no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:35:33.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6731" for this suite.
Jan  6 14:35:39.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:35:39.899: INFO: namespace configmap-6731 deletion completed in 6.187252734s

• [SLOW TEST:14.656 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
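One ConfigMap can back several volumes in the same pod spec; the spec above mounts it twice and checks both copies are readable. A sketch under assumed names (ConfigMap "configmap-test-volume-demo" with a "data-1" key is an illustrative stand-in):

  package main

  import (
      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      // Two volumes, both referencing the same ConfigMap.
      cmVol := func(volName string) corev1.Volume {
          return corev1.Volume{
              Name: volName,
              VolumeSource: corev1.VolumeSource{
                  ConfigMap: &corev1.ConfigMapVolumeSource{
                      LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-demo"},
                  },
              },
          }
      }
      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
          Spec: corev1.PodSpec{
              RestartPolicy: corev1.RestartPolicyNever,
              Volumes:       []corev1.Volume{cmVol("configmap-volume-1"), cmVol("configmap-volume-2")},
              Containers: []corev1.Container{{
                  Name:    "configmap-volume-test",
                  Image:   "busybox",
                  Command: []string{"sh", "-c", "cat /etc/cm-1/data-1 /etc/cm-2/data-1"},
                  VolumeMounts: []corev1.VolumeMount{
                      {Name: "configmap-volume-1", MountPath: "/etc/cm-1"},
                      {Name: "configmap-volume-2", MountPath: "/etc/cm-2"},
                  },
              }},
          },
      }
      if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
          panic(err)
      }
  }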
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:35:39.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:35:48.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-1154" for this suite.
Jan  6 14:36:40.123: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:36:40.254: INFO: namespace kubelet-test-1154 deletion completed in 52.158682228s

• [SLOW TEST:60.355 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
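pod.spec.hostAliases entries are written by the kubelet into the container's /etc/hosts, which is what the spec above asserts. A sketch of such a pod; the IP and hostnames are illustrative, not the test's values:

  package main

  import (
      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "busybox-host-aliases"},
          Spec: corev1.PodSpec{
              RestartPolicy: corev1.RestartPolicyNever,
              // Each alias becomes an extra line in the container's /etc/hosts.
              HostAliases: []corev1.HostAlias{{
                  IP:        "123.45.67.89",
                  Hostnames: []string{"foo.remote", "bar.remote"},
              }},
              Containers: []corev1.Container{{
                  Name:    "busybox-host-aliases",
                  Image:   "busybox",
                  Command: []string{"sh", "-c", "cat /etc/hosts"},
              }},
          },
      }
      if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
          panic(err)
      }
  }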
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:36:40.255: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-projected-all-test-volume-507524e2-5c68-4e75-978c-7045cb732f1c
STEP: Creating secret with name secret-projected-all-test-volume-19f19fca-5050-4c50-aff1-5baf3ee975af
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan  6 14:36:40.442: INFO: Waiting up to 5m0s for pod "projected-volume-da260bae-5e65-4b25-ab8a-fbe17d8917b4" in namespace "projected-6985" to be "success or failure"
Jan  6 14:36:40.583: INFO: Pod "projected-volume-da260bae-5e65-4b25-ab8a-fbe17d8917b4": Phase="Pending", Reason="", readiness=false. Elapsed: 140.734758ms
Jan  6 14:36:42.622: INFO: Pod "projected-volume-da260bae-5e65-4b25-ab8a-fbe17d8917b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.179758018s
Jan  6 14:36:44.632: INFO: Pod "projected-volume-da260bae-5e65-4b25-ab8a-fbe17d8917b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.190163114s
Jan  6 14:36:46.637: INFO: Pod "projected-volume-da260bae-5e65-4b25-ab8a-fbe17d8917b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.19550804s
Jan  6 14:36:48.659: INFO: Pod "projected-volume-da260bae-5e65-4b25-ab8a-fbe17d8917b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.217456343s
STEP: Saw pod success
Jan  6 14:36:48.660: INFO: Pod "projected-volume-da260bae-5e65-4b25-ab8a-fbe17d8917b4" satisfied condition "success or failure"
Jan  6 14:36:48.668: INFO: Trying to get logs from node iruya-node pod projected-volume-da260bae-5e65-4b25-ab8a-fbe17d8917b4 container projected-all-volume-test: 
STEP: delete the pod
Jan  6 14:36:48.924: INFO: Waiting for pod projected-volume-da260bae-5e65-4b25-ab8a-fbe17d8917b4 to disappear
Jan  6 14:36:48.929: INFO: Pod projected-volume-da260bae-5e65-4b25-ab8a-fbe17d8917b4 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:36:48.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6985" for this suite.
Jan  6 14:36:54.959: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:36:55.086: INFO: namespace projected-6985 deletion completed in 6.153146488s

• [SLOW TEST:14.831 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
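A projected volume merges several sources into one mount, and the spec above checks all three at once: a ConfigMap, a Secret, and a downward API item. A sketch of the pod; the ConfigMap and Secret names are illustrative and assumed to exist:

  package main

  import (
      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "projected-volume-demo"},
          Spec: corev1.PodSpec{
              RestartPolicy: corev1.RestartPolicyNever,
              Volumes: []corev1.Volume{{
                  Name: "all-in-one",
                  VolumeSource: corev1.VolumeSource{
                      Projected: &corev1.ProjectedVolumeSource{
                          Sources: []corev1.VolumeProjection{
                              {ConfigMap: &corev1.ConfigMapProjection{
                                  LocalObjectReference: corev1.LocalObjectReference{Name: "projected-cm-demo"},
                              }},
                              {Secret: &corev1.SecretProjection{
                                  LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-demo"},
                              }},
                              {DownwardAPI: &corev1.DownwardAPIProjection{
                                  Items: []corev1.DownwardAPIVolumeFile{{
                                      Path:     "podname",
                                      FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                                  }},
                              }},
                          },
                      },
                  },
              }},
              Containers: []corev1.Container{{
                  Name:         "projected-all-volume-test",
                  Image:        "busybox",
                  Command:      []string{"sh", "-c", "ls -R /all-volumes && cat /all-volumes/podname"},
                  VolumeMounts: []corev1.VolumeMount{{Name: "all-in-one", MountPath: "/all-volumes"}},
              }},
          },
      }
      if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
          panic(err)
      }
  }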
S
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:36:55.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  6 14:36:55.230: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8be26b4b-592f-47ef-8f0a-046eebb6dc1a" in namespace "downward-api-1603" to be "success or failure"
Jan  6 14:36:55.321: INFO: Pod "downwardapi-volume-8be26b4b-592f-47ef-8f0a-046eebb6dc1a": Phase="Pending", Reason="", readiness=false. Elapsed: 91.391097ms
Jan  6 14:36:57.334: INFO: Pod "downwardapi-volume-8be26b4b-592f-47ef-8f0a-046eebb6dc1a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.104204575s
Jan  6 14:36:59.382: INFO: Pod "downwardapi-volume-8be26b4b-592f-47ef-8f0a-046eebb6dc1a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152022502s
Jan  6 14:37:01.391: INFO: Pod "downwardapi-volume-8be26b4b-592f-47ef-8f0a-046eebb6dc1a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.160984144s
Jan  6 14:37:03.400: INFO: Pod "downwardapi-volume-8be26b4b-592f-47ef-8f0a-046eebb6dc1a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.170179222s
STEP: Saw pod success
Jan  6 14:37:03.400: INFO: Pod "downwardapi-volume-8be26b4b-592f-47ef-8f0a-046eebb6dc1a" satisfied condition "success or failure"
Jan  6 14:37:03.404: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8be26b4b-592f-47ef-8f0a-046eebb6dc1a container client-container: 
STEP: delete the pod
Jan  6 14:37:03.516: INFO: Waiting for pod downwardapi-volume-8be26b4b-592f-47ef-8f0a-046eebb6dc1a to disappear
Jan  6 14:37:03.521: INFO: Pod downwardapi-volume-8be26b4b-592f-47ef-8f0a-046eebb6dc1a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:37:03.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1603" for this suite.
Jan  6 14:37:09.626: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:37:09.812: INFO: namespace downward-api-1603 deletion completed in 6.284871249s

• [SLOW TEST:14.726 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
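The volume here uses resourceFieldRef, which projects a container's resource value into a file; requests.memory resolves against the container's own memory request, scaled by the divisor. The next spec is the same shape with limits.memory. A sketch with illustrative sizes and names:

  package main

  import (
      corev1 "k8s.io/api/core/v1"
      "k8s.io/apimachinery/pkg/api/resource"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
          Spec: corev1.PodSpec{
              RestartPolicy: corev1.RestartPolicyNever,
              Volumes: []corev1.Volume{{
                  Name: "podinfo",
                  VolumeSource: corev1.VolumeSource{
                      DownwardAPI: &corev1.DownwardAPIVolumeSource{
                          Items: []corev1.DownwardAPIVolumeFile{{
                              Path: "memory_request",
                              ResourceFieldRef: &corev1.ResourceFieldSelector{
                                  ContainerName: "client-container",
                                  Resource:      "requests.memory", // "limits.memory" for the limit variant
                                  Divisor:       resource.MustParse("1Mi"),
                              },
                          }},
                      },
                  },
              }},
              Containers: []corev1.Container{{
                  Name:    "client-container",
                  Image:   "busybox",
                  Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"}, // prints "32" with this divisor
                  Resources: corev1.ResourceRequirements{
                      Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
                      Limits:   corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("64Mi")},
                  },
                  VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
              }},
          },
      }
      if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
          panic(err)
      }
  }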
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:37:09.813: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  6 14:37:09.911: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ddf74194-e9f3-4d70-863e-f9150eb93348" in namespace "projected-1461" to be "success or failure"
Jan  6 14:37:09.949: INFO: Pod "downwardapi-volume-ddf74194-e9f3-4d70-863e-f9150eb93348": Phase="Pending", Reason="", readiness=false. Elapsed: 37.954439ms
Jan  6 14:37:11.958: INFO: Pod "downwardapi-volume-ddf74194-e9f3-4d70-863e-f9150eb93348": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047360442s
Jan  6 14:37:13.966: INFO: Pod "downwardapi-volume-ddf74194-e9f3-4d70-863e-f9150eb93348": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054966232s
Jan  6 14:37:15.975: INFO: Pod "downwardapi-volume-ddf74194-e9f3-4d70-863e-f9150eb93348": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064242078s
Jan  6 14:37:17.988: INFO: Pod "downwardapi-volume-ddf74194-e9f3-4d70-863e-f9150eb93348": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077188916s
STEP: Saw pod success
Jan  6 14:37:17.988: INFO: Pod "downwardapi-volume-ddf74194-e9f3-4d70-863e-f9150eb93348" satisfied condition "success or failure"
Jan  6 14:37:17.992: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ddf74194-e9f3-4d70-863e-f9150eb93348 container client-container: 
STEP: delete the pod
Jan  6 14:37:18.052: INFO: Waiting for pod downwardapi-volume-ddf74194-e9f3-4d70-863e-f9150eb93348 to disappear
Jan  6 14:37:18.060: INFO: Pod downwardapi-volume-ddf74194-e9f3-4d70-863e-f9150eb93348 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:37:18.060: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1461" for this suite.
Jan  6 14:37:24.081: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:37:24.197: INFO: namespace projected-1461 deletion completed in 6.133008284s

• [SLOW TEST:14.385 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:37:24.198: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap configmap-5161/configmap-test-1ad77fba-697f-4bb0-93ec-b87f2c7213d9
STEP: Creating a pod to test consume configMaps
Jan  6 14:37:24.369: INFO: Waiting up to 5m0s for pod "pod-configmaps-c38c023e-cd8d-40e4-a5f3-eb133e137ec4" in namespace "configmap-5161" to be "success or failure"
Jan  6 14:37:24.391: INFO: Pod "pod-configmaps-c38c023e-cd8d-40e4-a5f3-eb133e137ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 21.903084ms
Jan  6 14:37:26.401: INFO: Pod "pod-configmaps-c38c023e-cd8d-40e4-a5f3-eb133e137ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031426243s
Jan  6 14:37:28.412: INFO: Pod "pod-configmaps-c38c023e-cd8d-40e4-a5f3-eb133e137ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04228259s
Jan  6 14:37:30.419: INFO: Pod "pod-configmaps-c38c023e-cd8d-40e4-a5f3-eb133e137ec4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049708478s
Jan  6 14:37:32.429: INFO: Pod "pod-configmaps-c38c023e-cd8d-40e4-a5f3-eb133e137ec4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059664135s
STEP: Saw pod success
Jan  6 14:37:32.429: INFO: Pod "pod-configmaps-c38c023e-cd8d-40e4-a5f3-eb133e137ec4" satisfied condition "success or failure"
Jan  6 14:37:32.432: INFO: Trying to get logs from node iruya-node pod pod-configmaps-c38c023e-cd8d-40e4-a5f3-eb133e137ec4 container env-test: 
STEP: delete the pod
Jan  6 14:37:32.506: INFO: Waiting for pod pod-configmaps-c38c023e-cd8d-40e4-a5f3-eb133e137ec4 to disappear
Jan  6 14:37:32.544: INFO: Pod pod-configmaps-c38c023e-cd8d-40e4-a5f3-eb133e137ec4 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:37:32.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5161" for this suite.
Jan  6 14:37:38.643: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:37:38.789: INFO: namespace configmap-5161 deletion completed in 6.185856576s

• [SLOW TEST:14.591 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
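Env-var consumption is the valueFrom/configMapKeyRef path rather than a volume mount. A sketch of the pod this spec builds, with illustrative ConfigMap and key names:

  package main

  import (
      corev1 "k8s.io/api/core/v1"
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      pod := &corev1.Pod{
          ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-env-demo"},
          Spec: corev1.PodSpec{
              RestartPolicy: corev1.RestartPolicyNever,
              Containers: []corev1.Container{{
                  Name:    "env-test",
                  Image:   "busybox",
                  Command: []string{"sh", "-c", "env"},
                  Env: []corev1.EnvVar{{
                      Name: "CONFIG_DATA_1", // resolved from the ConfigMap at container start
                      ValueFrom: &corev1.EnvVarSource{
                          ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                              LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-demo"},
                              Key:                  "data-1",
                          },
                      },
                  }},
              }},
          },
      }
      if _, err := cs.CoreV1().Pods("default").Create(pod); err != nil {
          panic(err)
      }
  }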
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:37:38.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-08a26c7a-6a05-4023-bc95-a20f9c8802fb
STEP: Creating a pod to test consume secrets
Jan  6 14:37:38.945: INFO: Waiting up to 5m0s for pod "pod-secrets-b9dfbfd4-2f91-49df-8491-38bad9cecb15" in namespace "secrets-3943" to be "success or failure"
Jan  6 14:37:38.959: INFO: Pod "pod-secrets-b9dfbfd4-2f91-49df-8491-38bad9cecb15": Phase="Pending", Reason="", readiness=false. Elapsed: 14.331514ms
Jan  6 14:37:40.969: INFO: Pod "pod-secrets-b9dfbfd4-2f91-49df-8491-38bad9cecb15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023941683s
Jan  6 14:37:42.977: INFO: Pod "pod-secrets-b9dfbfd4-2f91-49df-8491-38bad9cecb15": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032473253s
Jan  6 14:37:44.986: INFO: Pod "pod-secrets-b9dfbfd4-2f91-49df-8491-38bad9cecb15": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041571404s
Jan  6 14:37:47.002: INFO: Pod "pod-secrets-b9dfbfd4-2f91-49df-8491-38bad9cecb15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05709849s
STEP: Saw pod success
Jan  6 14:37:47.002: INFO: Pod "pod-secrets-b9dfbfd4-2f91-49df-8491-38bad9cecb15" satisfied condition "success or failure"
Jan  6 14:37:47.005: INFO: Trying to get logs from node iruya-node pod pod-secrets-b9dfbfd4-2f91-49df-8491-38bad9cecb15 container secret-volume-test: 
STEP: delete the pod
Jan  6 14:37:47.066: INFO: Waiting for pod pod-secrets-b9dfbfd4-2f91-49df-8491-38bad9cecb15 to disappear
Jan  6 14:37:47.073: INFO: Pod pod-secrets-b9dfbfd4-2f91-49df-8491-38bad9cecb15 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:37:47.073: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3943" for this suite.
Jan  6 14:37:53.101: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:37:53.230: INFO: namespace secrets-3943 deletion completed in 6.150324752s

• [SLOW TEST:14.441 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:33
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:37:53.231: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0106 14:37:54.039790       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  6 14:37:54.039: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:37:54.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2048" for this suite.
Jan  6 14:38:00.120: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:38:00.252: INFO: namespace gc-2048 deletion completed in 6.209727601s

• [SLOW TEST:7.021 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
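"Not orphaning" here means deleting the Deployment with a propagation policy other than Orphan, so the garbage collector removes the owned ReplicaSet and its pods afterwards; the "expected 0 rs, got 1 rs" steps above are the test polling while that collection is still in flight. A sketch of such a delete with v0.15.x signatures (the deployment name is hypothetical; the test generates its own):

  package main

  import (
      metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
      "k8s.io/client-go/kubernetes"
      "k8s.io/client-go/tools/clientcmd"
  )

  func main() {
      config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
      if err != nil {
          panic(err)
      }
      cs, err := kubernetes.NewForConfig(config)
      if err != nil {
          panic(err)
      }

      // Background propagation: the owner is deleted first, then the
      // garbage collector deletes the dependent ReplicaSet and pods
      // (DeletePropagationOrphan would leave them behind instead).
      policy := metav1.DeletePropagationBackground
      err = cs.AppsV1().Deployments("default").Delete(
          "demo-deployment",
          &metav1.DeleteOptions{PropagationPolicy: &policy},
      )
      if err != nil {
          panic(err)
      }
  }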
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:38:00.252: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  6 14:38:00.389: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0dcaecdc-f755-497d-a4db-62d30e64541c" in namespace "projected-7079" to be "success or failure"
Jan  6 14:38:00.404: INFO: Pod "downwardapi-volume-0dcaecdc-f755-497d-a4db-62d30e64541c": Phase="Pending", Reason="", readiness=false. Elapsed: 14.734795ms
Jan  6 14:38:02.417: INFO: Pod "downwardapi-volume-0dcaecdc-f755-497d-a4db-62d30e64541c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028097192s
Jan  6 14:38:04.489: INFO: Pod "downwardapi-volume-0dcaecdc-f755-497d-a4db-62d30e64541c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100348338s
Jan  6 14:38:06.511: INFO: Pod "downwardapi-volume-0dcaecdc-f755-497d-a4db-62d30e64541c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122111558s
Jan  6 14:38:08.576: INFO: Pod "downwardapi-volume-0dcaecdc-f755-497d-a4db-62d30e64541c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.186777447s
STEP: Saw pod success
Jan  6 14:38:08.576: INFO: Pod "downwardapi-volume-0dcaecdc-f755-497d-a4db-62d30e64541c" satisfied condition "success or failure"
Jan  6 14:38:08.582: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-0dcaecdc-f755-497d-a4db-62d30e64541c container client-container: 
STEP: delete the pod
Jan  6 14:38:08.742: INFO: Waiting for pod downwardapi-volume-0dcaecdc-f755-497d-a4db-62d30e64541c to disappear
Jan  6 14:38:08.756: INFO: Pod downwardapi-volume-0dcaecdc-f755-497d-a4db-62d30e64541c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:38:08.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7079" for this suite.
Jan  6 14:38:14.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:38:14.964: INFO: namespace projected-7079 deletion completed in 6.20007163s

• [SLOW TEST:14.712 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
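Note: the test above creates a pod whose projected downward API volume carries an explicit DefaultMode, then checks the permission bits the volume plugin applied to the mounted files. A minimal sketch of such a pod follows; the names, image, and mode value are hypothetical stand-ins, not the suite's exact fixture:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: defaultmode-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    # Print the permission bits applied to the projected file.
    command: ["sh", "-c", "ls -l /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400             # assumed mode; applies to every file below
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF

------------------------------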
SSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:38:14.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating replication controller svc-latency-rc in namespace svc-latency-6124
I0106 14:38:15.090348       8 runners.go:180] Created replication controller with name: svc-latency-rc, namespace: svc-latency-6124, replica count: 1
I0106 14:38:16.141311       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0106 14:38:17.141809       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0106 14:38:18.142603       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0106 14:38:19.143069       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0106 14:38:20.143373       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0106 14:38:21.143649       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0106 14:38:22.144034       8 runners.go:180] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan  6 14:38:22.337: INFO: Created: latency-svc-5q2wn
Jan  6 14:38:22.348: INFO: Got endpoints: latency-svc-5q2wn [103.435022ms]
Jan  6 14:38:22.493: INFO: Created: latency-svc-b749l
Jan  6 14:38:22.525: INFO: Got endpoints: latency-svc-b749l [176.688861ms]
Jan  6 14:38:22.580: INFO: Created: latency-svc-zpgkg
Jan  6 14:38:22.649: INFO: Got endpoints: latency-svc-zpgkg [301.074728ms]
Jan  6 14:38:22.675: INFO: Created: latency-svc-bj6qv
Jan  6 14:38:22.684: INFO: Got endpoints: latency-svc-bj6qv [334.828071ms]
Jan  6 14:38:22.734: INFO: Created: latency-svc-fk58k
Jan  6 14:38:22.742: INFO: Got endpoints: latency-svc-fk58k [393.609469ms]
Jan  6 14:38:22.886: INFO: Created: latency-svc-2g2j5
Jan  6 14:38:22.902: INFO: Got endpoints: latency-svc-2g2j5 [553.259881ms]
Jan  6 14:38:23.036: INFO: Created: latency-svc-kdxtm
Jan  6 14:38:23.041: INFO: Got endpoints: latency-svc-kdxtm [692.075463ms]
Jan  6 14:38:23.121: INFO: Created: latency-svc-tgpdw
Jan  6 14:38:23.228: INFO: Got endpoints: latency-svc-tgpdw [878.759178ms]
Jan  6 14:38:23.276: INFO: Created: latency-svc-czpsr
Jan  6 14:38:23.299: INFO: Got endpoints: latency-svc-czpsr [950.426275ms]
Jan  6 14:38:23.411: INFO: Created: latency-svc-c747p
Jan  6 14:38:23.411: INFO: Got endpoints: latency-svc-c747p [1.062444143s]
Jan  6 14:38:23.458: INFO: Created: latency-svc-x2wfz
Jan  6 14:38:23.465: INFO: Got endpoints: latency-svc-x2wfz [1.116943145s]
Jan  6 14:38:23.561: INFO: Created: latency-svc-mwf79
Jan  6 14:38:23.570: INFO: Got endpoints: latency-svc-mwf79 [1.221199348s]
Jan  6 14:38:23.623: INFO: Created: latency-svc-vtbxg
Jan  6 14:38:23.630: INFO: Got endpoints: latency-svc-vtbxg [1.281128958s]
Jan  6 14:38:23.735: INFO: Created: latency-svc-n4xl7
Jan  6 14:38:23.738: INFO: Got endpoints: latency-svc-n4xl7 [1.38845046s]
Jan  6 14:38:23.825: INFO: Created: latency-svc-nkp2g
Jan  6 14:38:23.922: INFO: Got endpoints: latency-svc-nkp2g [1.573541149s]
Jan  6 14:38:23.974: INFO: Created: latency-svc-xxh9t
Jan  6 14:38:23.977: INFO: Got endpoints: latency-svc-xxh9t [1.627507981s]
Jan  6 14:38:24.107: INFO: Created: latency-svc-xqcvk
Jan  6 14:38:24.120: INFO: Got endpoints: latency-svc-xqcvk [1.594334836s]
Jan  6 14:38:24.177: INFO: Created: latency-svc-vpfx9
Jan  6 14:38:24.191: INFO: Got endpoints: latency-svc-vpfx9 [1.541862892s]
Jan  6 14:38:24.275: INFO: Created: latency-svc-6p65s
Jan  6 14:38:24.281: INFO: Got endpoints: latency-svc-6p65s [1.597669763s]
Jan  6 14:38:24.348: INFO: Created: latency-svc-d75ps
Jan  6 14:38:24.443: INFO: Got endpoints: latency-svc-d75ps [1.700730972s]
Jan  6 14:38:24.460: INFO: Created: latency-svc-k7p5h
Jan  6 14:38:24.512: INFO: Got endpoints: latency-svc-k7p5h [1.609374768s]
Jan  6 14:38:24.515: INFO: Created: latency-svc-dh4s8
Jan  6 14:38:24.541: INFO: Got endpoints: latency-svc-dh4s8 [1.499937563s]
Jan  6 14:38:24.635: INFO: Created: latency-svc-7zjwh
Jan  6 14:38:24.636: INFO: Got endpoints: latency-svc-7zjwh [1.40803137s]
Jan  6 14:38:24.694: INFO: Created: latency-svc-m28b6
Jan  6 14:38:24.712: INFO: Got endpoints: latency-svc-m28b6 [1.412826246s]
Jan  6 14:38:24.834: INFO: Created: latency-svc-sr55c
Jan  6 14:38:24.861: INFO: Got endpoints: latency-svc-sr55c [1.44991421s]
Jan  6 14:38:24.956: INFO: Created: latency-svc-c2rwn
Jan  6 14:38:24.961: INFO: Got endpoints: latency-svc-c2rwn [1.49587537s]
Jan  6 14:38:25.016: INFO: Created: latency-svc-wh97t
Jan  6 14:38:25.028: INFO: Got endpoints: latency-svc-wh97t [166.454311ms]
Jan  6 14:38:25.131: INFO: Created: latency-svc-6tlrn
Jan  6 14:38:25.137: INFO: Got endpoints: latency-svc-6tlrn [1.566970446s]
Jan  6 14:38:25.216: INFO: Created: latency-svc-8mv82
Jan  6 14:38:25.274: INFO: Got endpoints: latency-svc-8mv82 [1.643751619s]
Jan  6 14:38:25.313: INFO: Created: latency-svc-wv5bl
Jan  6 14:38:25.327: INFO: Got endpoints: latency-svc-wv5bl [1.589304959s]
Jan  6 14:38:25.439: INFO: Created: latency-svc-2jm7v
Jan  6 14:38:25.449: INFO: Got endpoints: latency-svc-2jm7v [1.52644959s]
Jan  6 14:38:25.504: INFO: Created: latency-svc-8xjd9
Jan  6 14:38:25.513: INFO: Got endpoints: latency-svc-8xjd9 [1.536045959s]
Jan  6 14:38:25.601: INFO: Created: latency-svc-9s8sq
Jan  6 14:38:25.608: INFO: Got endpoints: latency-svc-9s8sq [1.488771889s]
Jan  6 14:38:25.649: INFO: Created: latency-svc-nhdqf
Jan  6 14:38:25.653: INFO: Got endpoints: latency-svc-nhdqf [1.461172178s]
Jan  6 14:38:25.759: INFO: Created: latency-svc-k7lqw
Jan  6 14:38:25.772: INFO: Got endpoints: latency-svc-k7lqw [1.490635328s]
Jan  6 14:38:25.831: INFO: Created: latency-svc-klwbp
Jan  6 14:38:25.836: INFO: Got endpoints: latency-svc-klwbp [1.392535207s]
Jan  6 14:38:25.953: INFO: Created: latency-svc-5ggnz
Jan  6 14:38:25.960: INFO: Got endpoints: latency-svc-5ggnz [1.447578973s]
Jan  6 14:38:26.095: INFO: Created: latency-svc-tgl67
Jan  6 14:38:26.106: INFO: Got endpoints: latency-svc-tgl67 [1.564512055s]
Jan  6 14:38:26.195: INFO: Created: latency-svc-bjq59
Jan  6 14:38:26.313: INFO: Got endpoints: latency-svc-bjq59 [1.676364061s]
Jan  6 14:38:26.354: INFO: Created: latency-svc-gjz9l
Jan  6 14:38:26.354: INFO: Got endpoints: latency-svc-gjz9l [1.641173273s]
Jan  6 14:38:26.394: INFO: Created: latency-svc-d92dw
Jan  6 14:38:26.400: INFO: Got endpoints: latency-svc-d92dw [1.438996645s]
Jan  6 14:38:26.515: INFO: Created: latency-svc-pqg55
Jan  6 14:38:26.517: INFO: Got endpoints: latency-svc-pqg55 [1.488625842s]
Jan  6 14:38:26.637: INFO: Created: latency-svc-8kqr7
Jan  6 14:38:26.682: INFO: Created: latency-svc-6sh7w
Jan  6 14:38:26.682: INFO: Got endpoints: latency-svc-8kqr7 [1.544405535s]
Jan  6 14:38:26.704: INFO: Got endpoints: latency-svc-6sh7w [1.429459885s]
Jan  6 14:38:27.008: INFO: Created: latency-svc-mgcjz
Jan  6 14:38:27.066: INFO: Got endpoints: latency-svc-mgcjz [1.739259724s]
Jan  6 14:38:27.224: INFO: Created: latency-svc-w5r5f
Jan  6 14:38:27.380: INFO: Got endpoints: latency-svc-w5r5f [1.930279788s]
Jan  6 14:38:27.380: INFO: Created: latency-svc-g5dst
Jan  6 14:38:27.403: INFO: Got endpoints: latency-svc-g5dst [1.890046253s]
Jan  6 14:38:27.474: INFO: Created: latency-svc-k4jhq
Jan  6 14:38:27.576: INFO: Got endpoints: latency-svc-k4jhq [1.967699153s]
Jan  6 14:38:27.583: INFO: Created: latency-svc-p2vxt
Jan  6 14:38:27.634: INFO: Got endpoints: latency-svc-p2vxt [1.981091639s]
Jan  6 14:38:27.730: INFO: Created: latency-svc-xn2d8
Jan  6 14:38:27.746: INFO: Got endpoints: latency-svc-xn2d8 [1.973640247s]
Jan  6 14:38:27.799: INFO: Created: latency-svc-tcf6j
Jan  6 14:38:27.807: INFO: Got endpoints: latency-svc-tcf6j [1.970402413s]
Jan  6 14:38:27.913: INFO: Created: latency-svc-kd2bd
Jan  6 14:38:27.928: INFO: Got endpoints: latency-svc-kd2bd [1.968668607s]
Jan  6 14:38:28.048: INFO: Created: latency-svc-4hdkm
Jan  6 14:38:28.064: INFO: Got endpoints: latency-svc-4hdkm [1.957996966s]
Jan  6 14:38:28.107: INFO: Created: latency-svc-fjmn2
Jan  6 14:38:28.119: INFO: Got endpoints: latency-svc-fjmn2 [1.806236658s]
Jan  6 14:38:28.216: INFO: Created: latency-svc-hznwk
Jan  6 14:38:28.217: INFO: Got endpoints: latency-svc-hznwk [1.862985422s]
Jan  6 14:38:28.278: INFO: Created: latency-svc-pdwrc
Jan  6 14:38:28.285: INFO: Got endpoints: latency-svc-pdwrc [1.884251448s]
Jan  6 14:38:28.375: INFO: Created: latency-svc-w2668
Jan  6 14:38:28.405: INFO: Got endpoints: latency-svc-w2668 [1.888132888s]
Jan  6 14:38:28.464: INFO: Created: latency-svc-fk9w7
Jan  6 14:38:28.541: INFO: Got endpoints: latency-svc-fk9w7 [1.858332672s]
Jan  6 14:38:28.602: INFO: Created: latency-svc-npdfj
Jan  6 14:38:28.621: INFO: Got endpoints: latency-svc-npdfj [1.917119628s]
Jan  6 14:38:28.743: INFO: Created: latency-svc-trns6
Jan  6 14:38:28.758: INFO: Got endpoints: latency-svc-trns6 [1.690742696s]
Jan  6 14:38:28.861: INFO: Created: latency-svc-6ztks
Jan  6 14:38:28.883: INFO: Got endpoints: latency-svc-6ztks [1.503760261s]
Jan  6 14:38:28.934: INFO: Created: latency-svc-tmtch
Jan  6 14:38:29.060: INFO: Got endpoints: latency-svc-tmtch [1.65609598s]
Jan  6 14:38:29.098: INFO: Created: latency-svc-dfgr4
Jan  6 14:38:29.101: INFO: Got endpoints: latency-svc-dfgr4 [1.524451113s]
Jan  6 14:38:29.154: INFO: Created: latency-svc-85f9g
Jan  6 14:38:29.232: INFO: Got endpoints: latency-svc-85f9g [1.598091698s]
Jan  6 14:38:29.292: INFO: Created: latency-svc-lt795
Jan  6 14:38:29.298: INFO: Got endpoints: latency-svc-lt795 [1.55142213s]
Jan  6 14:38:29.454: INFO: Created: latency-svc-2czxx
Jan  6 14:38:29.461: INFO: Got endpoints: latency-svc-2czxx [1.654120648s]
Jan  6 14:38:29.655: INFO: Created: latency-svc-krxdq
Jan  6 14:38:29.674: INFO: Got endpoints: latency-svc-krxdq [1.744571917s]
Jan  6 14:38:29.749: INFO: Created: latency-svc-nfccn
Jan  6 14:38:29.796: INFO: Got endpoints: latency-svc-nfccn [1.731897101s]
Jan  6 14:38:29.835: INFO: Created: latency-svc-fhs9f
Jan  6 14:38:29.841: INFO: Got endpoints: latency-svc-fhs9f [1.722219168s]
Jan  6 14:38:29.895: INFO: Created: latency-svc-tw6dl
Jan  6 14:38:29.965: INFO: Got endpoints: latency-svc-tw6dl [1.748331278s]
Jan  6 14:38:29.992: INFO: Created: latency-svc-46v7b
Jan  6 14:38:29.998: INFO: Got endpoints: latency-svc-46v7b [1.71342302s]
Jan  6 14:38:30.045: INFO: Created: latency-svc-2c72v
Jan  6 14:38:30.126: INFO: Got endpoints: latency-svc-2c72v [1.721103165s]
Jan  6 14:38:30.130: INFO: Created: latency-svc-r2hhq
Jan  6 14:38:30.136: INFO: Got endpoints: latency-svc-r2hhq [1.595035779s]
Jan  6 14:38:30.191: INFO: Created: latency-svc-qdmbx
Jan  6 14:38:30.205: INFO: Got endpoints: latency-svc-qdmbx [1.583941108s]
Jan  6 14:38:30.387: INFO: Created: latency-svc-2mmwf
Jan  6 14:38:30.397: INFO: Got endpoints: latency-svc-2mmwf [1.638500103s]
Jan  6 14:38:30.456: INFO: Created: latency-svc-8qglk
Jan  6 14:38:30.472: INFO: Got endpoints: latency-svc-8qglk [1.588471113s]
Jan  6 14:38:30.574: INFO: Created: latency-svc-b55hp
Jan  6 14:38:30.580: INFO: Got endpoints: latency-svc-b55hp [1.519480192s]
Jan  6 14:38:30.626: INFO: Created: latency-svc-z94mf
Jan  6 14:38:30.637: INFO: Got endpoints: latency-svc-z94mf [1.536259243s]
Jan  6 14:38:30.724: INFO: Created: latency-svc-w5f7x
Jan  6 14:38:30.724: INFO: Got endpoints: latency-svc-w5f7x [1.491763572s]
Jan  6 14:38:30.772: INFO: Created: latency-svc-7b4zt
Jan  6 14:38:30.776: INFO: Got endpoints: latency-svc-7b4zt [1.477889354s]
Jan  6 14:38:30.877: INFO: Created: latency-svc-7fzgh
Jan  6 14:38:30.884: INFO: Got endpoints: latency-svc-7fzgh [1.423531439s]
Jan  6 14:38:30.944: INFO: Created: latency-svc-n2rpc
Jan  6 14:38:30.945: INFO: Got endpoints: latency-svc-n2rpc [1.271360723s]
Jan  6 14:38:31.096: INFO: Created: latency-svc-wfpfw
Jan  6 14:38:31.109: INFO: Got endpoints: latency-svc-wfpfw [1.31275767s]
Jan  6 14:38:31.161: INFO: Created: latency-svc-wg8jg
Jan  6 14:38:31.164: INFO: Got endpoints: latency-svc-wg8jg [1.322748424s]
Jan  6 14:38:31.372: INFO: Created: latency-svc-ghzzb
Jan  6 14:38:31.475: INFO: Got endpoints: latency-svc-ghzzb [1.509582969s]
Jan  6 14:38:31.521: INFO: Created: latency-svc-chbwq
Jan  6 14:38:31.534: INFO: Got endpoints: latency-svc-chbwq [1.535974258s]
Jan  6 14:38:31.625: INFO: Created: latency-svc-9jvgd
Jan  6 14:38:31.629: INFO: Got endpoints: latency-svc-9jvgd [1.502688695s]
Jan  6 14:38:31.697: INFO: Created: latency-svc-k8pln
Jan  6 14:38:31.714: INFO: Got endpoints: latency-svc-k8pln [1.577042345s]
Jan  6 14:38:31.825: INFO: Created: latency-svc-hnfk4
Jan  6 14:38:31.833: INFO: Got endpoints: latency-svc-hnfk4 [1.626992615s]
Jan  6 14:38:31.866: INFO: Created: latency-svc-zq9mw
Jan  6 14:38:31.880: INFO: Got endpoints: latency-svc-zq9mw [1.482391738s]
Jan  6 14:38:32.082: INFO: Created: latency-svc-kc2hh
Jan  6 14:38:32.116: INFO: Got endpoints: latency-svc-kc2hh [1.643531362s]
Jan  6 14:38:32.177: INFO: Created: latency-svc-j9qll
Jan  6 14:38:32.270: INFO: Got endpoints: latency-svc-j9qll [1.689469481s]
Jan  6 14:38:32.315: INFO: Created: latency-svc-5gzm4
Jan  6 14:38:32.327: INFO: Got endpoints: latency-svc-5gzm4 [1.690212836s]
Jan  6 14:38:32.481: INFO: Created: latency-svc-2pw7g
Jan  6 14:38:32.494: INFO: Got endpoints: latency-svc-2pw7g [1.770118245s]
Jan  6 14:38:32.677: INFO: Created: latency-svc-gmbk4
Jan  6 14:38:32.740: INFO: Created: latency-svc-fsfk8
Jan  6 14:38:32.740: INFO: Got endpoints: latency-svc-gmbk4 [1.964481151s]
Jan  6 14:38:32.760: INFO: Got endpoints: latency-svc-fsfk8 [1.875468386s]
Jan  6 14:38:32.856: INFO: Created: latency-svc-b5rmk
Jan  6 14:38:32.924: INFO: Created: latency-svc-5j5pq
Jan  6 14:38:32.925: INFO: Got endpoints: latency-svc-b5rmk [1.97922715s]
Jan  6 14:38:32.934: INFO: Got endpoints: latency-svc-5j5pq [1.824578835s]
Jan  6 14:38:33.101: INFO: Created: latency-svc-rv4k5
Jan  6 14:38:33.119: INFO: Got endpoints: latency-svc-rv4k5 [1.954437866s]
Jan  6 14:38:33.194: INFO: Created: latency-svc-8px9k
Jan  6 14:38:33.263: INFO: Got endpoints: latency-svc-8px9k [1.787392478s]
Jan  6 14:38:33.289: INFO: Created: latency-svc-dv5ls
Jan  6 14:38:33.298: INFO: Got endpoints: latency-svc-dv5ls [1.764078119s]
Jan  6 14:38:33.489: INFO: Created: latency-svc-wrjsr
Jan  6 14:38:33.518: INFO: Got endpoints: latency-svc-wrjsr [1.888209529s]
Jan  6 14:38:33.560: INFO: Created: latency-svc-mlc47
Jan  6 14:38:33.563: INFO: Got endpoints: latency-svc-mlc47 [1.849706485s]
Jan  6 14:38:33.667: INFO: Created: latency-svc-njkzs
Jan  6 14:38:33.681: INFO: Got endpoints: latency-svc-njkzs [1.847729066s]
Jan  6 14:38:33.735: INFO: Created: latency-svc-2q9qk
Jan  6 14:38:33.742: INFO: Got endpoints: latency-svc-2q9qk [1.862035401s]
Jan  6 14:38:33.874: INFO: Created: latency-svc-jl9dp
Jan  6 14:38:33.932: INFO: Got endpoints: latency-svc-jl9dp [1.815465543s]
Jan  6 14:38:33.950: INFO: Created: latency-svc-nx9k6
Jan  6 14:38:34.107: INFO: Got endpoints: latency-svc-nx9k6 [1.837635896s]
Jan  6 14:38:34.108: INFO: Created: latency-svc-8hww4
Jan  6 14:38:34.121: INFO: Got endpoints: latency-svc-8hww4 [1.793585608s]
Jan  6 14:38:34.179: INFO: Created: latency-svc-c7lkq
Jan  6 14:38:34.186: INFO: Got endpoints: latency-svc-c7lkq [1.690555718s]
Jan  6 14:38:34.388: INFO: Created: latency-svc-s4hg9
Jan  6 14:38:34.399: INFO: Got endpoints: latency-svc-s4hg9 [1.658955158s]
Jan  6 14:38:34.590: INFO: Created: latency-svc-qg6xz
Jan  6 14:38:34.603: INFO: Got endpoints: latency-svc-qg6xz [1.843129612s]
Jan  6 14:38:34.668: INFO: Created: latency-svc-jvmnr
Jan  6 14:38:34.790: INFO: Got endpoints: latency-svc-jvmnr [1.865544596s]
Jan  6 14:38:34.824: INFO: Created: latency-svc-sjptr
Jan  6 14:38:34.825: INFO: Got endpoints: latency-svc-sjptr [1.890814288s]
Jan  6 14:38:34.898: INFO: Created: latency-svc-j77bs
Jan  6 14:38:35.095: INFO: Got endpoints: latency-svc-j77bs [1.976065726s]
Jan  6 14:38:35.107: INFO: Created: latency-svc-4bsxp
Jan  6 14:38:35.120: INFO: Got endpoints: latency-svc-4bsxp [1.857299277s]
Jan  6 14:38:35.328: INFO: Created: latency-svc-7kwkx
Jan  6 14:38:35.359: INFO: Got endpoints: latency-svc-7kwkx [2.060188024s]
Jan  6 14:38:35.422: INFO: Created: latency-svc-pm7xv
Jan  6 14:38:35.557: INFO: Got endpoints: latency-svc-pm7xv [2.039015484s]
Jan  6 14:38:35.570: INFO: Created: latency-svc-hpc9t
Jan  6 14:38:35.586: INFO: Got endpoints: latency-svc-hpc9t [2.022050219s]
Jan  6 14:38:35.758: INFO: Created: latency-svc-dnscl
Jan  6 14:38:35.769: INFO: Got endpoints: latency-svc-dnscl [2.088342948s]
Jan  6 14:38:35.831: INFO: Created: latency-svc-jg6k5
Jan  6 14:38:35.835: INFO: Got endpoints: latency-svc-jg6k5 [2.092824831s]
Jan  6 14:38:36.046: INFO: Created: latency-svc-926gv
Jan  6 14:38:36.240: INFO: Got endpoints: latency-svc-926gv [2.308324123s]
Jan  6 14:38:36.248: INFO: Created: latency-svc-kkr65
Jan  6 14:38:36.253: INFO: Got endpoints: latency-svc-kkr65 [2.144744178s]
Jan  6 14:38:36.466: INFO: Created: latency-svc-p5t9s
Jan  6 14:38:36.474: INFO: Got endpoints: latency-svc-p5t9s [2.352913844s]
Jan  6 14:38:36.544: INFO: Created: latency-svc-2q5d8
Jan  6 14:38:36.631: INFO: Got endpoints: latency-svc-2q5d8 [2.444885417s]
Jan  6 14:38:36.661: INFO: Created: latency-svc-n9rzj
Jan  6 14:38:36.687: INFO: Got endpoints: latency-svc-n9rzj [2.288215715s]
Jan  6 14:38:36.791: INFO: Created: latency-svc-mx2vm
Jan  6 14:38:36.840: INFO: Got endpoints: latency-svc-mx2vm [2.236589527s]
Jan  6 14:38:36.845: INFO: Created: latency-svc-5xhp5
Jan  6 14:38:36.852: INFO: Got endpoints: latency-svc-5xhp5 [2.061139743s]
Jan  6 14:38:36.941: INFO: Created: latency-svc-zr2f8
Jan  6 14:38:36.978: INFO: Got endpoints: latency-svc-zr2f8 [2.152739451s]
Jan  6 14:38:36.980: INFO: Created: latency-svc-ph5r6
Jan  6 14:38:36.983: INFO: Got endpoints: latency-svc-ph5r6 [1.887393673s]
Jan  6 14:38:37.127: INFO: Created: latency-svc-7lcr6
Jan  6 14:38:37.135: INFO: Got endpoints: latency-svc-7lcr6 [2.014579399s]
Jan  6 14:38:37.181: INFO: Created: latency-svc-dnbhc
Jan  6 14:38:37.196: INFO: Got endpoints: latency-svc-dnbhc [1.837471574s]
Jan  6 14:38:37.396: INFO: Created: latency-svc-trgsv
Jan  6 14:38:37.400: INFO: Got endpoints: latency-svc-trgsv [1.843466071s]
Jan  6 14:38:37.484: INFO: Created: latency-svc-kzklr
Jan  6 14:38:37.635: INFO: Got endpoints: latency-svc-kzklr [2.04904898s]
Jan  6 14:38:37.774: INFO: Created: latency-svc-gjnd8
Jan  6 14:38:37.837: INFO: Got endpoints: latency-svc-gjnd8 [2.067053666s]
Jan  6 14:38:37.845: INFO: Created: latency-svc-5ljgt
Jan  6 14:38:37.866: INFO: Got endpoints: latency-svc-5ljgt [2.030718453s]
Jan  6 14:38:37.949: INFO: Created: latency-svc-4jrtt
Jan  6 14:38:37.992: INFO: Got endpoints: latency-svc-4jrtt [1.750978832s]
Jan  6 14:38:37.995: INFO: Created: latency-svc-nktgs
Jan  6 14:38:38.033: INFO: Created: latency-svc-z9pzm
Jan  6 14:38:38.033: INFO: Got endpoints: latency-svc-nktgs [1.779875048s]
Jan  6 14:38:38.161: INFO: Got endpoints: latency-svc-z9pzm [1.686398107s]
Jan  6 14:38:38.175: INFO: Created: latency-svc-whj9s
Jan  6 14:38:38.183: INFO: Got endpoints: latency-svc-whj9s [1.552318004s]
Jan  6 14:38:38.235: INFO: Created: latency-svc-5mstn
Jan  6 14:38:38.255: INFO: Got endpoints: latency-svc-5mstn [1.566968672s]
Jan  6 14:38:38.365: INFO: Created: latency-svc-8zwm4
Jan  6 14:38:38.372: INFO: Got endpoints: latency-svc-8zwm4 [1.531145555s]
Jan  6 14:38:38.426: INFO: Created: latency-svc-hl4gs
Jan  6 14:38:38.432: INFO: Got endpoints: latency-svc-hl4gs [1.580764895s]
Jan  6 14:38:38.686: INFO: Created: latency-svc-dxbk9
Jan  6 14:38:38.691: INFO: Got endpoints: latency-svc-dxbk9 [1.712829431s]
Jan  6 14:38:38.745: INFO: Created: latency-svc-m65vt
Jan  6 14:38:38.814: INFO: Got endpoints: latency-svc-m65vt [1.830931251s]
Jan  6 14:38:38.866: INFO: Created: latency-svc-wqbfp
Jan  6 14:38:38.916: INFO: Created: latency-svc-nwnhl
Jan  6 14:38:38.985: INFO: Got endpoints: latency-svc-wqbfp [1.849741086s]
Jan  6 14:38:39.017: INFO: Got endpoints: latency-svc-nwnhl [1.820160559s]
Jan  6 14:38:39.024: INFO: Created: latency-svc-w84r7
Jan  6 14:38:39.033: INFO: Got endpoints: latency-svc-w84r7 [1.632793869s]
Jan  6 14:38:39.169: INFO: Created: latency-svc-jch84
Jan  6 14:38:39.172: INFO: Got endpoints: latency-svc-jch84 [1.53688224s]
Jan  6 14:38:39.257: INFO: Created: latency-svc-jlqp4
Jan  6 14:38:39.264: INFO: Got endpoints: latency-svc-jlqp4 [1.427133374s]
Jan  6 14:38:39.367: INFO: Created: latency-svc-x8fbv
Jan  6 14:38:39.388: INFO: Got endpoints: latency-svc-x8fbv [1.521944042s]
Jan  6 14:38:39.435: INFO: Created: latency-svc-xchp9
Jan  6 14:38:39.570: INFO: Got endpoints: latency-svc-xchp9 [1.578387696s]
Jan  6 14:38:39.596: INFO: Created: latency-svc-7dcz4
Jan  6 14:38:39.604: INFO: Got endpoints: latency-svc-7dcz4 [1.570911507s]
Jan  6 14:38:39.629: INFO: Created: latency-svc-qqv6n
Jan  6 14:38:39.642: INFO: Got endpoints: latency-svc-qqv6n [1.480640986s]
Jan  6 14:38:39.727: INFO: Created: latency-svc-566rc
Jan  6 14:38:39.733: INFO: Got endpoints: latency-svc-566rc [1.549114883s]
Jan  6 14:38:39.817: INFO: Created: latency-svc-nsbrt
Jan  6 14:38:39.930: INFO: Got endpoints: latency-svc-nsbrt [1.675142021s]
Jan  6 14:38:39.950: INFO: Created: latency-svc-lfrkr
Jan  6 14:38:39.964: INFO: Got endpoints: latency-svc-lfrkr [1.592531849s]
Jan  6 14:38:40.004: INFO: Created: latency-svc-xbcth
Jan  6 14:38:40.012: INFO: Got endpoints: latency-svc-xbcth [1.579405455s]
Jan  6 14:38:40.134: INFO: Created: latency-svc-mhtnw
Jan  6 14:38:40.140: INFO: Got endpoints: latency-svc-mhtnw [1.44949413s]
Jan  6 14:38:40.296: INFO: Created: latency-svc-zjtgq
Jan  6 14:38:40.367: INFO: Created: latency-svc-kmxxt
Jan  6 14:38:40.368: INFO: Got endpoints: latency-svc-zjtgq [1.553540013s]
Jan  6 14:38:40.438: INFO: Got endpoints: latency-svc-kmxxt [1.453139469s]
Jan  6 14:38:40.503: INFO: Created: latency-svc-rsgnk
Jan  6 14:38:40.504: INFO: Got endpoints: latency-svc-rsgnk [1.487136362s]
Jan  6 14:38:40.633: INFO: Created: latency-svc-t7g7k
Jan  6 14:38:40.638: INFO: Got endpoints: latency-svc-t7g7k [1.604084456s]
Jan  6 14:38:40.681: INFO: Created: latency-svc-hk6rh
Jan  6 14:38:40.688: INFO: Got endpoints: latency-svc-hk6rh [1.516167082s]
Jan  6 14:38:40.774: INFO: Created: latency-svc-r6tq7
Jan  6 14:38:40.783: INFO: Got endpoints: latency-svc-r6tq7 [1.519113949s]
Jan  6 14:38:40.848: INFO: Created: latency-svc-wdggs
Jan  6 14:38:40.866: INFO: Got endpoints: latency-svc-wdggs [1.477577489s]
Jan  6 14:38:40.975: INFO: Created: latency-svc-74qwd
Jan  6 14:38:40.982: INFO: Got endpoints: latency-svc-74qwd [1.411949048s]
Jan  6 14:38:41.037: INFO: Created: latency-svc-qzxwc
Jan  6 14:38:41.045: INFO: Got endpoints: latency-svc-qzxwc [1.440773575s]
Jan  6 14:38:41.176: INFO: Created: latency-svc-2qnj6
Jan  6 14:38:41.176: INFO: Got endpoints: latency-svc-2qnj6 [1.533748486s]
Jan  6 14:38:41.209: INFO: Created: latency-svc-4m6tj
Jan  6 14:38:41.365: INFO: Got endpoints: latency-svc-4m6tj [1.632169818s]
Jan  6 14:38:41.411: INFO: Created: latency-svc-5zp7z
Jan  6 14:38:41.416: INFO: Got endpoints: latency-svc-5zp7z [1.485841442s]
Jan  6 14:38:41.567: INFO: Created: latency-svc-fdj6p
Jan  6 14:38:41.575: INFO: Got endpoints: latency-svc-fdj6p [1.611183527s]
Jan  6 14:38:41.619: INFO: Created: latency-svc-ptwrq
Jan  6 14:38:41.626: INFO: Got endpoints: latency-svc-ptwrq [1.614343068s]
Jan  6 14:38:41.719: INFO: Created: latency-svc-npzsc
Jan  6 14:38:41.736: INFO: Got endpoints: latency-svc-npzsc [1.595339296s]
Jan  6 14:38:41.795: INFO: Created: latency-svc-9s2kl
Jan  6 14:38:41.876: INFO: Got endpoints: latency-svc-9s2kl [1.507968006s]
Jan  6 14:38:41.918: INFO: Created: latency-svc-ckbvv
Jan  6 14:38:41.918: INFO: Got endpoints: latency-svc-ckbvv [1.479068535s]
Jan  6 14:38:41.962: INFO: Created: latency-svc-j4hfg
Jan  6 14:38:42.034: INFO: Created: latency-svc-hqzkn
Jan  6 14:38:42.036: INFO: Got endpoints: latency-svc-j4hfg [1.531821457s]
Jan  6 14:38:42.053: INFO: Got endpoints: latency-svc-hqzkn [1.415262375s]
Jan  6 14:38:42.109: INFO: Created: latency-svc-wbpn2
Jan  6 14:38:42.113: INFO: Got endpoints: latency-svc-wbpn2 [1.424699232s]
Jan  6 14:38:42.220: INFO: Created: latency-svc-nh2rk
Jan  6 14:38:42.230: INFO: Got endpoints: latency-svc-nh2rk [1.446496966s]
Jan  6 14:38:42.284: INFO: Created: latency-svc-zgp98
Jan  6 14:38:42.371: INFO: Got endpoints: latency-svc-zgp98 [1.505292783s]
Jan  6 14:38:42.402: INFO: Created: latency-svc-hjhj2
Jan  6 14:38:42.415: INFO: Got endpoints: latency-svc-hjhj2 [1.432886888s]
Jan  6 14:38:42.619: INFO: Created: latency-svc-vqsmb
Jan  6 14:38:42.683: INFO: Got endpoints: latency-svc-vqsmb [1.637963976s]
Jan  6 14:38:42.711: INFO: Created: latency-svc-rt4gw
Jan  6 14:38:42.779: INFO: Got endpoints: latency-svc-rt4gw [1.602766106s]
Jan  6 14:38:42.822: INFO: Created: latency-svc-7rz52
Jan  6 14:38:42.836: INFO: Got endpoints: latency-svc-7rz52 [1.470506695s]
Jan  6 14:38:42.957: INFO: Created: latency-svc-f9p8j
Jan  6 14:38:43.021: INFO: Got endpoints: latency-svc-f9p8j [1.605130971s]
Jan  6 14:38:43.024: INFO: Created: latency-svc-45cdd
Jan  6 14:38:43.037: INFO: Got endpoints: latency-svc-45cdd [1.46103808s]
Jan  6 14:38:43.147: INFO: Created: latency-svc-82dhz
Jan  6 14:38:43.374: INFO: Got endpoints: latency-svc-82dhz [1.747549768s]
Jan  6 14:38:43.375: INFO: Created: latency-svc-4b884
Jan  6 14:38:43.406: INFO: Got endpoints: latency-svc-4b884 [1.669830916s]
Jan  6 14:38:43.457: INFO: Created: latency-svc-gp2n9
Jan  6 14:38:43.466: INFO: Got endpoints: latency-svc-gp2n9 [1.590303691s]
Jan  6 14:38:43.599: INFO: Created: latency-svc-mnsph
Jan  6 14:38:43.618: INFO: Got endpoints: latency-svc-mnsph [1.700547382s]
Jan  6 14:38:43.685: INFO: Created: latency-svc-47rbk
Jan  6 14:38:43.777: INFO: Got endpoints: latency-svc-47rbk [1.741321909s]
Jan  6 14:38:43.833: INFO: Created: latency-svc-c6rx6
Jan  6 14:38:43.851: INFO: Got endpoints: latency-svc-c6rx6 [1.797811427s]
Jan  6 14:38:43.970: INFO: Created: latency-svc-zrk2l
Jan  6 14:38:43.991: INFO: Got endpoints: latency-svc-zrk2l [1.877612805s]
Jan  6 14:38:44.055: INFO: Created: latency-svc-k2slg
Jan  6 14:38:44.064: INFO: Got endpoints: latency-svc-k2slg [1.834611115s]
Jan  6 14:38:44.260: INFO: Created: latency-svc-bznbx
Jan  6 14:38:44.272: INFO: Got endpoints: latency-svc-bznbx [1.90133631s]
Jan  6 14:38:44.349: INFO: Created: latency-svc-npsmq
Jan  6 14:38:44.481: INFO: Got endpoints: latency-svc-npsmq [2.06489938s]
Jan  6 14:38:44.539: INFO: Created: latency-svc-m765f
Jan  6 14:38:44.578: INFO: Got endpoints: latency-svc-m765f [1.894246219s]
Jan  6 14:38:44.750: INFO: Created: latency-svc-rqnvc
Jan  6 14:38:44.751: INFO: Got endpoints: latency-svc-rqnvc [1.97185608s]
Jan  6 14:38:44.788: INFO: Created: latency-svc-x6ljt
Jan  6 14:38:44.802: INFO: Got endpoints: latency-svc-x6ljt [1.9659408s]
Jan  6 14:38:44.955: INFO: Created: latency-svc-zll5q
Jan  6 14:38:44.962: INFO: Got endpoints: latency-svc-zll5q [1.940635662s]
Jan  6 14:38:45.000: INFO: Created: latency-svc-ktc74
Jan  6 14:38:45.008: INFO: Got endpoints: latency-svc-ktc74 [1.97181032s]
Jan  6 14:38:45.009: INFO: Latencies: [166.454311ms 176.688861ms 301.074728ms 334.828071ms 393.609469ms 553.259881ms 692.075463ms 878.759178ms 950.426275ms 1.062444143s 1.116943145s 1.221199348s 1.271360723s 1.281128958s 1.31275767s 1.322748424s 1.38845046s 1.392535207s 1.40803137s 1.411949048s 1.412826246s 1.415262375s 1.423531439s 1.424699232s 1.427133374s 1.429459885s 1.432886888s 1.438996645s 1.440773575s 1.446496966s 1.447578973s 1.44949413s 1.44991421s 1.453139469s 1.46103808s 1.461172178s 1.470506695s 1.477577489s 1.477889354s 1.479068535s 1.480640986s 1.482391738s 1.485841442s 1.487136362s 1.488625842s 1.488771889s 1.490635328s 1.491763572s 1.49587537s 1.499937563s 1.502688695s 1.503760261s 1.505292783s 1.507968006s 1.509582969s 1.516167082s 1.519113949s 1.519480192s 1.521944042s 1.524451113s 1.52644959s 1.531145555s 1.531821457s 1.533748486s 1.535974258s 1.536045959s 1.536259243s 1.53688224s 1.541862892s 1.544405535s 1.549114883s 1.55142213s 1.552318004s 1.553540013s 1.564512055s 1.566968672s 1.566970446s 1.570911507s 1.573541149s 1.577042345s 1.578387696s 1.579405455s 1.580764895s 1.583941108s 1.588471113s 1.589304959s 1.590303691s 1.592531849s 1.594334836s 1.595035779s 1.595339296s 1.597669763s 1.598091698s 1.602766106s 1.604084456s 1.605130971s 1.609374768s 1.611183527s 1.614343068s 1.626992615s 1.627507981s 1.632169818s 1.632793869s 1.637963976s 1.638500103s 1.641173273s 1.643531362s 1.643751619s 1.654120648s 1.65609598s 1.658955158s 1.669830916s 1.675142021s 1.676364061s 1.686398107s 1.689469481s 1.690212836s 1.690555718s 1.690742696s 1.700547382s 1.700730972s 1.712829431s 1.71342302s 1.721103165s 1.722219168s 1.731897101s 1.739259724s 1.741321909s 1.744571917s 1.747549768s 1.748331278s 1.750978832s 1.764078119s 1.770118245s 1.779875048s 1.787392478s 1.793585608s 1.797811427s 1.806236658s 1.815465543s 1.820160559s 1.824578835s 1.830931251s 1.834611115s 1.837471574s 1.837635896s 1.843129612s 1.843466071s 1.847729066s 1.849706485s 1.849741086s 1.857299277s 1.858332672s 1.862035401s 1.862985422s 1.865544596s 1.875468386s 1.877612805s 1.884251448s 1.887393673s 1.888132888s 1.888209529s 1.890046253s 1.890814288s 1.894246219s 1.90133631s 1.917119628s 1.930279788s 1.940635662s 1.954437866s 1.957996966s 1.964481151s 1.9659408s 1.967699153s 1.968668607s 1.970402413s 1.97181032s 1.97185608s 1.973640247s 1.976065726s 1.97922715s 1.981091639s 2.014579399s 2.022050219s 2.030718453s 2.039015484s 2.04904898s 2.060188024s 2.061139743s 2.06489938s 2.067053666s 2.088342948s 2.092824831s 2.144744178s 2.152739451s 2.236589527s 2.288215715s 2.308324123s 2.352913844s 2.444885417s]
Jan  6 14:38:45.009: INFO: 50 %ile: 1.627507981s
Jan  6 14:38:45.009: INFO: 90 %ile: 1.97922715s
Jan  6 14:38:45.009: INFO: 99 %ile: 2.352913844s
Jan  6 14:38:45.009: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:38:45.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-6124" for this suite.
Jan  6 14:39:21.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:39:21.224: INFO: namespace svc-latency-6124 deletion completed in 36.198333549s

• [SLOW TEST:66.260 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
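Note: each "Created"/"Got endpoints" pair above times how long a new Service selecting the running svc-latency-rc pod takes to get populated Endpoints; the suite then reports 50/90/99 %ile over 200 samples. A rough shell sketch of one such measurement, with hypothetical service and namespace names, and polling standing in for the watch the real test uses:

start=$(date +%s%N)
kubectl expose rc svc-latency-rc --name=latency-svc-demo --port=80 -n svc-latency-demo
# Poll until the Endpoints object lists at least one pod IP.
until [ -n "$(kubectl get endpoints latency-svc-demo -n svc-latency-demo \
      -o jsonpath='{.subsets[*].addresses[*].ip}' 2>/dev/null)" ]; do
  sleep 0.05
done
end=$(date +%s%N)
echo "endpoint latency: $(( (end - start) / 1000000 ))ms"

------------------------------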
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:39:21.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating pod
Jan  6 14:39:29.462: INFO: Pod pod-hostip-3e641604-bd50-402a-94e0-06daab5ebb07 has hostIP: 10.96.3.65
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:39:29.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2053" for this suite.
Jan  6 14:39:51.647: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:39:51.795: INFO: namespace pods-2053 deletion completed in 22.328935675s

• [SLOW TEST:30.571 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
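Note: the hostIP assertion above can be reproduced by hand; once a pod is running, its .status.hostIP carries the address of the node it was scheduled to (pod name here is hypothetical):

kubectl get pod pod-hostip-demo -o jsonpath='{.status.hostIP}'

------------------------------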
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:39:51.796: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test env composition
Jan  6 14:39:51.977: INFO: Waiting up to 5m0s for pod "var-expansion-27247351-f78f-441c-b46f-2193b40ec46e" in namespace "var-expansion-406" to be "success or failure"
Jan  6 14:39:51.991: INFO: Pod "var-expansion-27247351-f78f-441c-b46f-2193b40ec46e": Phase="Pending", Reason="", readiness=false. Elapsed: 14.01063ms
Jan  6 14:39:54.002: INFO: Pod "var-expansion-27247351-f78f-441c-b46f-2193b40ec46e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025463083s
Jan  6 14:39:56.014: INFO: Pod "var-expansion-27247351-f78f-441c-b46f-2193b40ec46e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036877863s
Jan  6 14:39:58.031: INFO: Pod "var-expansion-27247351-f78f-441c-b46f-2193b40ec46e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053517751s
Jan  6 14:40:00.053: INFO: Pod "var-expansion-27247351-f78f-441c-b46f-2193b40ec46e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075800089s
STEP: Saw pod success
Jan  6 14:40:00.053: INFO: Pod "var-expansion-27247351-f78f-441c-b46f-2193b40ec46e" satisfied condition "success or failure"
Jan  6 14:40:00.057: INFO: Trying to get logs from node iruya-node pod var-expansion-27247351-f78f-441c-b46f-2193b40ec46e container dapi-container: 
STEP: delete the pod
Jan  6 14:40:00.731: INFO: Waiting for pod var-expansion-27247351-f78f-441c-b46f-2193b40ec46e to disappear
Jan  6 14:40:00.746: INFO: Pod var-expansion-27247351-f78f-441c-b46f-2193b40ec46e no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:40:00.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-406" for this suite.
Jan  6 14:40:06.863: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:40:07.028: INFO: namespace var-expansion-406 deletion completed in 6.236550645s

• [SLOW TEST:15.232 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
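Note: the env-composition pod above relies on Kubernetes expanding $(VAR) references in an env value against variables declared earlier in the same list. A minimal sketch with hypothetical names and values:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo $FOOBAR"]
    env:
    - name: FOO
      value: "foo-value"
    - name: BAR
      value: "bar-value"
    - name: FOOBAR
      value: "$(FOO);;$(BAR)"       # composed from the two vars above
EOF

------------------------------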
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:40:07.028: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  6 14:40:07.163: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1ab152db-2fbd-405b-b3df-0f01adcb114f" in namespace "downward-api-9193" to be "success or failure"
Jan  6 14:40:07.174: INFO: Pod "downwardapi-volume-1ab152db-2fbd-405b-b3df-0f01adcb114f": Phase="Pending", Reason="", readiness=false. Elapsed: 11.298166ms
Jan  6 14:40:09.188: INFO: Pod "downwardapi-volume-1ab152db-2fbd-405b-b3df-0f01adcb114f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025024672s
Jan  6 14:40:11.196: INFO: Pod "downwardapi-volume-1ab152db-2fbd-405b-b3df-0f01adcb114f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033366053s
Jan  6 14:40:13.205: INFO: Pod "downwardapi-volume-1ab152db-2fbd-405b-b3df-0f01adcb114f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042643518s
Jan  6 14:40:15.214: INFO: Pod "downwardapi-volume-1ab152db-2fbd-405b-b3df-0f01adcb114f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0511876s
STEP: Saw pod success
Jan  6 14:40:15.214: INFO: Pod "downwardapi-volume-1ab152db-2fbd-405b-b3df-0f01adcb114f" satisfied condition "success or failure"
Jan  6 14:40:15.220: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-1ab152db-2fbd-405b-b3df-0f01adcb114f container client-container: 
STEP: delete the pod
Jan  6 14:40:15.350: INFO: Waiting for pod downwardapi-volume-1ab152db-2fbd-405b-b3df-0f01adcb114f to disappear
Jan  6 14:40:15.362: INFO: Pod downwardapi-volume-1ab152db-2fbd-405b-b3df-0f01adcb114f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:40:15.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9193" for this suite.
Jan  6 14:40:21.390: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:40:21.532: INFO: namespace downward-api-9193 deletion completed in 6.162018954s

• [SLOW TEST:14.503 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
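Note: "podname only" means the downward API volume exposes just metadata.name as a single file, which the client container prints to its logs. A minimal sketch, with hypothetical names and image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: podname-demo                # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF

------------------------------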
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:40:21.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-6541
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  6 14:40:21.663: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  6 14:40:53.999: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6541 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 14:40:53.999: INFO: >>> kubeConfig: /root/.kube/config
I0106 14:40:54.072206       8 log.go:172] (0xc001742a50) (0xc000ab3360) Create stream
I0106 14:40:54.072266       8 log.go:172] (0xc001742a50) (0xc000ab3360) Stream added, broadcasting: 1
I0106 14:40:54.079361       8 log.go:172] (0xc001742a50) Reply frame received for 1
I0106 14:40:54.079436       8 log.go:172] (0xc001742a50) (0xc001ba2c80) Create stream
I0106 14:40:54.079449       8 log.go:172] (0xc001742a50) (0xc001ba2c80) Stream added, broadcasting: 3
I0106 14:40:54.081239       8 log.go:172] (0xc001742a50) Reply frame received for 3
I0106 14:40:54.081265       8 log.go:172] (0xc001742a50) (0xc0019820a0) Create stream
I0106 14:40:54.081271       8 log.go:172] (0xc001742a50) (0xc0019820a0) Stream added, broadcasting: 5
I0106 14:40:54.082516       8 log.go:172] (0xc001742a50) Reply frame received for 5
I0106 14:40:55.209585       8 log.go:172] (0xc001742a50) Data frame received for 3
I0106 14:40:55.209639       8 log.go:172] (0xc001ba2c80) (3) Data frame handling
I0106 14:40:55.209662       8 log.go:172] (0xc001ba2c80) (3) Data frame sent
I0106 14:40:55.401371       8 log.go:172] (0xc001742a50) (0xc001ba2c80) Stream removed, broadcasting: 3
I0106 14:40:55.401741       8 log.go:172] (0xc001742a50) (0xc0019820a0) Stream removed, broadcasting: 5
I0106 14:40:55.401820       8 log.go:172] (0xc001742a50) Data frame received for 1
I0106 14:40:55.401854       8 log.go:172] (0xc000ab3360) (1) Data frame handling
I0106 14:40:55.401891       8 log.go:172] (0xc000ab3360) (1) Data frame sent
I0106 14:40:55.401920       8 log.go:172] (0xc001742a50) (0xc000ab3360) Stream removed, broadcasting: 1
I0106 14:40:55.401952       8 log.go:172] (0xc001742a50) Go away received
I0106 14:40:55.402286       8 log.go:172] (0xc001742a50) (0xc000ab3360) Stream removed, broadcasting: 1
I0106 14:40:55.402317       8 log.go:172] (0xc001742a50) (0xc001ba2c80) Stream removed, broadcasting: 3
I0106 14:40:55.402331       8 log.go:172] (0xc001742a50) (0xc0019820a0) Stream removed, broadcasting: 5
Jan  6 14:40:55.402: INFO: Found all expected endpoints: [netserver-0]
Jan  6 14:40:55.413: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6541 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 14:40:55.413: INFO: >>> kubeConfig: /root/.kube/config
I0106 14:40:55.488737       8 log.go:172] (0xc000a9d080) (0xc001982be0) Create stream
I0106 14:40:55.488775       8 log.go:172] (0xc000a9d080) (0xc001982be0) Stream added, broadcasting: 1
I0106 14:40:55.495455       8 log.go:172] (0xc000a9d080) Reply frame received for 1
I0106 14:40:55.495513       8 log.go:172] (0xc000a9d080) (0xc000a0da40) Create stream
I0106 14:40:55.495531       8 log.go:172] (0xc000a9d080) (0xc000a0da40) Stream added, broadcasting: 3
I0106 14:40:55.497118       8 log.go:172] (0xc000a9d080) Reply frame received for 3
I0106 14:40:55.497161       8 log.go:172] (0xc000a9d080) (0xc000ab3400) Create stream
I0106 14:40:55.497175       8 log.go:172] (0xc000a9d080) (0xc000ab3400) Stream added, broadcasting: 5
I0106 14:40:55.498482       8 log.go:172] (0xc000a9d080) Reply frame received for 5
I0106 14:40:56.620889       8 log.go:172] (0xc000a9d080) Data frame received for 3
I0106 14:40:56.621001       8 log.go:172] (0xc000a0da40) (3) Data frame handling
I0106 14:40:56.621048       8 log.go:172] (0xc000a0da40) (3) Data frame sent
I0106 14:40:56.826415       8 log.go:172] (0xc000a9d080) Data frame received for 1
I0106 14:40:56.826519       8 log.go:172] (0xc001982be0) (1) Data frame handling
I0106 14:40:56.826621       8 log.go:172] (0xc001982be0) (1) Data frame sent
I0106 14:40:56.826755       8 log.go:172] (0xc000a9d080) (0xc001982be0) Stream removed, broadcasting: 1
I0106 14:40:56.827152       8 log.go:172] (0xc000a9d080) (0xc000a0da40) Stream removed, broadcasting: 3
I0106 14:40:56.827216       8 log.go:172] (0xc000a9d080) (0xc000ab3400) Stream removed, broadcasting: 5
I0106 14:40:56.827307       8 log.go:172] (0xc000a9d080) (0xc001982be0) Stream removed, broadcasting: 1
I0106 14:40:56.827324       8 log.go:172] (0xc000a9d080) (0xc000a0da40) Stream removed, broadcasting: 3
I0106 14:40:56.827339       8 log.go:172] (0xc000a9d080) (0xc000ab3400) Stream removed, broadcasting: 5
I0106 14:40:56.827574       8 log.go:172] (0xc000a9d080) Go away received
Jan  6 14:40:56.828: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:40:56.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6541" for this suite.
Jan  6 14:41:20.877: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:41:21.079: INFO: namespace pod-network-test-6541 deletion completed in 24.237986204s

• [SLOW TEST:59.546 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
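Note: the ExecWithOptions entries above show the probe itself: from the hostexec container the suite sends "hostName" over UDP to each netserver pod, which echoes its hostname back, proving node-to-pod UDP reachability. The equivalent probe can be run by hand using the pod, container, namespace, and endpoint from this run:

kubectl exec host-test-container-pod -n pod-network-test-6541 -c hostexec -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'"

------------------------------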
SSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:41:21.079: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  6 14:41:21.159: INFO: Waiting up to 5m0s for pod "downward-api-5d0c0445-3157-4a8b-a2be-210f64c4177d" in namespace "downward-api-1152" to be "success or failure"
Jan  6 14:41:21.166: INFO: Pod "downward-api-5d0c0445-3157-4a8b-a2be-210f64c4177d": Phase="Pending", Reason="", readiness=false. Elapsed: 7.429144ms
Jan  6 14:41:23.174: INFO: Pod "downward-api-5d0c0445-3157-4a8b-a2be-210f64c4177d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015432381s
Jan  6 14:41:25.185: INFO: Pod "downward-api-5d0c0445-3157-4a8b-a2be-210f64c4177d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02638679s
Jan  6 14:41:27.227: INFO: Pod "downward-api-5d0c0445-3157-4a8b-a2be-210f64c4177d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068054901s
Jan  6 14:41:29.236: INFO: Pod "downward-api-5d0c0445-3157-4a8b-a2be-210f64c4177d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.076623035s
STEP: Saw pod success
Jan  6 14:41:29.236: INFO: Pod "downward-api-5d0c0445-3157-4a8b-a2be-210f64c4177d" satisfied condition "success or failure"
Jan  6 14:41:29.239: INFO: Trying to get logs from node iruya-node pod downward-api-5d0c0445-3157-4a8b-a2be-210f64c4177d container dapi-container: 
STEP: delete the pod
Jan  6 14:41:29.283: INFO: Waiting for pod downward-api-5d0c0445-3157-4a8b-a2be-210f64c4177d to disappear
Jan  6 14:41:29.289: INFO: Pod downward-api-5d0c0445-3157-4a8b-a2be-210f64c4177d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:41:29.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1152" for this suite.
Jan  6 14:41:35.315: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:41:35.419: INFO: namespace downward-api-1152 deletion completed in 6.122715287s

• [SLOW TEST:14.341 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
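Note: the dapi-container above receives its own name, namespace, and IP through fieldRef-backed environment variables. A minimal sketch, with hypothetical names and image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
EOF

------------------------------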
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:41:35.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  6 14:41:35.568: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8129d3c0-7769-477a-b672-86a5020a3137" in namespace "downward-api-1051" to be "success or failure"
Jan  6 14:41:35.584: INFO: Pod "downwardapi-volume-8129d3c0-7769-477a-b672-86a5020a3137": Phase="Pending", Reason="", readiness=false. Elapsed: 16.376576ms
Jan  6 14:41:37.596: INFO: Pod "downwardapi-volume-8129d3c0-7769-477a-b672-86a5020a3137": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028097871s
Jan  6 14:41:39.603: INFO: Pod "downwardapi-volume-8129d3c0-7769-477a-b672-86a5020a3137": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034924556s
Jan  6 14:41:41.612: INFO: Pod "downwardapi-volume-8129d3c0-7769-477a-b672-86a5020a3137": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044186925s
Jan  6 14:41:43.628: INFO: Pod "downwardapi-volume-8129d3c0-7769-477a-b672-86a5020a3137": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059976111s
Jan  6 14:41:45.637: INFO: Pod "downwardapi-volume-8129d3c0-7769-477a-b672-86a5020a3137": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.069125884s
STEP: Saw pod success
Jan  6 14:41:45.637: INFO: Pod "downwardapi-volume-8129d3c0-7769-477a-b672-86a5020a3137" satisfied condition "success or failure"
Jan  6 14:41:45.642: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8129d3c0-7769-477a-b672-86a5020a3137 container client-container: 
STEP: delete the pod
Jan  6 14:41:45.720: INFO: Waiting for pod downwardapi-volume-8129d3c0-7769-477a-b672-86a5020a3137 to disappear
Jan  6 14:41:45.726: INFO: Pod downwardapi-volume-8129d3c0-7769-477a-b672-86a5020a3137 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:41:45.726: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1051" for this suite.
Jan  6 14:41:51.836: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:41:51.976: INFO: namespace downward-api-1051 deletion completed in 6.243894118s

• [SLOW TEST:16.555 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
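Note: here the downward API volume surfaces a resource limit rather than metadata, via a resourceFieldRef pointing at the container's own limits.cpu. A minimal sketch; names, image, and the limit value are hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cpu-limit-demo              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "1"                    # assumed value; the file below reflects it
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
EOF

------------------------------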
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:41:51.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  6 14:41:52.101: INFO: Waiting up to 5m0s for pod "downwardapi-volume-88437696-9c51-454f-ac19-06303c7d4111" in namespace "projected-573" to be "success or failure"
Jan  6 14:41:52.124: INFO: Pod "downwardapi-volume-88437696-9c51-454f-ac19-06303c7d4111": Phase="Pending", Reason="", readiness=false. Elapsed: 23.597348ms
Jan  6 14:41:54.133: INFO: Pod "downwardapi-volume-88437696-9c51-454f-ac19-06303c7d4111": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032528738s
Jan  6 14:41:56.159: INFO: Pod "downwardapi-volume-88437696-9c51-454f-ac19-06303c7d4111": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057944183s
Jan  6 14:41:58.169: INFO: Pod "downwardapi-volume-88437696-9c51-454f-ac19-06303c7d4111": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068172091s
Jan  6 14:42:00.180: INFO: Pod "downwardapi-volume-88437696-9c51-454f-ac19-06303c7d4111": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079782208s
STEP: Saw pod success
Jan  6 14:42:00.180: INFO: Pod "downwardapi-volume-88437696-9c51-454f-ac19-06303c7d4111" satisfied condition "success or failure"
Jan  6 14:42:00.184: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-88437696-9c51-454f-ac19-06303c7d4111 container client-container: 
STEP: delete the pod
Jan  6 14:42:00.224: INFO: Waiting for pod downwardapi-volume-88437696-9c51-454f-ac19-06303c7d4111 to disappear
Jan  6 14:42:00.269: INFO: Pod downwardapi-volume-88437696-9c51-454f-ac19-06303c7d4111 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:42:00.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-573" for this suite.
Jan  6 14:42:06.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:42:06.586: INFO: namespace projected-573 deletion completed in 6.311003747s

• [SLOW TEST:14.610 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
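
Editor's note: the "podname only" case differs from a plain downwardAPI volume in that the fieldRef rides inside a projected volume. A minimal sketch of such a pod, assuming busybox as the image and illustrative names throughout (the real fixture lives in projected_downwardapi.go):

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedPodnamePod mounts a projected downwardAPI volume that exposes
// metadata.name as the file "podname", then prints it once and exits.
func projectedPodnamePod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path:     "podname",
									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}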
------------------------------
SSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:42:06.586: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Starting the proxy
Jan  6 14:42:06.692: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix146422157/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:42:06.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6666" for this suite.
Jan  6 14:42:12.839: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:42:12.995: INFO: namespace kubectl-6666 deletion completed in 6.209497298s

• [SLOW TEST:6.409 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
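
Editor's note: "retrieving proxy /api/ output" means issuing an HTTP GET over the unix socket that kubectl proxy opened. A self-contained sketch of that client side, with an assumed socket path (the run above uses a temp dir):

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
)

func main() {
	socket := "/tmp/kubectl-proxy.sock" // hypothetical; substitute the --unix-socket path
	client := &http.Client{
		Transport: &http.Transport{
			// Route every request through the unix socket; host and port are ignored.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				return (&net.Dialer{}).DialContext(ctx, "unix", socket)
			},
		},
	}
	resp, err := client.Get("http://unix/api/") // the "unix" host is a placeholder
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body)) // should list the server's API versions
}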
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:42:12.996: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  6 14:42:13.113: INFO: Waiting up to 5m0s for pod "pod-d190e2af-e89d-4ffc-a2d7-02e5ae8d7155" in namespace "emptydir-4221" to be "success or failure"
Jan  6 14:42:13.124: INFO: Pod "pod-d190e2af-e89d-4ffc-a2d7-02e5ae8d7155": Phase="Pending", Reason="", readiness=false. Elapsed: 10.694037ms
Jan  6 14:42:15.141: INFO: Pod "pod-d190e2af-e89d-4ffc-a2d7-02e5ae8d7155": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027835103s
Jan  6 14:42:17.150: INFO: Pod "pod-d190e2af-e89d-4ffc-a2d7-02e5ae8d7155": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036293365s
Jan  6 14:42:19.158: INFO: Pod "pod-d190e2af-e89d-4ffc-a2d7-02e5ae8d7155": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044704315s
Jan  6 14:42:21.165: INFO: Pod "pod-d190e2af-e89d-4ffc-a2d7-02e5ae8d7155": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051398709s
STEP: Saw pod success
Jan  6 14:42:21.165: INFO: Pod "pod-d190e2af-e89d-4ffc-a2d7-02e5ae8d7155" satisfied condition "success or failure"
Jan  6 14:42:21.168: INFO: Trying to get logs from node iruya-node pod pod-d190e2af-e89d-4ffc-a2d7-02e5ae8d7155 container test-container: 
STEP: delete the pod
Jan  6 14:42:21.227: INFO: Waiting for pod pod-d190e2af-e89d-4ffc-a2d7-02e5ae8d7155 to disappear
Jan  6 14:42:21.305: INFO: Pod pod-d190e2af-e89d-4ffc-a2d7-02e5ae8d7155 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:42:21.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4221" for this suite.
Jan  6 14:42:27.346: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:42:27.482: INFO: namespace emptydir-4221 deletion completed in 6.169219998s

• [SLOW TEST:14.486 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
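
Editor's note: "emptydir 0777 on node default medium" boils down to a pod like the following sketch: a throwaway container writing into a disk-backed emptyDir and reporting the file mode. Image, names, and command are illustrative; the suite uses its own mounttest image.

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyDirDefaultPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-example"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a file, force 0777, and print the resulting mode.
				Command:      []string{"sh", "-c", "touch /cache/f && chmod 0777 /cache/f && stat -c %a /cache/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "cache", MountPath: "/cache"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "cache",
				VolumeSource: corev1.VolumeSource{
					// Empty medium string selects the node's default (disk-backed) storage.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
		},
	}
}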
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:42:27.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan  6 14:42:27.564: INFO: Waiting up to 5m0s for pod "pod-434f16c6-703a-4a19-98e4-ca9767d28def" in namespace "emptydir-4430" to be "success or failure"
Jan  6 14:42:27.651: INFO: Pod "pod-434f16c6-703a-4a19-98e4-ca9767d28def": Phase="Pending", Reason="", readiness=false. Elapsed: 86.347415ms
Jan  6 14:42:29.663: INFO: Pod "pod-434f16c6-703a-4a19-98e4-ca9767d28def": Phase="Pending", Reason="", readiness=false. Elapsed: 2.098472737s
Jan  6 14:42:31.672: INFO: Pod "pod-434f16c6-703a-4a19-98e4-ca9767d28def": Phase="Pending", Reason="", readiness=false. Elapsed: 4.107420856s
Jan  6 14:42:33.687: INFO: Pod "pod-434f16c6-703a-4a19-98e4-ca9767d28def": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122604846s
Jan  6 14:42:35.707: INFO: Pod "pod-434f16c6-703a-4a19-98e4-ca9767d28def": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.142444664s
STEP: Saw pod success
Jan  6 14:42:35.707: INFO: Pod "pod-434f16c6-703a-4a19-98e4-ca9767d28def" satisfied condition "success or failure"
Jan  6 14:42:35.715: INFO: Trying to get logs from node iruya-node pod pod-434f16c6-703a-4a19-98e4-ca9767d28def container test-container: 
STEP: delete the pod
Jan  6 14:42:35.825: INFO: Waiting for pod pod-434f16c6-703a-4a19-98e4-ca9767d28def to disappear
Jan  6 14:42:35.831: INFO: Pod pod-434f16c6-703a-4a19-98e4-ca9767d28def no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:42:35.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4430" for this suite.
Jan  6 14:42:41.914: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:42:42.099: INFO: namespace emptydir-4430 deletion completed in 6.259531655s

• [SLOW TEST:14.616 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
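
Editor's note: relative to the previous case, "(non-root,0644,tmpfs)" changes exactly two knobs. A sketch of just those pieces, with a hypothetical UID:

package main

import corev1 "k8s.io/api/core/v1"

func int64Ptr(v int64) *int64 { return &v }

// tmpfsNonRootBits returns the volume source and pod security context that
// distinguish this variant: a memory-backed emptyDir and a non-root user.
func tmpfsNonRootBits() (corev1.VolumeSource, *corev1.PodSecurityContext) {
	vs := corev1.VolumeSource{
		EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}, // tmpfs
	}
	sc := &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)} // illustrative non-root UID
	return vs, sc
}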
------------------------------
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:42:42.099: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward api env vars
Jan  6 14:42:42.213: INFO: Waiting up to 5m0s for pod "downward-api-fbaa8ddf-5eb9-4bd6-9e01-40732aaf988f" in namespace "downward-api-8449" to be "success or failure"
Jan  6 14:42:42.235: INFO: Pod "downward-api-fbaa8ddf-5eb9-4bd6-9e01-40732aaf988f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.977366ms
Jan  6 14:42:44.916: INFO: Pod "downward-api-fbaa8ddf-5eb9-4bd6-9e01-40732aaf988f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.702770849s
Jan  6 14:42:46.925: INFO: Pod "downward-api-fbaa8ddf-5eb9-4bd6-9e01-40732aaf988f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.712429523s
Jan  6 14:42:48.937: INFO: Pod "downward-api-fbaa8ddf-5eb9-4bd6-9e01-40732aaf988f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.724529702s
Jan  6 14:42:50.949: INFO: Pod "downward-api-fbaa8ddf-5eb9-4bd6-9e01-40732aaf988f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.7362893s
STEP: Saw pod success
Jan  6 14:42:50.949: INFO: Pod "downward-api-fbaa8ddf-5eb9-4bd6-9e01-40732aaf988f" satisfied condition "success or failure"
Jan  6 14:42:50.956: INFO: Trying to get logs from node iruya-node pod downward-api-fbaa8ddf-5eb9-4bd6-9e01-40732aaf988f container dapi-container: 
STEP: delete the pod
Jan  6 14:42:51.050: INFO: Waiting for pod downward-api-fbaa8ddf-5eb9-4bd6-9e01-40732aaf988f to disappear
Jan  6 14:42:51.117: INFO: Pod downward-api-fbaa8ddf-5eb9-4bd6-9e01-40732aaf988f no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:42:51.118: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8449" for this suite.
Jan  6 14:42:57.164: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:42:57.355: INFO: namespace downward-api-8449 deletion completed in 6.224454006s

• [SLOW TEST:15.256 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
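
Editor's note: this test's "downward api env vars" resolve limits.cpu and limits.memory through resourceFieldRefs; when the container declares no limits, the kubelet substitutes node allocatable, which is the behavior under test. A sketch of the env declaration:

package main

import corev1 "k8s.io/api/core/v1"

// downwardEnv declares env vars backed by the container's resource limits.
// With no limits set on the container, these fall back to node allocatable.
func downwardEnv() []corev1.EnvVar {
	return []corev1.EnvVar{
		{
			Name: "CPU_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.cpu"},
			},
		},
		{
			Name: "MEMORY_LIMIT",
			ValueFrom: &corev1.EnvVarSource{
				ResourceFieldRef: &corev1.ResourceFieldSelector{Resource: "limits.memory"},
			},
		},
	}
}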
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:42:57.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  6 14:43:06.095: INFO: Successfully updated pod "pod-update-ccc08117-6b03-4e31-8d73-32e4c38c44b3"
STEP: verifying the updated pod is in kubernetes
Jan  6 14:43:06.109: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:43:06.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3170" for this suite.
Jan  6 14:43:18.209: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:43:18.298: INFO: namespace pods-3170 deletion completed in 12.181524824s

• [SLOW TEST:20.942 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
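
Editor's note: "updating the pod" is a read-modify-write against the API server, and because pod objects are also written by the kubelet it should be wrapped in a conflict retry. A sketch using client-go's pre-1.17 (context-free) method signatures, matching the v1.15 vintage of this run; the label key and value are illustrative:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/util/retry"
)

// updatePodLabel mutates one label on a pod, retrying on resourceVersion conflicts.
func updatePodLabel(cs *kubernetes.Clientset, ns, name string) error {
	return retry.RetryOnConflict(retry.DefaultRetry, func() error {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		if pod.Labels == nil {
			pod.Labels = map[string]string{}
		}
		pod.Labels["time"] = "updated" // illustrative change
		_, err = cs.CoreV1().Pods(ns).Update(pod)
		return err
	})
}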
------------------------------
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:43:18.298: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan  6 14:43:18.531: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3412,SelfLink:/api/v1/namespaces/watch-3412/configmaps/e2e-watch-test-label-changed,UID:d284d10e-5a88-494b-a284-303a812df3f0,ResourceVersion:19536446,Generation:0,CreationTimestamp:2020-01-06 14:43:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan  6 14:43:18.532: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3412,SelfLink:/api/v1/namespaces/watch-3412/configmaps/e2e-watch-test-label-changed,UID:d284d10e-5a88-494b-a284-303a812df3f0,ResourceVersion:19536447,Generation:0,CreationTimestamp:2020-01-06 14:43:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan  6 14:43:18.532: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3412,SelfLink:/api/v1/namespaces/watch-3412/configmaps/e2e-watch-test-label-changed,UID:d284d10e-5a88-494b-a284-303a812df3f0,ResourceVersion:19536448,Generation:0,CreationTimestamp:2020-01-06 14:43:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan  6 14:43:28.608: INFO: Got : ADDED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3412,SelfLink:/api/v1/namespaces/watch-3412/configmaps/e2e-watch-test-label-changed,UID:d284d10e-5a88-494b-a284-303a812df3f0,ResourceVersion:19536463,Generation:0,CreationTimestamp:2020-01-06 14:43:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  6 14:43:28.608: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3412,SelfLink:/api/v1/namespaces/watch-3412/configmaps/e2e-watch-test-label-changed,UID:d284d10e-5a88-494b-a284-303a812df3f0,ResourceVersion:19536464,Generation:0,CreationTimestamp:2020-01-06 14:43:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan  6 14:43:28.609: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-label-changed,GenerateName:,Namespace:watch-3412,SelfLink:/api/v1/namespaces/watch-3412/configmaps/e2e-watch-test-label-changed,UID:d284d10e-5a88-494b-a284-303a812df3f0,ResourceVersion:19536465,Generation:0,CreationTimestamp:2020-01-06 14:43:18 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: label-changed-and-restored,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:43:28.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-3412" for this suite.
Jan  6 14:43:34.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:43:34.833: INFO: namespace watch-3412 deletion completed in 6.214877213s

• [SLOW TEST:16.534 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
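
Editor's note: the DELETED event above is the interesting bit: the configmap still exists, but relabeling it out of the selector is surfaced to a label-filtered watch as a deletion. A sketch of such a watch, again with pre-1.17 client-go signatures:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchLabelled streams events for configmaps matching one label pair.
// Objects edited out of the selector arrive as DELETED events.
func watchLabelled(cs *kubernetes.Clientset, ns string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Println("Got :", ev.Type) // ADDED / MODIFIED / DELETED, as in the log
	}
	return nil
}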
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:43:34.834: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-4efeaf66-2bee-41de-bdbb-4c2f860f2730
STEP: Creating a pod to test consume configMaps
Jan  6 14:43:34.978: INFO: Waiting up to 5m0s for pod "pod-configmaps-92ba353b-cfb7-45aa-ac76-8dca5e5e98af" in namespace "configmap-4865" to be "success or failure"
Jan  6 14:43:34.982: INFO: Pod "pod-configmaps-92ba353b-cfb7-45aa-ac76-8dca5e5e98af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.273647ms
Jan  6 14:43:36.990: INFO: Pod "pod-configmaps-92ba353b-cfb7-45aa-ac76-8dca5e5e98af": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012714678s
Jan  6 14:43:39.002: INFO: Pod "pod-configmaps-92ba353b-cfb7-45aa-ac76-8dca5e5e98af": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024250502s
Jan  6 14:43:41.181: INFO: Pod "pod-configmaps-92ba353b-cfb7-45aa-ac76-8dca5e5e98af": Phase="Pending", Reason="", readiness=false. Elapsed: 6.203380363s
Jan  6 14:43:43.189: INFO: Pod "pod-configmaps-92ba353b-cfb7-45aa-ac76-8dca5e5e98af": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.211164543s
STEP: Saw pod success
Jan  6 14:43:43.189: INFO: Pod "pod-configmaps-92ba353b-cfb7-45aa-ac76-8dca5e5e98af" satisfied condition "success or failure"
Jan  6 14:43:43.192: INFO: Trying to get logs from node iruya-node pod pod-configmaps-92ba353b-cfb7-45aa-ac76-8dca5e5e98af container configmap-volume-test: 
STEP: delete the pod
Jan  6 14:43:43.251: INFO: Waiting for pod pod-configmaps-92ba353b-cfb7-45aa-ac76-8dca5e5e98af to disappear
Jan  6 14:43:43.439: INFO: Pod pod-configmaps-92ba353b-cfb7-45aa-ac76-8dca5e5e98af no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:43:43.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4865" for this suite.
Jan  6 14:43:49.479: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:43:49.651: INFO: namespace configmap-4865 deletion completed in 6.202956623s

• [SLOW TEST:14.818 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
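
Editor's note: defaultMode governs the permission bits of the files a configMap volume materializes. A sketch of the volume declaration, with illustrative names and a restrictive 0400 mode:

package main

import corev1 "k8s.io/api/core/v1"

func int32Ptr(v int32) *int32 { return &v }

// configMapVolume mounts the named configmap with files created as 0400
// instead of the 0644 default.
func configMapVolume(cmName string) corev1.Volume {
	return corev1.Volume{
		Name: "configmap-volume",
		VolumeSource: corev1.VolumeSource{
			ConfigMap: &corev1.ConfigMapVolumeSource{
				LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
				DefaultMode:          int32Ptr(0400),
			},
		},
	}
}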
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:43:49.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan  6 14:43:49.805: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6216,SelfLink:/api/v1/namespaces/watch-6216/configmaps/e2e-watch-test-resource-version,UID:fa41ec88-8251-4c97-a484-88c6c0db7a3a,ResourceVersion:19536531,Generation:0,CreationTimestamp:2020-01-06 14:43:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan  6 14:43:49.806: INFO: Got : DELETED &ConfigMap{ObjectMeta:k8s_io_apimachinery_pkg_apis_meta_v1.ObjectMeta{Name:e2e-watch-test-resource-version,GenerateName:,Namespace:watch-6216,SelfLink:/api/v1/namespaces/watch-6216/configmaps/e2e-watch-test-resource-version,UID:fa41ec88-8251-4c97-a484-88c6c0db7a3a,ResourceVersion:19536532,Generation:0,CreationTimestamp:2020-01-06 14:43:49 +0000 UTC,DeletionTimestamp:,DeletionGracePeriodSeconds:nil,Labels:map[string]string{watch-this-configmap: from-resource-version,},Annotations:map[string]string{},OwnerReferences:[],Finalizers:[],ClusterName:,Initializers:nil,ManagedFields:[],},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:43:49.806: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6216" for this suite.
Jan  6 14:43:55.832: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:43:55.938: INFO: namespace watch-6216 deletion completed in 6.128400852s

• [SLOW TEST:6.287 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
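
Editor's note: only MODIFIED and DELETED arrive above because the watch was opened at the resourceVersion returned by the first update, so earlier history is not replayed. The mechanism, sketched with pre-1.17 client-go signatures:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
)

// watchFrom opens a watch that replays every change made after the given
// resourceVersion, e.g. the version returned by an earlier update call.
func watchFrom(cs *kubernetes.Clientset, ns, rv string) (watch.Interface, error) {
	return cs.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{ResourceVersion: rv})
}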
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:43:55.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0106 14:44:11.707524       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  6 14:44:11.707: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:44:11.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3557" for this suite.
Jan  6 14:44:25.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:44:25.256: INFO: namespace gc-3557 deletion completed in 12.970291088s

• [SLOW TEST:29.317 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
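
Editor's note: the "half of pods ... as owner as well" step means those pods carry two ownerReferences, so foreground-deleting one owner must not take them down. A sketch of the two operations involved, with pre-1.17 signatures and illustrative arguments:

package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func boolPtr(b bool) *bool { return &b }

// addOwnerAndDelete appends a second owner (the RC that stays) to a pod, then
// foreground-deletes the doomed RC; the GC must spare the doubly-owned pod.
func addOwnerAndDelete(cs *kubernetes.Clientset, ns string, pod *corev1.Pod, stay, doomed *corev1.ReplicationController) error {
	pod.OwnerReferences = append(pod.OwnerReferences, metav1.OwnerReference{
		APIVersion: "v1",
		Kind:       "ReplicationController",
		Name:       stay.Name,
		UID:        stay.UID,
		Controller: boolPtr(false),
	})
	if _, err := cs.CoreV1().Pods(ns).Update(pod); err != nil {
		return err
	}
	fg := metav1.DeletePropagationForeground
	return cs.CoreV1().ReplicationControllers(ns).Delete(doomed.Name, &metav1.DeleteOptions{PropagationPolicy: &fg})
}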
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:44:25.257: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:60
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:75
STEP: Creating service test in namespace statefulset-4208
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-4208
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-4208
Jan  6 14:44:25.504: INFO: Found 0 stateful pods, waiting for 1
Jan  6 14:44:35.513: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan  6 14:44:35.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4208 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  6 14:44:38.118: INFO: stderr: "I0106 14:44:37.667307    3721 log.go:172] (0xc00012a790) (0xc00065e140) Create stream\nI0106 14:44:37.667525    3721 log.go:172] (0xc00012a790) (0xc00065e140) Stream added, broadcasting: 1\nI0106 14:44:37.678508    3721 log.go:172] (0xc00012a790) Reply frame received for 1\nI0106 14:44:37.678591    3721 log.go:172] (0xc00012a790) (0xc0007160a0) Create stream\nI0106 14:44:37.678605    3721 log.go:172] (0xc00012a790) (0xc0007160a0) Stream added, broadcasting: 3\nI0106 14:44:37.681801    3721 log.go:172] (0xc00012a790) Reply frame received for 3\nI0106 14:44:37.681847    3721 log.go:172] (0xc00012a790) (0xc00036e000) Create stream\nI0106 14:44:37.681874    3721 log.go:172] (0xc00012a790) (0xc00036e000) Stream added, broadcasting: 5\nI0106 14:44:37.684120    3721 log.go:172] (0xc00012a790) Reply frame received for 5\nI0106 14:44:37.875532    3721 log.go:172] (0xc00012a790) Data frame received for 5\nI0106 14:44:37.875701    3721 log.go:172] (0xc00036e000) (5) Data frame handling\nI0106 14:44:37.875743    3721 log.go:172] (0xc00036e000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0106 14:44:37.981350    3721 log.go:172] (0xc00012a790) Data frame received for 3\nI0106 14:44:37.981446    3721 log.go:172] (0xc0007160a0) (3) Data frame handling\nI0106 14:44:37.981490    3721 log.go:172] (0xc0007160a0) (3) Data frame sent\nI0106 14:44:38.105303    3721 log.go:172] (0xc00012a790) Data frame received for 1\nI0106 14:44:38.105442    3721 log.go:172] (0xc00012a790) (0xc0007160a0) Stream removed, broadcasting: 3\nI0106 14:44:38.105614    3721 log.go:172] (0xc00012a790) (0xc00036e000) Stream removed, broadcasting: 5\nI0106 14:44:38.105661    3721 log.go:172] (0xc00065e140) (1) Data frame handling\nI0106 14:44:38.105690    3721 log.go:172] (0xc00065e140) (1) Data frame sent\nI0106 14:44:38.105706    3721 log.go:172] (0xc00012a790) (0xc00065e140) Stream removed, broadcasting: 1\nI0106 14:44:38.105721    3721 log.go:172] (0xc00012a790) Go away received\nI0106 14:44:38.107290    3721 log.go:172] (0xc00012a790) (0xc00065e140) Stream removed, broadcasting: 1\nI0106 14:44:38.107300    3721 log.go:172] (0xc00012a790) (0xc0007160a0) Stream removed, broadcasting: 3\nI0106 14:44:38.107311    3721 log.go:172] (0xc00012a790) (0xc00036e000) Stream removed, broadcasting: 5\n"
Jan  6 14:44:38.118: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  6 14:44:38.118: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  6 14:44:38.130: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan  6 14:44:48.174: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  6 14:44:48.174: INFO: Waiting for statefulset status.replicas updated to 0
Jan  6 14:44:48.237: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999758s
Jan  6 14:44:49.246: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.95116815s
Jan  6 14:44:50.267: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.942839852s
Jan  6 14:44:51.283: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.921576887s
Jan  6 14:44:52.290: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.905843996s
Jan  6 14:44:53.693: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.898563008s
Jan  6 14:44:54.718: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.495143279s
Jan  6 14:44:55.728: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.470688218s
Jan  6 14:44:56.741: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.460240162s
Jan  6 14:44:57.755: INFO: Verifying statefulset ss doesn't scale past 1 for another 447.703302ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-4208
Jan  6 14:44:58.770: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4208 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 14:44:59.327: INFO: stderr: "I0106 14:44:59.027451    3745 log.go:172] (0xc000a28000) (0xc000a1c1e0) Create stream\nI0106 14:44:59.027708    3745 log.go:172] (0xc000a28000) (0xc000a1c1e0) Stream added, broadcasting: 1\nI0106 14:44:59.038144    3745 log.go:172] (0xc000a28000) Reply frame received for 1\nI0106 14:44:59.038254    3745 log.go:172] (0xc000a28000) (0xc000a1c320) Create stream\nI0106 14:44:59.038281    3745 log.go:172] (0xc000a28000) (0xc000a1c320) Stream added, broadcasting: 3\nI0106 14:44:59.040569    3745 log.go:172] (0xc000a28000) Reply frame received for 3\nI0106 14:44:59.040599    3745 log.go:172] (0xc000a28000) (0xc000748280) Create stream\nI0106 14:44:59.040611    3745 log.go:172] (0xc000a28000) (0xc000748280) Stream added, broadcasting: 5\nI0106 14:44:59.043539    3745 log.go:172] (0xc000a28000) Reply frame received for 5\nI0106 14:44:59.146601    3745 log.go:172] (0xc000a28000) Data frame received for 5\nI0106 14:44:59.147009    3745 log.go:172] (0xc000748280) (5) Data frame handling\nI0106 14:44:59.147140    3745 log.go:172] (0xc000748280) (5) Data frame sent\nI0106 14:44:59.147205    3745 log.go:172] (0xc000a28000) Data frame received for 3\nI0106 14:44:59.147236    3745 log.go:172] (0xc000a1c320) (3) Data frame handling\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0106 14:44:59.147336    3745 log.go:172] (0xc000a1c320) (3) Data frame sent\nI0106 14:44:59.303095    3745 log.go:172] (0xc000a28000) Data frame received for 1\nI0106 14:44:59.303360    3745 log.go:172] (0xc000a28000) (0xc000a1c320) Stream removed, broadcasting: 3\nI0106 14:44:59.303787    3745 log.go:172] (0xc000a1c1e0) (1) Data frame handling\nI0106 14:44:59.303977    3745 log.go:172] (0xc000a1c1e0) (1) Data frame sent\nI0106 14:44:59.304097    3745 log.go:172] (0xc000a28000) (0xc000748280) Stream removed, broadcasting: 5\nI0106 14:44:59.304252    3745 log.go:172] (0xc000a28000) (0xc000a1c1e0) Stream removed, broadcasting: 1\nI0106 14:44:59.304305    3745 log.go:172] (0xc000a28000) Go away received\nI0106 14:44:59.307622    3745 log.go:172] (0xc000a28000) (0xc000a1c1e0) Stream removed, broadcasting: 1\nI0106 14:44:59.307667    3745 log.go:172] (0xc000a28000) (0xc000a1c320) Stream removed, broadcasting: 3\nI0106 14:44:59.307688    3745 log.go:172] (0xc000a28000) (0xc000748280) Stream removed, broadcasting: 5\n"
Jan  6 14:44:59.327: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  6 14:44:59.327: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  6 14:44:59.349: INFO: Found 2 stateful pods, waiting for 3
Jan  6 14:45:09.360: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 14:45:09.360: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 14:45:09.360: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan  6 14:45:19.360: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 14:45:19.360: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan  6 14:45:19.360: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan  6 14:45:19.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4208 ss-0 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  6 14:45:19.945: INFO: stderr: "I0106 14:45:19.595326    3766 log.go:172] (0xc000116dc0) (0xc00035a6e0) Create stream\nI0106 14:45:19.595802    3766 log.go:172] (0xc000116dc0) (0xc00035a6e0) Stream added, broadcasting: 1\nI0106 14:45:19.606593    3766 log.go:172] (0xc000116dc0) Reply frame received for 1\nI0106 14:45:19.606703    3766 log.go:172] (0xc000116dc0) (0xc000850000) Create stream\nI0106 14:45:19.606718    3766 log.go:172] (0xc000116dc0) (0xc000850000) Stream added, broadcasting: 3\nI0106 14:45:19.608910    3766 log.go:172] (0xc000116dc0) Reply frame received for 3\nI0106 14:45:19.608937    3766 log.go:172] (0xc000116dc0) (0xc0008500a0) Create stream\nI0106 14:45:19.608948    3766 log.go:172] (0xc000116dc0) (0xc0008500a0) Stream added, broadcasting: 5\nI0106 14:45:19.610675    3766 log.go:172] (0xc000116dc0) Reply frame received for 5\nI0106 14:45:19.752860    3766 log.go:172] (0xc000116dc0) Data frame received for 3\nI0106 14:45:19.753245    3766 log.go:172] (0xc000850000) (3) Data frame handling\nI0106 14:45:19.753424    3766 log.go:172] (0xc000116dc0) Data frame received for 5\nI0106 14:45:19.753482    3766 log.go:172] (0xc0008500a0) (5) Data frame handling\nI0106 14:45:19.753505    3766 log.go:172] (0xc0008500a0) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0106 14:45:19.753565    3766 log.go:172] (0xc000850000) (3) Data frame sent\nI0106 14:45:19.933934    3766 log.go:172] (0xc000116dc0) (0xc0008500a0) Stream removed, broadcasting: 5\nI0106 14:45:19.934163    3766 log.go:172] (0xc000116dc0) Data frame received for 1\nI0106 14:45:19.934459    3766 log.go:172] (0xc000116dc0) (0xc000850000) Stream removed, broadcasting: 3\nI0106 14:45:19.934507    3766 log.go:172] (0xc00035a6e0) (1) Data frame handling\nI0106 14:45:19.934579    3766 log.go:172] (0xc00035a6e0) (1) Data frame sent\nI0106 14:45:19.934593    3766 log.go:172] (0xc000116dc0) (0xc00035a6e0) Stream removed, broadcasting: 1\nI0106 14:45:19.934613    3766 log.go:172] (0xc000116dc0) Go away received\nI0106 14:45:19.936232    3766 log.go:172] (0xc000116dc0) (0xc00035a6e0) Stream removed, broadcasting: 1\nI0106 14:45:19.936245    3766 log.go:172] (0xc000116dc0) (0xc000850000) Stream removed, broadcasting: 3\nI0106 14:45:19.936257    3766 log.go:172] (0xc000116dc0) (0xc0008500a0) Stream removed, broadcasting: 5\n"
Jan  6 14:45:19.945: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  6 14:45:19.945: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-0: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  6 14:45:19.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4208 ss-1 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  6 14:45:20.497: INFO: stderr: "I0106 14:45:20.160934    3785 log.go:172] (0xc000a8e420) (0xc00037c6e0) Create stream\nI0106 14:45:20.161161    3785 log.go:172] (0xc000a8e420) (0xc00037c6e0) Stream added, broadcasting: 1\nI0106 14:45:20.165546    3785 log.go:172] (0xc000a8e420) Reply frame received for 1\nI0106 14:45:20.165626    3785 log.go:172] (0xc000a8e420) (0xc0006863c0) Create stream\nI0106 14:45:20.165638    3785 log.go:172] (0xc000a8e420) (0xc0006863c0) Stream added, broadcasting: 3\nI0106 14:45:20.166710    3785 log.go:172] (0xc000a8e420) Reply frame received for 3\nI0106 14:45:20.166760    3785 log.go:172] (0xc000a8e420) (0xc0009e2000) Create stream\nI0106 14:45:20.166786    3785 log.go:172] (0xc000a8e420) (0xc0009e2000) Stream added, broadcasting: 5\nI0106 14:45:20.168048    3785 log.go:172] (0xc000a8e420) Reply frame received for 5\nI0106 14:45:20.335731    3785 log.go:172] (0xc000a8e420) Data frame received for 5\nI0106 14:45:20.335825    3785 log.go:172] (0xc0009e2000) (5) Data frame handling\nI0106 14:45:20.335866    3785 log.go:172] (0xc0009e2000) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0106 14:45:20.393021    3785 log.go:172] (0xc000a8e420) Data frame received for 3\nI0106 14:45:20.393114    3785 log.go:172] (0xc0006863c0) (3) Data frame handling\nI0106 14:45:20.393142    3785 log.go:172] (0xc0006863c0) (3) Data frame sent\nI0106 14:45:20.482472    3785 log.go:172] (0xc000a8e420) Data frame received for 1\nI0106 14:45:20.482659    3785 log.go:172] (0xc000a8e420) (0xc0006863c0) Stream removed, broadcasting: 3\nI0106 14:45:20.482909    3785 log.go:172] (0xc000a8e420) (0xc0009e2000) Stream removed, broadcasting: 5\nI0106 14:45:20.483023    3785 log.go:172] (0xc00037c6e0) (1) Data frame handling\nI0106 14:45:20.483064    3785 log.go:172] (0xc00037c6e0) (1) Data frame sent\nI0106 14:45:20.483079    3785 log.go:172] (0xc000a8e420) (0xc00037c6e0) Stream removed, broadcasting: 1\nI0106 14:45:20.483104    3785 log.go:172] (0xc000a8e420) Go away received\nI0106 14:45:20.486699    3785 log.go:172] (0xc000a8e420) (0xc00037c6e0) Stream removed, broadcasting: 1\nI0106 14:45:20.486726    3785 log.go:172] (0xc000a8e420) (0xc0006863c0) Stream removed, broadcasting: 3\nI0106 14:45:20.486757    3785 log.go:172] (0xc000a8e420) (0xc0009e2000) Stream removed, broadcasting: 5\n"
Jan  6 14:45:20.497: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  6 14:45:20.497: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-1: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  6 14:45:20.497: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4208 ss-2 -- /bin/sh -x -c mv -v /usr/share/nginx/html/index.html /tmp/ || true'
Jan  6 14:45:21.094: INFO: stderr: "I0106 14:45:20.797581    3805 log.go:172] (0xc000a1e0b0) (0xc0009560a0) Create stream\nI0106 14:45:20.797777    3805 log.go:172] (0xc000a1e0b0) (0xc0009560a0) Stream added, broadcasting: 1\nI0106 14:45:20.802196    3805 log.go:172] (0xc000a1e0b0) Reply frame received for 1\nI0106 14:45:20.802228    3805 log.go:172] (0xc000a1e0b0) (0xc000a14000) Create stream\nI0106 14:45:20.802242    3805 log.go:172] (0xc000a1e0b0) (0xc000a14000) Stream added, broadcasting: 3\nI0106 14:45:20.803386    3805 log.go:172] (0xc000a1e0b0) Reply frame received for 3\nI0106 14:45:20.803417    3805 log.go:172] (0xc000a1e0b0) (0xc000956140) Create stream\nI0106 14:45:20.803432    3805 log.go:172] (0xc000a1e0b0) (0xc000956140) Stream added, broadcasting: 5\nI0106 14:45:20.804368    3805 log.go:172] (0xc000a1e0b0) Reply frame received for 5\nI0106 14:45:20.906761    3805 log.go:172] (0xc000a1e0b0) Data frame received for 5\nI0106 14:45:20.906944    3805 log.go:172] (0xc000956140) (5) Data frame handling\nI0106 14:45:20.906970    3805 log.go:172] (0xc000956140) (5) Data frame sent\n+ mv -v /usr/share/nginx/html/index.html /tmp/\nI0106 14:45:20.939184    3805 log.go:172] (0xc000a1e0b0) Data frame received for 3\nI0106 14:45:20.939236    3805 log.go:172] (0xc000a14000) (3) Data frame handling\nI0106 14:45:20.939264    3805 log.go:172] (0xc000a14000) (3) Data frame sent\nI0106 14:45:21.081491    3805 log.go:172] (0xc000a1e0b0) (0xc000a14000) Stream removed, broadcasting: 3\nI0106 14:45:21.081963    3805 log.go:172] (0xc000a1e0b0) Data frame received for 1\nI0106 14:45:21.081994    3805 log.go:172] (0xc0009560a0) (1) Data frame handling\nI0106 14:45:21.082020    3805 log.go:172] (0xc0009560a0) (1) Data frame sent\nI0106 14:45:21.082037    3805 log.go:172] (0xc000a1e0b0) (0xc0009560a0) Stream removed, broadcasting: 1\nI0106 14:45:21.082799    3805 log.go:172] (0xc000a1e0b0) (0xc000956140) Stream removed, broadcasting: 5\nI0106 14:45:21.082848    3805 log.go:172] (0xc000a1e0b0) Go away received\nI0106 14:45:21.083585    3805 log.go:172] (0xc000a1e0b0) (0xc0009560a0) Stream removed, broadcasting: 1\nI0106 14:45:21.083616    3805 log.go:172] (0xc000a1e0b0) (0xc000a14000) Stream removed, broadcasting: 3\nI0106 14:45:21.083633    3805 log.go:172] (0xc000a1e0b0) (0xc000956140) Stream removed, broadcasting: 5\n"
Jan  6 14:45:21.094: INFO: stdout: "'/usr/share/nginx/html/index.html' -> '/tmp/index.html'\n"
Jan  6 14:45:21.094: INFO: stdout of mv -v /usr/share/nginx/html/index.html /tmp/ || true on ss-2: '/usr/share/nginx/html/index.html' -> '/tmp/index.html'

Jan  6 14:45:21.094: INFO: Waiting for statefulset status.replicas updated to 0
Jan  6 14:45:21.099: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan  6 14:45:31.114: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan  6 14:45:31.114: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan  6 14:45:31.114: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan  6 14:45:31.189: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999469s
Jan  6 14:45:32.211: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.973883258s
Jan  6 14:45:33.230: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.952292104s
Jan  6 14:45:34.239: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.932697369s
Jan  6 14:45:35.249: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.924123046s
Jan  6 14:45:37.078: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.913967617s
Jan  6 14:45:38.088: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.08454562s
Jan  6 14:45:39.104: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.074907058s
Jan  6 14:45:40.115: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.059173713s
Jan  6 14:45:41.124: INFO: Verifying statefulset ss doesn't scale past 3 for another 48.441676ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-4208
Jan  6 14:45:42.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4208 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 14:45:42.807: INFO: stderr: "I0106 14:45:42.450934    3825 log.go:172] (0xc000116630) (0xc0009d6640) Create stream\nI0106 14:45:42.451300    3825 log.go:172] (0xc000116630) (0xc0009d6640) Stream added, broadcasting: 1\nI0106 14:45:42.458125    3825 log.go:172] (0xc000116630) Reply frame received for 1\nI0106 14:45:42.458210    3825 log.go:172] (0xc000116630) (0xc0005f2320) Create stream\nI0106 14:45:42.458246    3825 log.go:172] (0xc000116630) (0xc0005f2320) Stream added, broadcasting: 3\nI0106 14:45:42.461522    3825 log.go:172] (0xc000116630) Reply frame received for 3\nI0106 14:45:42.461592    3825 log.go:172] (0xc000116630) (0xc0005ee000) Create stream\nI0106 14:45:42.461627    3825 log.go:172] (0xc000116630) (0xc0005ee000) Stream added, broadcasting: 5\nI0106 14:45:42.465384    3825 log.go:172] (0xc000116630) Reply frame received for 5\nI0106 14:45:42.658182    3825 log.go:172] (0xc000116630) Data frame received for 5\nI0106 14:45:42.658327    3825 log.go:172] (0xc0005ee000) (5) Data frame handling\nI0106 14:45:42.658379    3825 log.go:172] (0xc0005ee000) (5) Data frame sent\nI0106 14:45:42.658422    3825 log.go:172] (0xc000116630) Data frame received for 3\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0106 14:45:42.658441    3825 log.go:172] (0xc0005f2320) (3) Data frame handling\nI0106 14:45:42.658474    3825 log.go:172] (0xc0005f2320) (3) Data frame sent\nI0106 14:45:42.790459    3825 log.go:172] (0xc000116630) (0xc0005f2320) Stream removed, broadcasting: 3\nI0106 14:45:42.790828    3825 log.go:172] (0xc000116630) Data frame received for 1\nI0106 14:45:42.790913    3825 log.go:172] (0xc0009d6640) (1) Data frame handling\nI0106 14:45:42.790958    3825 log.go:172] (0xc0009d6640) (1) Data frame sent\nI0106 14:45:42.791208    3825 log.go:172] (0xc000116630) (0xc0009d6640) Stream removed, broadcasting: 1\nI0106 14:45:42.791454    3825 log.go:172] (0xc000116630) (0xc0005ee000) Stream removed, broadcasting: 5\nI0106 14:45:42.791527    3825 log.go:172] (0xc000116630) Go away received\nI0106 14:45:42.793626    3825 log.go:172] (0xc000116630) (0xc0009d6640) Stream removed, broadcasting: 1\nI0106 14:45:42.793667    3825 log.go:172] (0xc000116630) (0xc0005f2320) Stream removed, broadcasting: 3\nI0106 14:45:42.793695    3825 log.go:172] (0xc000116630) (0xc0005ee000) Stream removed, broadcasting: 5\n"
Jan  6 14:45:42.807: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  6 14:45:42.807: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-0: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  6 14:45:42.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4208 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 14:45:43.169: INFO: stderr: "I0106 14:45:42.977143    3848 log.go:172] (0xc00091a370) (0xc0008de6e0) Create stream\nI0106 14:45:42.977303    3848 log.go:172] (0xc00091a370) (0xc0008de6e0) Stream added, broadcasting: 1\nI0106 14:45:42.980028    3848 log.go:172] (0xc00091a370) Reply frame received for 1\nI0106 14:45:42.980072    3848 log.go:172] (0xc00091a370) (0xc0006301e0) Create stream\nI0106 14:45:42.980082    3848 log.go:172] (0xc00091a370) (0xc0006301e0) Stream added, broadcasting: 3\nI0106 14:45:42.981835    3848 log.go:172] (0xc00091a370) Reply frame received for 3\nI0106 14:45:42.981923    3848 log.go:172] (0xc00091a370) (0xc0008de780) Create stream\nI0106 14:45:42.981934    3848 log.go:172] (0xc00091a370) (0xc0008de780) Stream added, broadcasting: 5\nI0106 14:45:42.987203    3848 log.go:172] (0xc00091a370) Reply frame received for 5\nI0106 14:45:43.080624    3848 log.go:172] (0xc00091a370) Data frame received for 5\nI0106 14:45:43.080683    3848 log.go:172] (0xc0008de780) (5) Data frame handling\nI0106 14:45:43.080699    3848 log.go:172] (0xc0008de780) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/share/nginx/html/\nI0106 14:45:43.080720    3848 log.go:172] (0xc00091a370) Data frame received for 3\nI0106 14:45:43.080726    3848 log.go:172] (0xc0006301e0) (3) Data frame handling\nI0106 14:45:43.080739    3848 log.go:172] (0xc0006301e0) (3) Data frame sent\nI0106 14:45:43.159303    3848 log.go:172] (0xc00091a370) Data frame received for 1\nI0106 14:45:43.159746    3848 log.go:172] (0xc00091a370) (0xc0006301e0) Stream removed, broadcasting: 3\nI0106 14:45:43.159951    3848 log.go:172] (0xc0008de6e0) (1) Data frame handling\nI0106 14:45:43.160030    3848 log.go:172] (0xc0008de6e0) (1) Data frame sent\nI0106 14:45:43.160057    3848 log.go:172] (0xc00091a370) (0xc0008de6e0) Stream removed, broadcasting: 1\nI0106 14:45:43.161648    3848 log.go:172] (0xc00091a370) (0xc0008de780) Stream removed, broadcasting: 5\nI0106 14:45:43.161754    3848 log.go:172] (0xc00091a370) Go away received\nI0106 14:45:43.161953    3848 log.go:172] (0xc00091a370) (0xc0008de6e0) Stream removed, broadcasting: 1\nI0106 14:45:43.161988    3848 log.go:172] (0xc00091a370) (0xc0006301e0) Stream removed, broadcasting: 3\nI0106 14:45:43.161996    3848 log.go:172] (0xc00091a370) (0xc0008de780) Stream removed, broadcasting: 5\n"
Jan  6 14:45:43.170: INFO: stdout: "'/tmp/index.html' -> '/usr/share/nginx/html/index.html'\n"
Jan  6 14:45:43.170: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-1: '/tmp/index.html' -> '/usr/share/nginx/html/index.html'

Jan  6 14:45:43.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4208 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 14:45:43.708: INFO: rc: 126
Jan  6 14:45:43.708: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4208 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []   cannot exec in a stopped state: unknown
Command stdout:
cannot exec in a stopped state: unknown

stderr:
I0106 14:45:43.656043    3868 log.go:172] (0xc0007b4630) (0xc000430780) Create stream
I0106 14:45:43.656333    3868 log.go:172] (0xc0007b4630) (0xc000430780) Stream added, broadcasting: 1
I0106 14:45:43.662260    3868 log.go:172] (0xc0007b4630) Reply frame received for 1
I0106 14:45:43.662290    3868 log.go:172] (0xc0007b4630) (0xc00070c500) Create stream
I0106 14:45:43.662299    3868 log.go:172] (0xc0007b4630) (0xc00070c500) Stream added, broadcasting: 3
I0106 14:45:43.665347    3868 log.go:172] (0xc0007b4630) Reply frame received for 3
I0106 14:45:43.665370    3868 log.go:172] (0xc0007b4630) (0xc0008b6000) Create stream
I0106 14:45:43.665380    3868 log.go:172] (0xc0007b4630) (0xc0008b6000) Stream added, broadcasting: 5
I0106 14:45:43.666756    3868 log.go:172] (0xc0007b4630) Reply frame received for 5
I0106 14:45:43.698316    3868 log.go:172] (0xc0007b4630) Data frame received for 3
I0106 14:45:43.698596    3868 log.go:172] (0xc00070c500) (3) Data frame handling
I0106 14:45:43.698672    3868 log.go:172] (0xc00070c500) (3) Data frame sent
I0106 14:45:43.698790    3868 log.go:172] (0xc0007b4630) Data frame received for 1
I0106 14:45:43.698805    3868 log.go:172] (0xc000430780) (1) Data frame handling
I0106 14:45:43.698819    3868 log.go:172] (0xc000430780) (1) Data frame sent
I0106 14:45:43.699433    3868 log.go:172] (0xc0007b4630) (0xc000430780) Stream removed, broadcasting: 1
I0106 14:45:43.699819    3868 log.go:172] (0xc0007b4630) (0xc00070c500) Stream removed, broadcasting: 3
I0106 14:45:43.701761    3868 log.go:172] (0xc0007b4630) (0xc0008b6000) Stream removed, broadcasting: 5
I0106 14:45:43.701804    3868 log.go:172] (0xc0007b4630) (0xc000430780) Stream removed, broadcasting: 1
I0106 14:45:43.701818    3868 log.go:172] (0xc0007b4630) (0xc00070c500) Stream removed, broadcasting: 3
I0106 14:45:43.701826    3868 log.go:172] (0xc0007b4630) (0xc0008b6000) Stream removed, broadcasting: 5
command terminated with exit code 126

error:
exit status 126
Jan  6 14:45:53.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4208 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 14:45:53.929: INFO: rc: 1
Jan  6 14:45:53.929: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4208 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true] []    Error from server (NotFound): pods "ss-2" not found
 []  0xc002f36150 exit status 1   true [0xc000757b08 0xc000757c60 0xc000757e80] [0xc000757b08 0xc000757c60 0xc000757e80] [0xc000757bc0 0xc000757df8] [0xba6c50 0xba6c50] 0xc001d3d860 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-2" not found

error:
exit status 1
Jan  6 14:46:03 - 14:50:39: INFO: (RunHostCmd retried every 10s, 28 further attempts elided; each run returned rc: 1 with empty stdout, stderr 'Error from server (NotFound): pods "ss-2" not found', and exit status 1)
Jan  6 14:50:49.032: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4208 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/share/nginx/html/ || true'
Jan  6 14:50:49.195: INFO: rc: 1
Jan  6 14:50:49.196: INFO: stdout of mv -v /tmp/index.html /usr/share/nginx/html/ || true on ss-2: 
Jan  6 14:50:49.196: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:86
Jan  6 14:50:49.210: INFO: Deleting all statefulset in ns statefulset-4208
Jan  6 14:50:49.215: INFO: Scaling statefulset ss to 0
Jan  6 14:50:49.222: INFO: Waiting for statefulset status.replicas updated to 0
Jan  6 14:50:49.225: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:50:49.246: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4208" for this suite.
Jan  6 14:50:55.328: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:50:55.471: INFO: namespace statefulset-4208 deletion completed in 6.221510852s

• [SLOW TEST:390.215 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
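The scale-down phase above polls RunHostCmd every 10 seconds until the write succeeds or the pod is gone for good. A minimal shell sketch of that retry pattern, reusing the namespace, pod, and command from the log (the 30-attempt cap is an assumption of this sketch, not the framework's):

  # Retry the logged kubectl exec every 10s, as RunHostCmd does above.
  for attempt in $(seq 1 30); do
    if kubectl --namespace=statefulset-4208 exec ss-2 -- \
        /bin/sh -c 'mv -v /tmp/index.html /usr/share/nginx/html/ || true'; then
      break
    fi
    echo "attempt ${attempt} failed; retrying in 10s"
    sleep 10
  done

Since the logged command itself ends in "|| true", it only fails when kubectl cannot exec at all (a stopped or deleted pod), which is exactly what the retries above were waiting out while ss-2 was scaled away.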
SSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:50:55.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-9007
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  6 14:50:55.569: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  6 14:51:29.856: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-9007 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 14:51:29.856: INFO: >>> kubeConfig: /root/.kube/config
I0106 14:51:29.953384       8 log.go:172] (0xc002e86a50) (0xc002ac0780) Create stream
I0106 14:51:29.953466       8 log.go:172] (0xc002e86a50) (0xc002ac0780) Stream added, broadcasting: 1
I0106 14:51:29.963383       8 log.go:172] (0xc002e86a50) Reply frame received for 1
I0106 14:51:29.963460       8 log.go:172] (0xc002e86a50) (0xc00186a6e0) Create stream
I0106 14:51:29.963470       8 log.go:172] (0xc002e86a50) (0xc00186a6e0) Stream added, broadcasting: 3
I0106 14:51:29.965397       8 log.go:172] (0xc002e86a50) Reply frame received for 3
I0106 14:51:29.965447       8 log.go:172] (0xc002e86a50) (0xc00186a780) Create stream
I0106 14:51:29.965462       8 log.go:172] (0xc002e86a50) (0xc00186a780) Stream added, broadcasting: 5
I0106 14:51:29.968420       8 log.go:172] (0xc002e86a50) Reply frame received for 5
I0106 14:51:30.150441       8 log.go:172] (0xc002e86a50) Data frame received for 3
I0106 14:51:30.150631       8 log.go:172] (0xc00186a6e0) (3) Data frame handling
I0106 14:51:30.150688       8 log.go:172] (0xc00186a6e0) (3) Data frame sent
I0106 14:51:30.292040       8 log.go:172] (0xc002e86a50) Data frame received for 1
I0106 14:51:30.292147       8 log.go:172] (0xc002e86a50) (0xc00186a6e0) Stream removed, broadcasting: 3
I0106 14:51:30.292348       8 log.go:172] (0xc002ac0780) (1) Data frame handling
I0106 14:51:30.292381       8 log.go:172] (0xc002ac0780) (1) Data frame sent
I0106 14:51:30.292462       8 log.go:172] (0xc002e86a50) (0xc00186a780) Stream removed, broadcasting: 5
I0106 14:51:30.292805       8 log.go:172] (0xc002e86a50) (0xc002ac0780) Stream removed, broadcasting: 1
I0106 14:51:30.293002       8 log.go:172] (0xc002e86a50) Go away received
I0106 14:51:30.293783       8 log.go:172] (0xc002e86a50) (0xc002ac0780) Stream removed, broadcasting: 1
I0106 14:51:30.293843       8 log.go:172] (0xc002e86a50) (0xc00186a6e0) Stream removed, broadcasting: 3
I0106 14:51:30.293852       8 log.go:172] (0xc002e86a50) (0xc00186a780) Stream removed, broadcasting: 5
Jan  6 14:51:30.293: INFO: Waiting for endpoints: map[]
Jan  6 14:51:30.306: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-9007 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 14:51:30.306: INFO: >>> kubeConfig: /root/.kube/config
I0106 14:51:30.383436       8 log.go:172] (0xc002e87600) (0xc002ac0d20) Create stream
I0106 14:51:30.383512       8 log.go:172] (0xc002e87600) (0xc002ac0d20) Stream added, broadcasting: 1
I0106 14:51:30.388630       8 log.go:172] (0xc002e87600) Reply frame received for 1
I0106 14:51:30.388685       8 log.go:172] (0xc002e87600) (0xc002fb0960) Create stream
I0106 14:51:30.388703       8 log.go:172] (0xc002e87600) (0xc002fb0960) Stream added, broadcasting: 3
I0106 14:51:30.390144       8 log.go:172] (0xc002e87600) Reply frame received for 3
I0106 14:51:30.390173       8 log.go:172] (0xc002e87600) (0xc002ac0dc0) Create stream
I0106 14:51:30.390183       8 log.go:172] (0xc002e87600) (0xc002ac0dc0) Stream added, broadcasting: 5
I0106 14:51:30.391906       8 log.go:172] (0xc002e87600) Reply frame received for 5
I0106 14:51:30.571420       8 log.go:172] (0xc002e87600) Data frame received for 3
I0106 14:51:30.571494       8 log.go:172] (0xc002fb0960) (3) Data frame handling
I0106 14:51:30.571526       8 log.go:172] (0xc002fb0960) (3) Data frame sent
I0106 14:51:30.802727       8 log.go:172] (0xc002e87600) Data frame received for 1
I0106 14:51:30.802864       8 log.go:172] (0xc002e87600) (0xc002fb0960) Stream removed, broadcasting: 3
I0106 14:51:30.802942       8 log.go:172] (0xc002ac0d20) (1) Data frame handling
I0106 14:51:30.802984       8 log.go:172] (0xc002e87600) (0xc002ac0dc0) Stream removed, broadcasting: 5
I0106 14:51:30.803013       8 log.go:172] (0xc002ac0d20) (1) Data frame sent
I0106 14:51:30.803035       8 log.go:172] (0xc002e87600) (0xc002ac0d20) Stream removed, broadcasting: 1
I0106 14:51:30.803116       8 log.go:172] (0xc002e87600) Go away received
I0106 14:51:30.803463       8 log.go:172] (0xc002e87600) (0xc002ac0d20) Stream removed, broadcasting: 1
I0106 14:51:30.803483       8 log.go:172] (0xc002e87600) (0xc002fb0960) Stream removed, broadcasting: 3
I0106 14:51:30.803492       8 log.go:172] (0xc002e87600) (0xc002ac0dc0) Stream removed, broadcasting: 5
Jan  6 14:51:30.803: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:51:30.803: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9007" for this suite.
Jan  6 14:51:54.847: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:51:54.990: INFO: namespace pod-network-test-9007 deletion completed in 24.176683575s

• [SLOW TEST:59.519 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
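The granular networking check above is driven by an HTTP /dial endpoint on the test webserver: the host-test pod asks one test pod to dial another and report back the hostname it reached. A sketch of the same probe using the pod IPs from the log (10.44.0.2 is the prober, 10.32.0.4 the target; both are cluster-assigned and illustrative here):

  # One-shot dial from the prober pod to the target pod, as logged.
  PROBE_IP=10.44.0.2
  TARGET_IP=10.32.0.4
  curl -g -q -s "http://${PROBE_IP}:8080/dial?request=hostName&protocol=http&host=${TARGET_IP}&port=8080&tries=1"

The spec passes once every expected endpoint has answered, which is why each probe above ends with "Waiting for endpoints: map[]" (an empty map of still-missing endpoints).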
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:51:54.990: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan  6 14:52:02.294: INFO: 6 pods remaining
Jan  6 14:52:02.294: INFO: 0 pods has nil DeletionTimestamp
Jan  6 14:52:02.295: INFO: 
STEP: Gathering metrics
W0106 14:52:03.059598       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan  6 14:52:03.059: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:52:03.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4093" for this suite.
Jan  6 14:52:15.189: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:52:15.300: INFO: namespace gc-4093 deletion completed in 12.238869414s

• [SLOW TEST:20.310 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
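The garbage-collector spec hinges on DeleteOptions: with propagationPolicy "Foreground", the API server keeps the replication controller (carrying a foregroundDeletion finalizer) until all of its pods are deleted, which matches the "6 pods remaining" wait above. A sketch against the raw API through kubectl proxy; the RC name "my-rc" is illustrative, as this log never prints it:

  # Foreground deletion: the owner outlives the deletion of its dependents.
  kubectl proxy --port=8001 &
  sleep 2   # give the proxy a moment to come up
  curl -X DELETE \
    -H 'Content-Type: application/json' \
    -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
    http://127.0.0.1:8001/api/v1/namespaces/gc-4093/replicationcontrollers/my-rc

Until the last pod is gone, the RC remains readable with a non-nil deletionTimestamp, which is the condition the "wait for the rc to be deleted" step polls for.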
SSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:52:15.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test override all
Jan  6 14:52:15.530: INFO: Waiting up to 5m0s for pod "client-containers-a68df077-13d0-4912-95d0-385ab837913f" in namespace "containers-5172" to be "success or failure"
Jan  6 14:52:15.621: INFO: Pod "client-containers-a68df077-13d0-4912-95d0-385ab837913f": Phase="Pending", Reason="", readiness=false. Elapsed: 91.176653ms
Jan  6 14:52:17.630: INFO: Pod "client-containers-a68df077-13d0-4912-95d0-385ab837913f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.10054226s
Jan  6 14:52:19.638: INFO: Pod "client-containers-a68df077-13d0-4912-95d0-385ab837913f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.108694s
Jan  6 14:52:21.647: INFO: Pod "client-containers-a68df077-13d0-4912-95d0-385ab837913f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.117495516s
Jan  6 14:52:23.659: INFO: Pod "client-containers-a68df077-13d0-4912-95d0-385ab837913f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.129053243s
STEP: Saw pod success
Jan  6 14:52:23.659: INFO: Pod "client-containers-a68df077-13d0-4912-95d0-385ab837913f" satisfied condition "success or failure"
Jan  6 14:52:23.662: INFO: Trying to get logs from node iruya-node pod client-containers-a68df077-13d0-4912-95d0-385ab837913f container test-container: 
STEP: delete the pod
Jan  6 14:52:23.745: INFO: Waiting for pod client-containers-a68df077-13d0-4912-95d0-385ab837913f to disappear
Jan  6 14:52:23.757: INFO: Pod client-containers-a68df077-13d0-4912-95d0-385ab837913f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:52:23.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-5172" for this suite.
Jan  6 14:52:29.789: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:52:29.968: INFO: namespace containers-5172 deletion completed in 6.204351706s

• [SLOW TEST:14.667 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
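The Docker Containers spec verifies that a pod spec can override both halves of an image's default invocation: "command" replaces the image ENTRYPOINT and "args" replaces its CMD. A minimal sketch; the image and argument values are assumptions, not the ones the suite ran:

  kubectl apply -f - <<EOF
  apiVersion: v1
  kind: Pod
  metadata:
    name: override-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox            # assumed image
      command: ["/bin/echo"]    # overrides the image ENTRYPOINT
      args: ["override", "all"] # overrides the image CMD
  EOF

After the pod succeeds, "kubectl logs override-demo" should print the overridden arguments, mirroring the get-logs check the test performs above before deleting the pod.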
SS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:52:29.968: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5333.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5333.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5333.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5333.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  6 14:52:42.171: INFO: File wheezy_udp@dns-test-service-3.dns-5333.svc.cluster.local from pod  dns-5333/dns-test-3969977d-5520-4edb-81b4-b30b2a2d8398 contains '' instead of 'foo.example.com.'
Jan  6 14:52:42.180: INFO: File jessie_udp@dns-test-service-3.dns-5333.svc.cluster.local from pod  dns-5333/dns-test-3969977d-5520-4edb-81b4-b30b2a2d8398 contains '' instead of 'foo.example.com.'
Jan  6 14:52:42.180: INFO: Lookups using dns-5333/dns-test-3969977d-5520-4edb-81b4-b30b2a2d8398 failed for: [wheezy_udp@dns-test-service-3.dns-5333.svc.cluster.local jessie_udp@dns-test-service-3.dns-5333.svc.cluster.local]

Jan  6 14:52:47.202: INFO: DNS probes using dns-test-3969977d-5520-4edb-81b4-b30b2a2d8398 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5333.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5333.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5333.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5333.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  6 14:53:01.479: INFO: File wheezy_udp@dns-test-service-3.dns-5333.svc.cluster.local from pod  dns-5333/dns-test-0e1cf64b-a147-4222-9ed6-44432200767b contains '' instead of 'bar.example.com.'
Jan  6 14:53:01.485: INFO: File jessie_udp@dns-test-service-3.dns-5333.svc.cluster.local from pod  dns-5333/dns-test-0e1cf64b-a147-4222-9ed6-44432200767b contains '' instead of 'bar.example.com.'
Jan  6 14:53:01.485: INFO: Lookups using dns-5333/dns-test-0e1cf64b-a147-4222-9ed6-44432200767b failed for: [wheezy_udp@dns-test-service-3.dns-5333.svc.cluster.local jessie_udp@dns-test-service-3.dns-5333.svc.cluster.local]

Jan  6 14:53:06.507: INFO: File wheezy_udp@dns-test-service-3.dns-5333.svc.cluster.local from pod  dns-5333/dns-test-0e1cf64b-a147-4222-9ed6-44432200767b contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  6 14:53:06.517: INFO: File jessie_udp@dns-test-service-3.dns-5333.svc.cluster.local from pod  dns-5333/dns-test-0e1cf64b-a147-4222-9ed6-44432200767b contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  6 14:53:06.517: INFO: Lookups using dns-5333/dns-test-0e1cf64b-a147-4222-9ed6-44432200767b failed for: [wheezy_udp@dns-test-service-3.dns-5333.svc.cluster.local jessie_udp@dns-test-service-3.dns-5333.svc.cluster.local]

Jan  6 14:53:11.502: INFO: File wheezy_udp@dns-test-service-3.dns-5333.svc.cluster.local from pod  dns-5333/dns-test-0e1cf64b-a147-4222-9ed6-44432200767b contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  6 14:53:11.514: INFO: File jessie_udp@dns-test-service-3.dns-5333.svc.cluster.local from pod  dns-5333/dns-test-0e1cf64b-a147-4222-9ed6-44432200767b contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan  6 14:53:11.514: INFO: Lookups using dns-5333/dns-test-0e1cf64b-a147-4222-9ed6-44432200767b failed for: [wheezy_udp@dns-test-service-3.dns-5333.svc.cluster.local jessie_udp@dns-test-service-3.dns-5333.svc.cluster.local]

Jan  6 14:53:16.867: INFO: DNS probes using dns-test-0e1cf64b-a147-4222-9ed6-44432200767b succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5333.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5333.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5333.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5333.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  6 14:53:31.304: INFO: File wheezy_udp@dns-test-service-3.dns-5333.svc.cluster.local from pod  dns-5333/dns-test-d484a5e9-fb39-4c65-8f14-752622385ed6 contains '' instead of '10.102.1.102'
Jan  6 14:53:31.320: INFO: File jessie_udp@dns-test-service-3.dns-5333.svc.cluster.local from pod  dns-5333/dns-test-d484a5e9-fb39-4c65-8f14-752622385ed6 contains '' instead of '10.102.1.102'
Jan  6 14:53:31.320: INFO: Lookups using dns-5333/dns-test-d484a5e9-fb39-4c65-8f14-752622385ed6 failed for: [wheezy_udp@dns-test-service-3.dns-5333.svc.cluster.local jessie_udp@dns-test-service-3.dns-5333.svc.cluster.local]

Jan  6 14:53:36.347: INFO: DNS probes using dns-test-d484a5e9-fb39-4c65-8f14-752622385ed6 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:53:36.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5333" for this suite.
Jan  6 14:53:44.583: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:53:44.681: INFO: namespace dns-5333 deletion completed in 8.154819933s

• [SLOW TEST:74.714 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
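The ExternalName phase above works because such a service publishes a CNAME rather than a ClusterIP, and the probe pods' dig loops read that CNAME back. A sketch that recreates the service and runs the logged lookup (names mirror the log; the dig must run inside a pod that uses the cluster DNS):

  # An ExternalName service maps a cluster DNS name to an external CNAME.
  kubectl create service externalname dns-test-service-3 \
    --namespace=dns-5333 --external-name=foo.example.com
  # From inside a cluster pod, the exact lookup the probes run:
  dig +short dns-test-service-3.dns-5333.svc.cluster.local CNAME

Changing the service to type=ClusterIP, as the test does last, makes the same name resolve to an A record (10.102.1.102 above) instead of a CNAME, which is why the final probe greps for an address rather than a domain.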
SS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:53:44.682: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9468.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-9468.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9468.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-9468.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-9468.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9468.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  6 14:53:56.804: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9468/dns-test-3b03aa09-911a-4be5-aa48-02731af0f537: the server could not find the requested resource (get pods dns-test-3b03aa09-911a-4be5-aa48-02731af0f537)
Jan  6 14:53:56.810: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9468/dns-test-3b03aa09-911a-4be5-aa48-02731af0f537: the server could not find the requested resource (get pods dns-test-3b03aa09-911a-4be5-aa48-02731af0f537)
Jan  6 14:53:56.817: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-9468.svc.cluster.local from pod dns-9468/dns-test-3b03aa09-911a-4be5-aa48-02731af0f537: the server could not find the requested resource (get pods dns-test-3b03aa09-911a-4be5-aa48-02731af0f537)
Jan  6 14:53:56.824: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-9468/dns-test-3b03aa09-911a-4be5-aa48-02731af0f537: the server could not find the requested resource (get pods dns-test-3b03aa09-911a-4be5-aa48-02731af0f537)
Jan  6 14:53:56.828: INFO: Unable to read jessie_udp@PodARecord from pod dns-9468/dns-test-3b03aa09-911a-4be5-aa48-02731af0f537: the server could not find the requested resource (get pods dns-test-3b03aa09-911a-4be5-aa48-02731af0f537)
Jan  6 14:53:56.835: INFO: Unable to read jessie_tcp@PodARecord from pod dns-9468/dns-test-3b03aa09-911a-4be5-aa48-02731af0f537: the server could not find the requested resource (get pods dns-test-3b03aa09-911a-4be5-aa48-02731af0f537)
Jan  6 14:53:56.835: INFO: Lookups using dns-9468/dns-test-3b03aa09-911a-4be5-aa48-02731af0f537 failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-9468.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  6 14:54:01.950: INFO: DNS probes using dns-9468/dns-test-3b03aa09-911a-4be5-aa48-02731af0f537 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:54:02.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9468" for this suite.
Jan  6 14:54:08.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:54:08.268: INFO: namespace dns-9468 deletion completed in 6.228157202s

• [SLOW TEST:23.586 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
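The /etc/hosts spec relies on getent, which consults /etc/hosts before falling back to DNS, so the checks pass only when the pod's name entries are in place. A condensed sketch of the probes logged above (names as in the log; run inside the probe pod):

  # Short hostname: satisfied by the pod's managed /etc/hosts entry
  # or the cluster DNS search path.
  test -n "$(getent hosts dns-querier-1)" && echo OK
  # Fully qualified service name: resolvable via cluster DNS as well.
  test -n "$(getent hosts dns-querier-1.dns-test-service.dns-9468.svc.cluster.local)" && echo OK

The transient "Unable to read ... PodARecord" failures above are the prober racing pod startup; the loop keeps writing results every second until both lookups report OK.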
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:54:08.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  6 14:54:08.413: INFO: Creating ReplicaSet my-hostname-basic-5fec705d-06ef-4057-8d71-f635471370f9
Jan  6 14:54:08.440: INFO: Pod name my-hostname-basic-5fec705d-06ef-4057-8d71-f635471370f9: Found 0 pods out of 1
Jan  6 14:54:13.452: INFO: Pod name my-hostname-basic-5fec705d-06ef-4057-8d71-f635471370f9: Found 1 pods out of 1
Jan  6 14:54:13.452: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-5fec705d-06ef-4057-8d71-f635471370f9" is running
Jan  6 14:54:15.467: INFO: Pod "my-hostname-basic-5fec705d-06ef-4057-8d71-f635471370f9-wwf8p" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-06 14:54:08 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-06 14:54:08 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5fec705d-06ef-4057-8d71-f635471370f9]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-06 14:54:08 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-5fec705d-06ef-4057-8d71-f635471370f9]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-06 14:54:08 +0000 UTC Reason: Message:}])
Jan  6 14:54:15.467: INFO: Trying to dial the pod
Jan  6 14:54:20.584: INFO: Controller my-hostname-basic-5fec705d-06ef-4057-8d71-f635471370f9: Got expected result from replica 1 [my-hostname-basic-5fec705d-06ef-4057-8d71-f635471370f9-wwf8p]: "my-hostname-basic-5fec705d-06ef-4057-8d71-f635471370f9-wwf8p", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:54:20.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-8876" for this suite.
Jan  6 14:54:26.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:54:26.773: INFO: namespace replicaset-8876 deletion completed in 6.17760182s

• [SLOW TEST:18.504 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
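
The flow above (create a ReplicaSet, wait for its pod, dial each replica and compare the returned hostname) reduces to a small amount of client-go. A sketch using the v1.15-era context-free client signatures; the label and serve-hostname image are assumptions, only the naming pattern comes from the log:

    package sketch

    import (
        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // createBasicReplicaSet mirrors the "Creating ReplicaSet" step above:
    // one replica of an image that serves its own hostname over HTTP.
    func createBasicReplicaSet(c *kubernetes.Clientset, ns string) error {
        one := int32(1)
        labels := map[string]string{"name": "my-hostname-basic"} // hypothetical label
        rs := &appsv1.ReplicaSet{
            ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
            Spec: appsv1.ReplicaSetSpec{
                Replicas: &one,
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "my-hostname-basic",
                            Image: "gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1", // assumed image
                        }},
                    },
                },
            },
        }
        _, err := c.AppsV1().ReplicaSets(ns).Create(rs) // pre-1.18 signature, no context argument
        return err
    }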
------------------------------
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:54:26.773: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan  6 14:54:38.951: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-b8f134ba-41ff-4ad0-9b59-fd460aa973be -c busybox-main-container --namespace=emptydir-5817 -- cat /usr/share/volumeshare/shareddata.txt'
Jan  6 14:54:41.734: INFO: stderr: "I0106 14:54:41.429481    4442 log.go:172] (0xc00013ae70) (0xc000730820) Create stream\nI0106 14:54:41.429614    4442 log.go:172] (0xc00013ae70) (0xc000730820) Stream added, broadcasting: 1\nI0106 14:54:41.440279    4442 log.go:172] (0xc00013ae70) Reply frame received for 1\nI0106 14:54:41.440493    4442 log.go:172] (0xc00013ae70) (0xc0005f2280) Create stream\nI0106 14:54:41.440511    4442 log.go:172] (0xc00013ae70) (0xc0005f2280) Stream added, broadcasting: 3\nI0106 14:54:41.443288    4442 log.go:172] (0xc00013ae70) Reply frame received for 3\nI0106 14:54:41.443362    4442 log.go:172] (0xc00013ae70) (0xc0008e6000) Create stream\nI0106 14:54:41.443383    4442 log.go:172] (0xc00013ae70) (0xc0008e6000) Stream added, broadcasting: 5\nI0106 14:54:41.445571    4442 log.go:172] (0xc00013ae70) Reply frame received for 5\nI0106 14:54:41.549939    4442 log.go:172] (0xc00013ae70) Data frame received for 3\nI0106 14:54:41.550296    4442 log.go:172] (0xc0005f2280) (3) Data frame handling\nI0106 14:54:41.550397    4442 log.go:172] (0xc0005f2280) (3) Data frame sent\nI0106 14:54:41.723558    4442 log.go:172] (0xc00013ae70) (0xc0005f2280) Stream removed, broadcasting: 3\nI0106 14:54:41.723801    4442 log.go:172] (0xc00013ae70) Data frame received for 1\nI0106 14:54:41.723915    4442 log.go:172] (0xc00013ae70) (0xc0008e6000) Stream removed, broadcasting: 5\nI0106 14:54:41.723984    4442 log.go:172] (0xc000730820) (1) Data frame handling\nI0106 14:54:41.724032    4442 log.go:172] (0xc000730820) (1) Data frame sent\nI0106 14:54:41.724054    4442 log.go:172] (0xc00013ae70) (0xc000730820) Stream removed, broadcasting: 1\nI0106 14:54:41.724081    4442 log.go:172] (0xc00013ae70) Go away received\nI0106 14:54:41.725434    4442 log.go:172] (0xc00013ae70) (0xc000730820) Stream removed, broadcasting: 1\nI0106 14:54:41.725458    4442 log.go:172] (0xc00013ae70) (0xc0005f2280) Stream removed, broadcasting: 3\nI0106 14:54:41.725468    4442 log.go:172] (0xc00013ae70) (0xc0008e6000) Stream removed, broadcasting: 5\n"
Jan  6 14:54:41.735: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:54:41.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5817" for this suite.
Jan  6 14:54:47.814: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:54:47.978: INFO: namespace emptydir-5817 deletion completed in 6.23623785s

• [SLOW TEST:21.205 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
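
The pod behind this test is worth seeing spelled out: a single emptyDir volume mounted into two containers of the same pod, with one container writing the file that the other cats back (the "Hello from the busy-box sub-container" stdout above). A sketch using the names visible in the log; the images and commands are assumptions:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // sharedVolumePod mounts one emptyDir into both containers so the
    // sub-container's write is visible to the main container.
    func sharedVolumePod(ns string) *corev1.Pod {
        mount := corev1.VolumeMount{Name: "volumeshare", MountPath: "/usr/share/volumeshare"}
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume", Namespace: ns},
            Spec: corev1.PodSpec{
                Volumes: []corev1.Volume{{
                    Name:         "volumeshare",
                    VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
                }},
                Containers: []corev1.Container{
                    {
                        Name:         "busybox-main-container",
                        Image:        "busybox",
                        Command:      []string{"sleep", "3600"},
                        VolumeMounts: []corev1.VolumeMount{mount},
                    },
                    {
                        Name:  "busybox-sub-container",
                        Image: "busybox",
                        Command: []string{"sh", "-c",
                            "echo 'Hello from the busy-box sub-container' > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
                        VolumeMounts: []corev1.VolumeMount{mount},
                    },
                },
            },
        }
    }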
------------------------------
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:54:47.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan  6 14:54:48.427: INFO: Waiting up to 5m0s for pod "pod-99d721e1-2007-465f-be0c-fd5e16dcd57d" in namespace "emptydir-4480" to be "success or failure"
Jan  6 14:54:48.528: INFO: Pod "pod-99d721e1-2007-465f-be0c-fd5e16dcd57d": Phase="Pending", Reason="", readiness=false. Elapsed: 100.744657ms
Jan  6 14:54:50.550: INFO: Pod "pod-99d721e1-2007-465f-be0c-fd5e16dcd57d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122774508s
Jan  6 14:54:52.586: INFO: Pod "pod-99d721e1-2007-465f-be0c-fd5e16dcd57d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159009486s
Jan  6 14:54:54.605: INFO: Pod "pod-99d721e1-2007-465f-be0c-fd5e16dcd57d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177862591s
Jan  6 14:54:56.619: INFO: Pod "pod-99d721e1-2007-465f-be0c-fd5e16dcd57d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.191457307s
Jan  6 14:54:58.633: INFO: Pod "pod-99d721e1-2007-465f-be0c-fd5e16dcd57d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.205395016s
Jan  6 14:55:00.647: INFO: Pod "pod-99d721e1-2007-465f-be0c-fd5e16dcd57d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.220107954s
STEP: Saw pod success
Jan  6 14:55:00.647: INFO: Pod "pod-99d721e1-2007-465f-be0c-fd5e16dcd57d" satisfied condition "success or failure"
Jan  6 14:55:00.654: INFO: Trying to get logs from node iruya-node pod pod-99d721e1-2007-465f-be0c-fd5e16dcd57d container test-container: 
STEP: delete the pod
Jan  6 14:55:01.355: INFO: Waiting for pod pod-99d721e1-2007-465f-be0c-fd5e16dcd57d to disappear
Jan  6 14:55:01.517: INFO: Pod pod-99d721e1-2007-465f-be0c-fd5e16dcd57d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:55:01.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4480" for this suite.
Jan  6 14:55:07.556: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:55:07.729: INFO: namespace emptydir-4480 deletion completed in 6.201891919s

• [SLOW TEST:19.751 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
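
In the test name, "tmpfs" maps to an emptyDir with medium Memory, "non-root" to a RunAsUser securityContext, and "0777" to the mode the suite's mount-test container verifies on the volume. A hedged sketch of the spec shape only; the UID, image, and command are illustrative:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // tmpfsPod runs a short-lived non-root container against a
    // memory-backed emptyDir and prints the mount's permissions.
    func tmpfsPod() *corev1.Pod {
        uid := int64(1001) // hypothetical non-root UID
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-tmpfs"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "test-volume",
                    VolumeSource: corev1.VolumeSource{
                        EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory}, // tmpfs
                    },
                }},
                Containers: []corev1.Container{{
                    Name:            "test-container",
                    Image:           "busybox",
                    Command:         []string{"sh", "-c", "ls -ld /test-volume"},
                    SecurityContext: &corev1.SecurityContext{RunAsUser: &uid},
                    VolumeMounts:    []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
                }},
            },
        }
    }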
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:55:07.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  6 14:55:07.835: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4f59fd5f-eafb-4337-9f9c-a14db807538c" in namespace "projected-2943" to be "success or failure"
Jan  6 14:55:07.895: INFO: Pod "downwardapi-volume-4f59fd5f-eafb-4337-9f9c-a14db807538c": Phase="Pending", Reason="", readiness=false. Elapsed: 59.633088ms
Jan  6 14:55:09.902: INFO: Pod "downwardapi-volume-4f59fd5f-eafb-4337-9f9c-a14db807538c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066217073s
Jan  6 14:55:11.912: INFO: Pod "downwardapi-volume-4f59fd5f-eafb-4337-9f9c-a14db807538c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076645421s
Jan  6 14:55:13.932: INFO: Pod "downwardapi-volume-4f59fd5f-eafb-4337-9f9c-a14db807538c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096419456s
Jan  6 14:55:15.944: INFO: Pod "downwardapi-volume-4f59fd5f-eafb-4337-9f9c-a14db807538c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.108748657s
STEP: Saw pod success
Jan  6 14:55:15.944: INFO: Pod "downwardapi-volume-4f59fd5f-eafb-4337-9f9c-a14db807538c" satisfied condition "success or failure"
Jan  6 14:55:15.948: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-4f59fd5f-eafb-4337-9f9c-a14db807538c container client-container: 
STEP: delete the pod
Jan  6 14:55:16.007: INFO: Waiting for pod downwardapi-volume-4f59fd5f-eafb-4337-9f9c-a14db807538c to disappear
Jan  6 14:55:16.108: INFO: Pod downwardapi-volume-4f59fd5f-eafb-4337-9f9c-a14db807538c no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:55:16.109: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2943" for this suite.
Jan  6 14:55:22.138: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:55:22.281: INFO: namespace projected-2943 deletion completed in 6.165598584s

• [SLOW TEST:14.552 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
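
"Set mode on item file" means the downward API item carries an explicit Mode, and the client-container then stats the file to confirm the kubelet applied it. A sketch of the projected variant used here; the 0400 mode and busybox stat command are assumptions:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // projectedDownwardAPIPod projects metadata.name into a file with an
    // explicit per-item mode, then prints the mode back out.
    func projectedDownwardAPIPod() *corev1.Pod {
        mode := int32(0400) // example item mode
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-test"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{{
                    Name: "podinfo",
                    VolumeSource: corev1.VolumeSource{
                        Projected: &corev1.ProjectedVolumeSource{
                            Sources: []corev1.VolumeProjection{{
                                DownwardAPI: &corev1.DownwardAPIProjection{
                                    Items: []corev1.DownwardAPIVolumeFile{{
                                        Path:     "podname",
                                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                                        Mode:     &mode,
                                    }},
                                },
                            }},
                        },
                    },
                }},
                Containers: []corev1.Container{{
                    Name:         "client-container",
                    Image:        "busybox",
                    Command:      []string{"sh", "-c", "stat -c '%a' /etc/podinfo/podname"},
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
    }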
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:55:22.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-upd-a12ea5db-baf6-4e7b-ad93-dcfdeb67fac3
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:55:32.569: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3012" for this suite.
Jan  6 14:55:54.616: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:55:54.729: INFO: namespace configmap-3012 deletion completed in 22.146265915s

• [SLOW TEST:32.447 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
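
The interesting part of this test is the ConfigMap itself: alongside the usual string Data it sets BinaryData, whose values are raw byte slices that survive the round trip into the mounted volume without UTF-8 mangling. A minimal sketch; the key names and bytes are illustrative:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // binaryConfigMap carries both text and non-UTF-8 binary payloads,
    // as exercised by the "binary data" test above.
    func binaryConfigMap(ns string) *corev1.ConfigMap {
        return &corev1.ConfigMap{
            ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd", Namespace: ns},
            Data:       map[string]string{"data-1": "value-1"},
            BinaryData: map[string][]byte{"dump.bin": {0xde, 0xad, 0xbe, 0xef, 0x00}},
        }
    }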
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:55:54.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name cm-test-opt-del-4fcf31bf-b1b0-470e-aadb-89beb94e7a6b
STEP: Creating configMap with name cm-test-opt-upd-d1e9845d-5b53-461a-abda-83aa26686c0b
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-4fcf31bf-b1b0-470e-aadb-89beb94e7a6b
STEP: Updating configmap cm-test-opt-upd-d1e9845d-5b53-461a-abda-83aa26686c0b
STEP: Creating configMap with name cm-test-opt-create-64715807-a1cc-4aeb-a874-6072a20478e2
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:57:16.681: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1162" for this suite.
Jan  6 14:57:38.736: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:57:38.845: INFO: namespace configmap-1162 deletion completed in 22.158974422s

• [SLOW TEST:104.116 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
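
The "optional" in this test name is the Optional flag on the volume source: with it set, the pod keeps running when a referenced ConfigMap is deleted (cm-test-opt-del-... above) and later picks up ones created after the fact (cm-test-opt-create-...). The relevant fragment, sketched:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // optionalConfigMapVolume references a ConfigMap that may not exist;
    // the kubelet mounts an empty directory instead of failing the pod.
    func optionalConfigMapVolume(name string) corev1.Volume {
        optional := true
        return corev1.Volume{
            Name: "cm-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: name},
                    Optional:             &optional,
                },
            },
        }
    }

Without the flag, a missing ConfigMap leaves the pod stuck in ContainerCreating, which is exactly what the long "waiting to observe update in volume" phase above is guarding against.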
------------------------------
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:57:38.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:39
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  6 14:57:39.018: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ed520f85-2e9c-4bd6-8044-753e4c20c058" in namespace "downward-api-7648" to be "success or failure"
Jan  6 14:57:39.025: INFO: Pod "downwardapi-volume-ed520f85-2e9c-4bd6-8044-753e4c20c058": Phase="Pending", Reason="", readiness=false. Elapsed: 7.013635ms
Jan  6 14:57:41.038: INFO: Pod "downwardapi-volume-ed520f85-2e9c-4bd6-8044-753e4c20c058": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020004345s
Jan  6 14:57:43.055: INFO: Pod "downwardapi-volume-ed520f85-2e9c-4bd6-8044-753e4c20c058": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036587409s
Jan  6 14:57:45.063: INFO: Pod "downwardapi-volume-ed520f85-2e9c-4bd6-8044-753e4c20c058": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044699188s
Jan  6 14:57:47.072: INFO: Pod "downwardapi-volume-ed520f85-2e9c-4bd6-8044-753e4c20c058": Phase="Running", Reason="", readiness=true. Elapsed: 8.054222451s
Jan  6 14:57:49.080: INFO: Pod "downwardapi-volume-ed520f85-2e9c-4bd6-8044-753e4c20c058": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062167947s
STEP: Saw pod success
Jan  6 14:57:49.080: INFO: Pod "downwardapi-volume-ed520f85-2e9c-4bd6-8044-753e4c20c058" satisfied condition "success or failure"
Jan  6 14:57:49.084: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-ed520f85-2e9c-4bd6-8044-753e4c20c058 container client-container: 
STEP: delete the pod
Jan  6 14:57:49.212: INFO: Waiting for pod downwardapi-volume-ed520f85-2e9c-4bd6-8044-753e4c20c058 to disappear
Jan  6 14:57:49.220: INFO: Pod downwardapi-volume-ed520f85-2e9c-4bd6-8044-753e4c20c058 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:57:49.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7648" for this suite.
Jan  6 14:57:55.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:57:55.436: INFO: namespace downward-api-7648 deletion completed in 6.179975855s

• [SLOW TEST:16.590 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:57:55.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:273
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating a replication controller
Jan  6 14:57:55.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4579'
Jan  6 14:57:55.894: INFO: stderr: ""
Jan  6 14:57:55.894: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan  6 14:57:55.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4579'
Jan  6 14:57:56.149: INFO: stderr: ""
Jan  6 14:57:56.150: INFO: stdout: "update-demo-nautilus-kcdnm update-demo-nautilus-pff7n "
Jan  6 14:57:56.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kcdnm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4579'
Jan  6 14:57:56.280: INFO: stderr: ""
Jan  6 14:57:56.280: INFO: stdout: ""
Jan  6 14:57:56.280: INFO: update-demo-nautilus-kcdnm is created but not running
Jan  6 14:58:01.281: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4579'
Jan  6 14:58:02.356: INFO: stderr: ""
Jan  6 14:58:02.356: INFO: stdout: "update-demo-nautilus-kcdnm update-demo-nautilus-pff7n "
Jan  6 14:58:02.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kcdnm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4579'
Jan  6 14:58:02.802: INFO: stderr: ""
Jan  6 14:58:02.802: INFO: stdout: ""
Jan  6 14:58:02.802: INFO: update-demo-nautilus-kcdnm is created but not running
Jan  6 14:58:07.802: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4579'
Jan  6 14:58:07.994: INFO: stderr: ""
Jan  6 14:58:07.994: INFO: stdout: "update-demo-nautilus-kcdnm update-demo-nautilus-pff7n "
Jan  6 14:58:07.994: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kcdnm -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4579'
Jan  6 14:58:08.155: INFO: stderr: ""
Jan  6 14:58:08.155: INFO: stdout: "true"
Jan  6 14:58:08.155: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kcdnm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4579'
Jan  6 14:58:08.306: INFO: stderr: ""
Jan  6 14:58:08.306: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  6 14:58:08.306: INFO: validating pod update-demo-nautilus-kcdnm
Jan  6 14:58:08.330: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  6 14:58:08.330: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  6 14:58:08.330: INFO: update-demo-nautilus-kcdnm is verified up and running
Jan  6 14:58:08.330: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pff7n -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4579'
Jan  6 14:58:08.470: INFO: stderr: ""
Jan  6 14:58:08.470: INFO: stdout: "true"
Jan  6 14:58:08.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pff7n -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4579'
Jan  6 14:58:08.603: INFO: stderr: ""
Jan  6 14:58:08.603: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan  6 14:58:08.603: INFO: validating pod update-demo-nautilus-pff7n
Jan  6 14:58:08.624: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan  6 14:58:08.624: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan  6 14:58:08.624: INFO: update-demo-nautilus-pff7n is verified up and running
STEP: using delete to clean up resources
Jan  6 14:58:08.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4579'
Jan  6 14:58:08.735: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan  6 14:58:08.735: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan  6 14:58:08.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4579'
Jan  6 14:58:08.873: INFO: stderr: "No resources found.\n"
Jan  6 14:58:08.873: INFO: stdout: ""
Jan  6 14:58:08.873: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4579 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan  6 14:58:09.049: INFO: stderr: ""
Jan  6 14:58:09.049: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:58:09.049: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4579" for this suite.
Jan  6 14:58:31.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:58:31.215: INFO: namespace kubectl-4579 deletion completed in 22.151504404s

• [SLOW TEST:35.779 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
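
The repeated get pods -l name=update-demo calls above are a poll loop: list by label every five seconds until every replica reports Running, then verify each pod's served data. The same check expressed in client-go rather than kubectl go-templates, sketched with the v1.15-era signatures:

    package sketch

    import (
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // waitForUpdateDemoPods polls the name=update-demo pods until all of
    // them are Running, mirroring the 5s retry cadence in the log.
    func waitForUpdateDemoPods(c *kubernetes.Clientset, ns string) error {
        return wait.PollImmediate(5*time.Second, 5*time.Minute, func() (bool, error) {
            pods, err := c.CoreV1().Pods(ns).List(metav1.ListOptions{LabelSelector: "name=update-demo"})
            if err != nil {
                return false, err
            }
            for _, p := range pods.Items {
                if p.Status.Phase != corev1.PodRunning {
                    return false, nil // keep polling
                }
            }
            return len(pods.Items) > 0, nil
        })
    }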
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:58:31.215: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:167
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating server pod server in namespace prestop-1233
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-1233
STEP: Deleting pre-stop pod
Jan  6 14:58:54.550: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:58:54.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-1233" for this suite.
Jan  6 14:59:38.602: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:59:38.724: INFO: namespace prestop-1233 deletion completed in 44.143114067s

• [SLOW TEST:67.508 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
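
What the tester pod carries is a preStop lifecycle hook: deleting the pod fires the hook before the container is killed, the hook reports to the server pod, and the server's state dump above ("prestop": 1) proves it ran. A sketch of the hook wiring; the wget target is hypothetical, and the handler type is corev1.Handler in the v1.15-era API (renamed LifecycleHandler much later):

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // preStopPod registers an exec hook that runs when the pod is deleted,
    // before SIGTERM reaches the main process.
    func preStopPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "tester"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:    "tester",
                    Image:   "busybox",
                    Command: []string{"sleep", "600"},
                    Lifecycle: &corev1.Lifecycle{
                        PreStop: &corev1.Handler{
                            Exec: &corev1.ExecAction{
                                // hypothetical endpoint; the real tester POSTs to the server pod
                                Command: []string{"wget", "-qO-", "http://server:8080/prestop"},
                            },
                        },
                    },
                }},
            },
        }
    }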
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:59:38.725: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan  6 14:59:47.467: INFO: Successfully updated pod "pod-update-activedeadlineseconds-264bf71b-0a85-4352-94a0-41c7359231af"
Jan  6 14:59:47.467: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-264bf71b-0a85-4352-94a0-41c7359231af" in namespace "pods-8704" to be "terminated due to deadline exceeded"
Jan  6 14:59:47.496: INFO: Pod "pod-update-activedeadlineseconds-264bf71b-0a85-4352-94a0-41c7359231af": Phase="Running", Reason="", readiness=true. Elapsed: 29.308902ms
Jan  6 14:59:49.511: INFO: Pod "pod-update-activedeadlineseconds-264bf71b-0a85-4352-94a0-41c7359231af": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.044070929s
Jan  6 14:59:49.511: INFO: Pod "pod-update-activedeadlineseconds-264bf71b-0a85-4352-94a0-41c7359231af" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 14:59:49.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8704" for this suite.
Jan  6 14:59:55.553: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 14:59:55.741: INFO: namespace pods-8704 deletion completed in 6.215489385s

• [SLOW TEST:17.016 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
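
The "updating the pod" step tightens spec.activeDeadlineSeconds on an already-running pod; once the deadline elapses, the kubelet fails the pod with reason DeadlineExceeded, which is exactly the Phase="Running" to Phase="Failed" transition logged above. Sketched with the v1.15-era client signatures:

    package sketch

    import (
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // shortenDeadline fetches a running pod and gives it a few seconds
    // to live; the kubelet then terminates it with DeadlineExceeded.
    func shortenDeadline(c *kubernetes.Clientset, ns, name string) error {
        pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        deadline := int64(5) // seconds, counted from pod start
        pod.Spec.ActiveDeadlineSeconds = &deadline
        _, err = c.CoreV1().Pods(ns).Update(pod)
        return err
    }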
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 14:59:55.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan  6 14:59:56.684: INFO: Pod name wrapped-volume-race-77ba5ffb-697a-4211-9a37-eba8d50e2dee: Found 1 pod out of 5
Jan  6 15:00:01.701: INFO: Pod name wrapped-volume-race-77ba5ffb-697a-4211-9a37-eba8d50e2dee: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-77ba5ffb-697a-4211-9a37-eba8d50e2dee in namespace emptydir-wrapper-6295, will wait for the garbage collector to delete the pods
Jan  6 15:00:27.835: INFO: Deleting ReplicationController wrapped-volume-race-77ba5ffb-697a-4211-9a37-eba8d50e2dee took: 20.731595ms
Jan  6 15:00:28.135: INFO: Terminating ReplicationController wrapped-volume-race-77ba5ffb-697a-4211-9a37-eba8d50e2dee pods took: 300.529695ms
STEP: Creating RC which spawns configmap-volume pods
Jan  6 15:01:17.111: INFO: Pod name wrapped-volume-race-2f6aaa83-3539-46bf-bcb8-829c08fab6aa: Found 0 pods out of 5
Jan  6 15:01:22.201: INFO: Pod name wrapped-volume-race-2f6aaa83-3539-46bf-bcb8-829c08fab6aa: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-2f6aaa83-3539-46bf-bcb8-829c08fab6aa in namespace emptydir-wrapper-6295, will wait for the garbage collector to delete the pods
Jan  6 15:01:54.494: INFO: Deleting ReplicationController wrapped-volume-race-2f6aaa83-3539-46bf-bcb8-829c08fab6aa took: 16.034196ms
Jan  6 15:01:54.795: INFO: Terminating ReplicationController wrapped-volume-race-2f6aaa83-3539-46bf-bcb8-829c08fab6aa pods took: 300.618529ms
STEP: Creating RC which spawns configmap-volume pods
Jan  6 15:02:47.897: INFO: Pod name wrapped-volume-race-caf6a3bc-ecda-491e-b2ae-51691f7fbcb8: Found 0 pods out of 5
Jan  6 15:02:52.909: INFO: Pod name wrapped-volume-race-caf6a3bc-ecda-491e-b2ae-51691f7fbcb8: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-caf6a3bc-ecda-491e-b2ae-51691f7fbcb8 in namespace emptydir-wrapper-6295, will wait for the garbage collector to delete the pods
Jan  6 15:03:21.010: INFO: Deleting ReplicationController wrapped-volume-race-caf6a3bc-ecda-491e-b2ae-51691f7fbcb8 took: 13.0112ms
Jan  6 15:03:21.410: INFO: Terminating ReplicationController wrapped-volume-race-caf6a3bc-ecda-491e-b2ae-51691f7fbcb8 pods took: 400.564335ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:04:17.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-6295" for this suite.
Jan  6 15:04:27.522: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:04:27.671: INFO: namespace emptydir-wrapper-6295 deletion completed in 10.212657526s

• [SLOW TEST:271.930 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:04:27.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan  6 15:04:39.972: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:04:40.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8263" for this suite.
Jan  6 15:04:46.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:04:46.152: INFO: namespace container-runtime-8263 deletion completed in 6.136571943s

• [SLOW TEST:18.480 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:129
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
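
The two conditions in the test name translate to one container field each: TerminationMessagePath pointed somewhere non-default, and RunAsUser set to a non-root UID. The container writes its message to that path before exiting, and the kubelet copies it into the terminated state the test reads back (the Expected: &{DONE} line above). A sketch; the path, UID, and image are illustrative:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // terminationMessagePod writes "DONE" to a custom termination-message
    // path as a non-root user, then exits.
    func terminationMessagePod() *corev1.Pod {
        uid := int64(1001) // non-root, per the test name
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "termination-message-container"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:                   "termination-message-container",
                    Image:                  "busybox",
                    Command:                []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
                    TerminationMessagePath: "/dev/termination-custom-log", // non-default path
                    SecurityContext:        &corev1.SecurityContext{RunAsUser: &uid},
                }},
            },
        }
    }

Once the pod completes, the message surfaces as pod.Status.ContainerStatuses[0].State.Terminated.Message.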
------------------------------
SSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:04:46.152: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating the pod
Jan  6 15:04:46.292: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:04:59.306: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-6660" for this suite.
Jan  6 15:05:05.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:05:05.528: INFO: namespace init-container-6660 deletion completed in 6.18790383s

• [SLOW TEST:19.376 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
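
The single "PodSpec: initContainers in spec.initContainers" line hides the whole mechanism: init containers run sequentially to completion before the app container starts, and with RestartPolicy Never a failed init container is terminal for the pod. The pod shape, sketched; the init1/init2/run1 names follow the upstream test's convention, images and commands are assumptions:

    package sketch

    import (
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // initContainerPod runs two init containers in order, then the app
    // container; none of them restart on completion.
    func initContainerPod() *corev1.Pod {
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-init"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                InitContainers: []corev1.Container{
                    {Name: "init1", Image: "busybox", Command: []string{"true"}},
                    {Name: "init2", Image: "busybox", Command: []string{"true"}},
                },
                Containers: []corev1.Container{
                    {Name: "run1", Image: "busybox", Command: []string{"true"}},
                },
            },
        }
    }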
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:05:05.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[BeforeEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1557
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: running the image docker.io/library/nginx:1.14-alpine
Jan  6 15:05:05.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-nginx-deployment --image=docker.io/library/nginx:1.14-alpine --generator=deployment/apps.v1 --namespace=kubectl-7878'
Jan  6 15:05:07.968: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan  6 15:05:07.968: INFO: stdout: "deployment.apps/e2e-test-nginx-deployment created\n"
STEP: verifying the deployment e2e-test-nginx-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-nginx-deployment was created
[AfterEach] [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1562
Jan  6 15:05:10.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-nginx-deployment --namespace=kubectl-7878'
Jan  6 15:05:10.296: INFO: stderr: ""
Jan  6 15:05:10.296: INFO: stdout: "deployment.extensions \"e2e-test-nginx-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:05:10.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7878" for this suite.
Jan  6 15:05:16.340: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:05:16.502: INFO: namespace kubectl-7878 deletion completed in 6.199334571s

• [SLOW TEST:10.973 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:05:16.502: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:05:24.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2894" for this suite.
Jan  6 15:06:08.711: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:06:08.855: INFO: namespace kubelet-test-2894 deletion completed in 44.170340226s

• [SLOW TEST:52.354 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
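
The knob under test here is ReadOnlyRootFilesystem in the container securityContext: with it set, any write to the container's root filesystem fails while mounted volumes remain writable. The fragment, sketched; image and command are illustrative:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // readOnlyBusybox is a container whose root filesystem is mounted
    // read-only; the write to /file is expected to fail.
    func readOnlyBusybox() corev1.Container {
        readOnly := true
        return corev1.Container{
            Name:            "busybox-readonly",
            Image:           "busybox",
            Command:         []string{"sh", "-c", "echo test > /file; sleep 240"},
            SecurityContext: &corev1.SecurityContext{ReadOnlyRootFilesystem: &readOnly},
        }
    }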
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:06:08.857: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:06:14.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-555" for this suite.
Jan  6 15:06:20.536: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:06:20.645: INFO: namespace watch-555 deletion completed in 6.196986608s

• [SLOW TEST:11.788 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
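
The guarantee being tested is that every watch opened from the same resource version sees the same events in the same order. Opening such a watch from client-go is short; a sketch (ConfigMaps stand in here, the test's actual watched resource may differ):

    package sketch

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // printWatchOrder opens a watch at a given resourceVersion and prints
    // resource versions as they arrive; two watchers started from the
    // same version should print identical sequences.
    func printWatchOrder(c *kubernetes.Clientset, ns, fromRV string) error {
        w, err := c.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{ResourceVersion: fromRV})
        if err != nil {
            return err
        }
        defer w.Stop()
        for ev := range w.ResultChan() {
            if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
                fmt.Println(ev.Type, cm.ResourceVersion)
            }
        }
        return nil
    }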
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:06:20.645: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating projection with secret that has name projected-secret-test-map-37c32833-5624-46ff-9104-adee74fb6f26
STEP: Creating a pod to test consume secrets
Jan  6 15:06:20.775: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-7e57947d-0835-47f2-9375-7a352477906e" in namespace "projected-2476" to be "success or failure"
Jan  6 15:06:20.792: INFO: Pod "pod-projected-secrets-7e57947d-0835-47f2-9375-7a352477906e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.637453ms
Jan  6 15:06:22.798: INFO: Pod "pod-projected-secrets-7e57947d-0835-47f2-9375-7a352477906e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022968469s
Jan  6 15:06:24.807: INFO: Pod "pod-projected-secrets-7e57947d-0835-47f2-9375-7a352477906e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031820057s
Jan  6 15:06:26.814: INFO: Pod "pod-projected-secrets-7e57947d-0835-47f2-9375-7a352477906e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03806625s
Jan  6 15:06:28.838: INFO: Pod "pod-projected-secrets-7e57947d-0835-47f2-9375-7a352477906e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.062413347s
STEP: Saw pod success
Jan  6 15:06:28.838: INFO: Pod "pod-projected-secrets-7e57947d-0835-47f2-9375-7a352477906e" satisfied condition "success or failure"
Jan  6 15:06:28.848: INFO: Trying to get logs from node iruya-node pod pod-projected-secrets-7e57947d-0835-47f2-9375-7a352477906e container projected-secret-volume-test: 
STEP: delete the pod
Jan  6 15:06:28.955: INFO: Waiting for pod pod-projected-secrets-7e57947d-0835-47f2-9375-7a352477906e to disappear
Jan  6 15:06:28.960: INFO: Pod pod-projected-secrets-7e57947d-0835-47f2-9375-7a352477906e no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:06:28.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2476" for this suite.
Jan  6 15:06:35.049: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:06:35.172: INFO: namespace projected-2476 deletion completed in 6.206166435s

• [SLOW TEST:14.527 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
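
"With mappings" means the secret's keys are remapped to new file names via Items on the projection, rather than landing under their own key names. A sketch of just the volume; the key and path are illustrative, the volume name follows the log:

    package sketch

    import corev1 "k8s.io/api/core/v1"

    // projectedSecretVolume remaps the secret key "data-1" to the file
    // new-path-data-1 inside the mounted projection.
    func projectedSecretVolume(secretName string) corev1.Volume {
        return corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
                            Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
                        },
                    }},
                },
            },
        }
    }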
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:06:35.173: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-map-869b5629-6444-4b12-881e-df380b5342df
STEP: Creating a pod to test consume configMaps
Jan  6 15:06:35.303: INFO: Waiting up to 5m0s for pod "pod-configmaps-d0d45933-b10c-4de1-b930-163dfe0b93a6" in namespace "configmap-6737" to be "success or failure"
Jan  6 15:06:35.315: INFO: Pod "pod-configmaps-d0d45933-b10c-4de1-b930-163dfe0b93a6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.409925ms
Jan  6 15:06:37.326: INFO: Pod "pod-configmaps-d0d45933-b10c-4de1-b930-163dfe0b93a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022910897s
Jan  6 15:06:39.334: INFO: Pod "pod-configmaps-d0d45933-b10c-4de1-b930-163dfe0b93a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030347162s
Jan  6 15:06:41.342: INFO: Pod "pod-configmaps-d0d45933-b10c-4de1-b930-163dfe0b93a6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038949172s
Jan  6 15:06:43.352: INFO: Pod "pod-configmaps-d0d45933-b10c-4de1-b930-163dfe0b93a6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048167487s
Jan  6 15:06:45.360: INFO: Pod "pod-configmaps-d0d45933-b10c-4de1-b930-163dfe0b93a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056220804s
STEP: Saw pod success
Jan  6 15:06:45.360: INFO: Pod "pod-configmaps-d0d45933-b10c-4de1-b930-163dfe0b93a6" satisfied condition "success or failure"
Jan  6 15:06:45.363: INFO: Trying to get logs from node iruya-node pod pod-configmaps-d0d45933-b10c-4de1-b930-163dfe0b93a6 container configmap-volume-test: 
STEP: delete the pod
Jan  6 15:06:45.426: INFO: Waiting for pod pod-configmaps-d0d45933-b10c-4de1-b930-163dfe0b93a6 to disappear
Jan  6 15:06:45.436: INFO: Pod pod-configmaps-d0d45933-b10c-4de1-b930-163dfe0b93a6 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:06:45.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6737" for this suite.
Jan  6 15:06:53.466: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:06:53.616: INFO: namespace configmap-6737 deletion completed in 8.174190705s

• [SLOW TEST:18.444 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:06:53.618: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-configmap-lgm4
STEP: Creating a pod to test atomic-volume-subpath
Jan  6 15:06:53.749: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-lgm4" in namespace "subpath-6930" to be "success or failure"
Jan  6 15:06:53.791: INFO: Pod "pod-subpath-test-configmap-lgm4": Phase="Pending", Reason="", readiness=false. Elapsed: 42.427572ms
Jan  6 15:06:55.805: INFO: Pod "pod-subpath-test-configmap-lgm4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056102728s
Jan  6 15:06:57.813: INFO: Pod "pod-subpath-test-configmap-lgm4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063635359s
Jan  6 15:06:59.826: INFO: Pod "pod-subpath-test-configmap-lgm4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076560743s
Jan  6 15:07:01.836: INFO: Pod "pod-subpath-test-configmap-lgm4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087478407s
Jan  6 15:07:03.850: INFO: Pod "pod-subpath-test-configmap-lgm4": Phase="Running", Reason="", readiness=true. Elapsed: 10.100589271s
Jan  6 15:07:05.865: INFO: Pod "pod-subpath-test-configmap-lgm4": Phase="Running", Reason="", readiness=true. Elapsed: 12.115665867s
Jan  6 15:07:07.879: INFO: Pod "pod-subpath-test-configmap-lgm4": Phase="Running", Reason="", readiness=true. Elapsed: 14.129532251s
Jan  6 15:07:09.893: INFO: Pod "pod-subpath-test-configmap-lgm4": Phase="Running", Reason="", readiness=true. Elapsed: 16.144463752s
Jan  6 15:07:11.913: INFO: Pod "pod-subpath-test-configmap-lgm4": Phase="Running", Reason="", readiness=true. Elapsed: 18.164312443s
Jan  6 15:07:13.929: INFO: Pod "pod-subpath-test-configmap-lgm4": Phase="Running", Reason="", readiness=true. Elapsed: 20.179696591s
Jan  6 15:07:15.940: INFO: Pod "pod-subpath-test-configmap-lgm4": Phase="Running", Reason="", readiness=true. Elapsed: 22.190907274s
Jan  6 15:07:17.947: INFO: Pod "pod-subpath-test-configmap-lgm4": Phase="Running", Reason="", readiness=true. Elapsed: 24.198137632s
Jan  6 15:07:19.958: INFO: Pod "pod-subpath-test-configmap-lgm4": Phase="Running", Reason="", readiness=true. Elapsed: 26.208602736s
Jan  6 15:07:21.968: INFO: Pod "pod-subpath-test-configmap-lgm4": Phase="Running", Reason="", readiness=true. Elapsed: 28.219020478s
Jan  6 15:07:23.983: INFO: Pod "pod-subpath-test-configmap-lgm4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.23387817s
STEP: Saw pod success
Jan  6 15:07:23.983: INFO: Pod "pod-subpath-test-configmap-lgm4" satisfied condition "success or failure"
Jan  6 15:07:23.987: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-configmap-lgm4 container test-container-subpath-configmap-lgm4: 
STEP: delete the pod
Jan  6 15:07:24.095: INFO: Waiting for pod pod-subpath-test-configmap-lgm4 to disappear
Jan  6 15:07:24.109: INFO: Pod pod-subpath-test-configmap-lgm4 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-lgm4
Jan  6 15:07:24.109: INFO: Deleting pod "pod-subpath-test-configmap-lgm4" in namespace "subpath-6930"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:07:24.112: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-6930" for this suite.
Jan  6 15:07:30.202: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:07:30.331: INFO: namespace subpath-6930 deletion completed in 6.21492466s

• [SLOW TEST:36.713 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
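
The "Atomic writer volumes" subpath spec boils down to mounting a single file out of a ConfigMap volume via subPath; configMap (like secret, downwardAPI, and projected) volumes are "atomic writer" volumes because the kubelet updates their contents atomically via symlink swaps, and the pod above stays Running for roughly 20 s while the container repeatedly verifies the subPath-mounted file. A minimal sketch, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-test-configmap-example
spec:
  restartPolicy: Never
  volumes:
  - name: config
    configMap:
      name: my-configmap              # assumed to contain a key named test-file
  containers:
  - name: test-container-subpath
    image: busybox
    command: ["sh", "-c", "cat /test-volume/test-file"]
    volumeMounts:
    - name: config
      mountPath: /test-volume/test-file
      subPath: test-file              # mounts only this key, not the whole volume
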
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:07:30.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod pod-subpath-test-projected-f65m
STEP: Creating a pod to test atomic-volume-subpath
Jan  6 15:07:30.465: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-f65m" in namespace "subpath-4602" to be "success or failure"
Jan  6 15:07:30.511: INFO: Pod "pod-subpath-test-projected-f65m": Phase="Pending", Reason="", readiness=false. Elapsed: 46.072448ms
Jan  6 15:07:32.527: INFO: Pod "pod-subpath-test-projected-f65m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.062013682s
Jan  6 15:07:34.569: INFO: Pod "pod-subpath-test-projected-f65m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.103777343s
Jan  6 15:07:36.579: INFO: Pod "pod-subpath-test-projected-f65m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113729407s
Jan  6 15:07:38.593: INFO: Pod "pod-subpath-test-projected-f65m": Phase="Running", Reason="", readiness=true. Elapsed: 8.128236457s
Jan  6 15:07:40.608: INFO: Pod "pod-subpath-test-projected-f65m": Phase="Running", Reason="", readiness=true. Elapsed: 10.143297119s
Jan  6 15:07:42.623: INFO: Pod "pod-subpath-test-projected-f65m": Phase="Running", Reason="", readiness=true. Elapsed: 12.158298651s
Jan  6 15:07:44.671: INFO: Pod "pod-subpath-test-projected-f65m": Phase="Running", Reason="", readiness=true. Elapsed: 14.206346938s
Jan  6 15:07:46.680: INFO: Pod "pod-subpath-test-projected-f65m": Phase="Running", Reason="", readiness=true. Elapsed: 16.215477802s
Jan  6 15:07:48.698: INFO: Pod "pod-subpath-test-projected-f65m": Phase="Running", Reason="", readiness=true. Elapsed: 18.232657127s
Jan  6 15:07:50.707: INFO: Pod "pod-subpath-test-projected-f65m": Phase="Running", Reason="", readiness=true. Elapsed: 20.242545092s
Jan  6 15:07:52.717: INFO: Pod "pod-subpath-test-projected-f65m": Phase="Running", Reason="", readiness=true. Elapsed: 22.252441755s
Jan  6 15:07:54.724: INFO: Pod "pod-subpath-test-projected-f65m": Phase="Running", Reason="", readiness=true. Elapsed: 24.25903135s
Jan  6 15:07:56.738: INFO: Pod "pod-subpath-test-projected-f65m": Phase="Running", Reason="", readiness=true. Elapsed: 26.273084954s
Jan  6 15:07:58.895: INFO: Pod "pod-subpath-test-projected-f65m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.430333494s
STEP: Saw pod success
Jan  6 15:07:58.895: INFO: Pod "pod-subpath-test-projected-f65m" satisfied condition "success or failure"
Jan  6 15:07:58.905: INFO: Trying to get logs from node iruya-node pod pod-subpath-test-projected-f65m container test-container-subpath-projected-f65m: 
STEP: delete the pod
Jan  6 15:07:59.083: INFO: Waiting for pod pod-subpath-test-projected-f65m to disappear
Jan  6 15:07:59.096: INFO: Pod pod-subpath-test-projected-f65m no longer exists
STEP: Deleting pod pod-subpath-test-projected-f65m
Jan  6 15:07:59.096: INFO: Deleting pod "pod-subpath-test-projected-f65m" in namespace "subpath-4602"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:07:59.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4602" for this suite.
Jan  6 15:08:05.139: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:08:05.334: INFO: namespace subpath-4602 deletion completed in 6.228820502s

• [SLOW TEST:35.002 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:22
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
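
The projected variant is the same subPath pattern applied to a projected volume, which merges several sources into one mount. An illustrative volume fragment (source names assumed):

  volumes:
  - name: podinfo
    projected:
      sources:
      - configMap:
          name: my-configmap
      - secret:
          name: my-secret
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
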
SSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:08:05.334: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating configMap with name configmap-test-volume-928487d7-0304-45a1-bd5c-25dad86757b5
STEP: Creating a pod to test consume configMaps
Jan  6 15:08:05.442: INFO: Waiting up to 5m0s for pod "pod-configmaps-e1da3e96-1c2f-459c-8c4f-58b5706c00ad" in namespace "configmap-5515" to be "success or failure"
Jan  6 15:08:05.513: INFO: Pod "pod-configmaps-e1da3e96-1c2f-459c-8c4f-58b5706c00ad": Phase="Pending", Reason="", readiness=false. Elapsed: 70.744756ms
Jan  6 15:08:07.529: INFO: Pod "pod-configmaps-e1da3e96-1c2f-459c-8c4f-58b5706c00ad": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0867213s
Jan  6 15:08:09.538: INFO: Pod "pod-configmaps-e1da3e96-1c2f-459c-8c4f-58b5706c00ad": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09533021s
Jan  6 15:08:11.547: INFO: Pod "pod-configmaps-e1da3e96-1c2f-459c-8c4f-58b5706c00ad": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104186362s
Jan  6 15:08:13.560: INFO: Pod "pod-configmaps-e1da3e96-1c2f-459c-8c4f-58b5706c00ad": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.117728103s
STEP: Saw pod success
Jan  6 15:08:13.560: INFO: Pod "pod-configmaps-e1da3e96-1c2f-459c-8c4f-58b5706c00ad" satisfied condition "success or failure"
Jan  6 15:08:13.570: INFO: Trying to get logs from node iruya-node pod pod-configmaps-e1da3e96-1c2f-459c-8c4f-58b5706c00ad container configmap-volume-test: 
STEP: delete the pod
Jan  6 15:08:13.623: INFO: Waiting for pod pod-configmaps-e1da3e96-1c2f-459c-8c4f-58b5706c00ad to disappear
Jan  6 15:08:13.675: INFO: Pod pod-configmaps-e1da3e96-1c2f-459c-8c4f-58b5706c00ad no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:08:13.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5515" for this suite.
Jan  6 15:08:19.785: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:08:19.922: INFO: namespace configmap-5515 deletion completed in 6.23678881s

• [SLOW TEST:14.588 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:32
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:08:19.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:39
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test downward API volume plugin
Jan  6 15:08:20.072: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8080b7ce-0737-4ffb-93e1-86fc37e47cc6" in namespace "projected-5614" to be "success or failure"
Jan  6 15:08:20.101: INFO: Pod "downwardapi-volume-8080b7ce-0737-4ffb-93e1-86fc37e47cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 28.920758ms
Jan  6 15:08:22.112: INFO: Pod "downwardapi-volume-8080b7ce-0737-4ffb-93e1-86fc37e47cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039865921s
Jan  6 15:08:24.128: INFO: Pod "downwardapi-volume-8080b7ce-0737-4ffb-93e1-86fc37e47cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055825087s
Jan  6 15:08:26.140: INFO: Pod "downwardapi-volume-8080b7ce-0737-4ffb-93e1-86fc37e47cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.067833821s
Jan  6 15:08:28.152: INFO: Pod "downwardapi-volume-8080b7ce-0737-4ffb-93e1-86fc37e47cc6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079455464s
Jan  6 15:08:30.161: INFO: Pod "downwardapi-volume-8080b7ce-0737-4ffb-93e1-86fc37e47cc6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.088759217s
STEP: Saw pod success
Jan  6 15:08:30.161: INFO: Pod "downwardapi-volume-8080b7ce-0737-4ffb-93e1-86fc37e47cc6" satisfied condition "success or failure"
Jan  6 15:08:30.166: INFO: Trying to get logs from node iruya-node pod downwardapi-volume-8080b7ce-0737-4ffb-93e1-86fc37e47cc6 container client-container: 
STEP: delete the pod
Jan  6 15:08:30.240: INFO: Waiting for pod downwardapi-volume-8080b7ce-0737-4ffb-93e1-86fc37e47cc6 to disappear
Jan  6 15:08:30.277: INFO: Pod downwardapi-volume-8080b7ce-0737-4ffb-93e1-86fc37e47cc6 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:08:30.277: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5614" for this suite.
Jan  6 15:08:36.378: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:08:36.587: INFO: namespace projected-5614 deletion completed in 6.302840706s

• [SLOW TEST:16.664 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:33
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
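
The downward-API spec exposes the container's own CPU limit as a file through a projected volume, broadly like the sketch below (names illustrative). With no explicit divisor, resource values are rounded up to whole cores before being written to the file:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: "500m"
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.cpu
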
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:08:36.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7182.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7182.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan  6 15:08:46.770: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-7182/dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c: the server could not find the requested resource (get pods dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c)
Jan  6 15:08:46.776: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-7182/dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c: the server could not find the requested resource (get pods dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c)
Jan  6 15:08:46.782: INFO: Unable to read wheezy_udp@PodARecord from pod dns-7182/dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c: the server could not find the requested resource (get pods dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c)
Jan  6 15:08:46.786: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-7182/dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c: the server could not find the requested resource (get pods dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c)
Jan  6 15:08:46.791: INFO: Unable to read jessie_udp@kubernetes.default.svc.cluster.local from pod dns-7182/dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c: the server could not find the requested resource (get pods dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c)
Jan  6 15:08:46.795: INFO: Unable to read jessie_tcp@kubernetes.default.svc.cluster.local from pod dns-7182/dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c: the server could not find the requested resource (get pods dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c)
Jan  6 15:08:46.800: INFO: Unable to read jessie_udp@PodARecord from pod dns-7182/dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c: the server could not find the requested resource (get pods dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c)
Jan  6 15:08:46.804: INFO: Unable to read jessie_tcp@PodARecord from pod dns-7182/dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c: the server could not find the requested resource (get pods dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c)
Jan  6 15:08:46.804: INFO: Lookups using dns-7182/dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@kubernetes.default.svc.cluster.local jessie_tcp@kubernetes.default.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Jan  6 15:08:51.892: INFO: DNS probes using dns-7182/dns-test-ea5c8051-9045-4c9c-94d1-fab039a7fe0c succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:08:51.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7182" for this suite.
Jan  6 15:08:58.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:08:58.157: INFO: namespace dns-7182 deletion completed in 6.148897217s

• [SLOW TEST:21.569 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
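
In the generated probe scripts above, the awk pipeline builds the pod A-record name (e.g. 1-2-3-4.dns-7182.pod.cluster.local) from the pod's dotted IP. For an interactive equivalent of the same service lookup, something like the following works; busybox:1.28 is a common choice because its nslookup handles cluster search paths sanely:

kubectl run dnsutils --image=busybox:1.28 --restart=Never --rm -it -- \
  nslookup kubernetes.default.svc.cluster.local
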
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:08:58.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan  6 15:08:58.290: INFO: Waiting up to 5m0s for pod "pod-6544796b-f8a6-45cf-ba82-3ec261b4b94e" in namespace "emptydir-5059" to be "success or failure"
Jan  6 15:08:58.316: INFO: Pod "pod-6544796b-f8a6-45cf-ba82-3ec261b4b94e": Phase="Pending", Reason="", readiness=false. Elapsed: 25.828211ms
Jan  6 15:09:00.326: INFO: Pod "pod-6544796b-f8a6-45cf-ba82-3ec261b4b94e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035811674s
Jan  6 15:09:02.335: INFO: Pod "pod-6544796b-f8a6-45cf-ba82-3ec261b4b94e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044652232s
Jan  6 15:09:04.344: INFO: Pod "pod-6544796b-f8a6-45cf-ba82-3ec261b4b94e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053554764s
Jan  6 15:09:06.352: INFO: Pod "pod-6544796b-f8a6-45cf-ba82-3ec261b4b94e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061242392s
STEP: Saw pod success
Jan  6 15:09:06.352: INFO: Pod "pod-6544796b-f8a6-45cf-ba82-3ec261b4b94e" satisfied condition "success or failure"
Jan  6 15:09:06.358: INFO: Trying to get logs from node iruya-node pod pod-6544796b-f8a6-45cf-ba82-3ec261b4b94e container test-container: 
STEP: delete the pod
Jan  6 15:09:06.527: INFO: Waiting for pod pod-6544796b-f8a6-45cf-ba82-3ec261b4b94e to disappear
Jan  6 15:09:06.539: INFO: Pod pod-6544796b-f8a6-45cf-ba82-3ec261b4b94e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:09:06.539: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5059" for this suite.
Jan  6 15:09:12.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:09:12.705: INFO: namespace emptydir-5059 deletion completed in 6.143392894s

• [SLOW TEST:14.548 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
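
The (non-root,0777,default) triple in the spec title is (user, file mode, medium). A minimal sketch of the kind of pod involved, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example
spec:
  securityContext:
    runAsUser: 1000                   # non-root
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "echo data > /test-volume/f && stat -c '%a' /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # default medium = node disk; medium: Memory uses tmpfs
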
SSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:09:12.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test substitution in container's args
Jan  6 15:09:12.816: INFO: Waiting up to 5m0s for pod "var-expansion-7714c712-9ea0-4074-b926-3064100ef5f5" in namespace "var-expansion-7213" to be "success or failure"
Jan  6 15:09:12.819: INFO: Pod "var-expansion-7714c712-9ea0-4074-b926-3064100ef5f5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.296471ms
Jan  6 15:09:14.846: INFO: Pod "var-expansion-7714c712-9ea0-4074-b926-3064100ef5f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030358124s
Jan  6 15:09:16.914: INFO: Pod "var-expansion-7714c712-9ea0-4074-b926-3064100ef5f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.097786999s
Jan  6 15:09:18.924: INFO: Pod "var-expansion-7714c712-9ea0-4074-b926-3064100ef5f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.107918323s
Jan  6 15:09:20.932: INFO: Pod "var-expansion-7714c712-9ea0-4074-b926-3064100ef5f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116290608s
STEP: Saw pod success
Jan  6 15:09:20.932: INFO: Pod "var-expansion-7714c712-9ea0-4074-b926-3064100ef5f5" satisfied condition "success or failure"
Jan  6 15:09:20.937: INFO: Trying to get logs from node iruya-node pod var-expansion-7714c712-9ea0-4074-b926-3064100ef5f5 container dapi-container: 
STEP: delete the pod
Jan  6 15:09:20.982: INFO: Waiting for pod var-expansion-7714c712-9ea0-4074-b926-3064100ef5f5 to disappear
Jan  6 15:09:21.082: INFO: Pod var-expansion-7714c712-9ea0-4074-b926-3064100ef5f5 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:09:21.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7213" for this suite.
Jan  6 15:09:27.155: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:09:27.232: INFO: namespace var-expansion-7213 deletion completed in 6.143459469s

• [SLOW TEST:14.527 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
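
"Substituting values in a container's args" means $(VAR) references in command/args are expanded from the container's env by the kubelet, with no shell involved. Sketch (names illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-example
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    env:
    - name: MESSAGE
      value: "hello world"
    command: ["/bin/echo"]
    args: ["$(MESSAGE)"]              # expanded by the kubelet, not by a shell
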
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:09:27.233: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:103
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  6 15:09:27.374: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan  6 15:09:27.400: INFO: Number of nodes with available pods: 0
Jan  6 15:09:27.400: INFO: Node iruya-node is running more than one daemon pod
Jan  6 15:09:28.699: INFO: Number of nodes with available pods: 0
Jan  6 15:09:28.699: INFO: Node iruya-node is running more than one daemon pod
Jan  6 15:09:29.419: INFO: Number of nodes with available pods: 0
Jan  6 15:09:29.419: INFO: Node iruya-node is running more than one daemon pod
Jan  6 15:09:30.440: INFO: Number of nodes with available pods: 0
Jan  6 15:09:30.440: INFO: Node iruya-node is running more than one daemon pod
Jan  6 15:09:31.417: INFO: Number of nodes with available pods: 0
Jan  6 15:09:31.417: INFO: Node iruya-node is running more than one daemon pod
Jan  6 15:09:32.988: INFO: Number of nodes with available pods: 0
Jan  6 15:09:32.988: INFO: Node iruya-node is running more than one daemon pod
Jan  6 15:09:33.535: INFO: Number of nodes with available pods: 0
Jan  6 15:09:33.535: INFO: Node iruya-node is running more than one daemon pod
Jan  6 15:09:34.417: INFO: Number of nodes with available pods: 0
Jan  6 15:09:34.417: INFO: Node iruya-node is running more than one daemon pod
Jan  6 15:09:35.409: INFO: Number of nodes with available pods: 0
Jan  6 15:09:35.409: INFO: Node iruya-node is running more than one daemon pod
Jan  6 15:09:36.414: INFO: Number of nodes with available pods: 1
Jan  6 15:09:36.414: INFO: Node iruya-node is running more than one daemon pod
Jan  6 15:09:37.415: INFO: Number of nodes with available pods: 2
Jan  6 15:09:37.415: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan  6 15:09:37.495: INFO: Wrong image for pod: daemon-set-5jm94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:37.495: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:38.548: INFO: Wrong image for pod: daemon-set-5jm94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:38.549: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:39.538: INFO: Wrong image for pod: daemon-set-5jm94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:39.538: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:40.543: INFO: Wrong image for pod: daemon-set-5jm94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:40.543: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:41.538: INFO: Wrong image for pod: daemon-set-5jm94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:41.538: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:42.546: INFO: Wrong image for pod: daemon-set-5jm94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:42.546: INFO: Pod daemon-set-5jm94 is not available
Jan  6 15:09:42.546: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:43.538: INFO: Wrong image for pod: daemon-set-5jm94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:43.538: INFO: Pod daemon-set-5jm94 is not available
Jan  6 15:09:43.538: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:44.543: INFO: Wrong image for pod: daemon-set-5jm94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:44.543: INFO: Pod daemon-set-5jm94 is not available
Jan  6 15:09:44.543: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:45.538: INFO: Wrong image for pod: daemon-set-5jm94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:45.539: INFO: Pod daemon-set-5jm94 is not available
Jan  6 15:09:45.539: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:46.595: INFO: Wrong image for pod: daemon-set-5jm94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:46.595: INFO: Pod daemon-set-5jm94 is not available
Jan  6 15:09:46.595: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:47.541: INFO: Wrong image for pod: daemon-set-5jm94. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:47.541: INFO: Pod daemon-set-5jm94 is not available
Jan  6 15:09:47.542: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:48.548: INFO: Pod daemon-set-27ddj is not available
Jan  6 15:09:48.548: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:49.539: INFO: Pod daemon-set-27ddj is not available
Jan  6 15:09:49.539: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:50.543: INFO: Pod daemon-set-27ddj is not available
Jan  6 15:09:50.543: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:51.547: INFO: Pod daemon-set-27ddj is not available
Jan  6 15:09:51.548: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:52.750: INFO: Pod daemon-set-27ddj is not available
Jan  6 15:09:52.750: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:53.739: INFO: Pod daemon-set-27ddj is not available
Jan  6 15:09:53.739: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:54.548: INFO: Pod daemon-set-27ddj is not available
Jan  6 15:09:54.548: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:55.702: INFO: Pod daemon-set-27ddj is not available
Jan  6 15:09:55.702: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:56.540: INFO: Pod daemon-set-27ddj is not available
Jan  6 15:09:56.540: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:57.540: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:58.549: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:09:59.543: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:10:00.547: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:10:01.580: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:10:01.580: INFO: Pod daemon-set-hhhrh is not available
Jan  6 15:10:02.543: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:10:02.543: INFO: Pod daemon-set-hhhrh is not available
Jan  6 15:10:03.537: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:10:03.537: INFO: Pod daemon-set-hhhrh is not available
Jan  6 15:10:04.557: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:10:04.557: INFO: Pod daemon-set-hhhrh is not available
Jan  6 15:10:05.537: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:10:05.537: INFO: Pod daemon-set-hhhrh is not available
Jan  6 15:10:06.545: INFO: Wrong image for pod: daemon-set-hhhrh. Expected: gcr.io/kubernetes-e2e-test-images/redis:1.0, got: docker.io/library/nginx:1.14-alpine.
Jan  6 15:10:06.545: INFO: Pod daemon-set-hhhrh is not available
Jan  6 15:10:07.539: INFO: Pod daemon-set-fz29j is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan  6 15:10:07.576: INFO: Number of nodes with available pods: 1
Jan  6 15:10:07.576: INFO: Node iruya-node is running more than one daemon pod
Jan  6 15:10:08.604: INFO: Number of nodes with available pods: 1
Jan  6 15:10:08.604: INFO: Node iruya-node is running more than one daemon pod
Jan  6 15:10:09.606: INFO: Number of nodes with available pods: 1
Jan  6 15:10:09.606: INFO: Node iruya-node is running more than one daemon pod
Jan  6 15:10:10.599: INFO: Number of nodes with available pods: 1
Jan  6 15:10:10.599: INFO: Node iruya-node is running more than one daemon pod
Jan  6 15:10:11.594: INFO: Number of nodes with available pods: 1
Jan  6 15:10:11.594: INFO: Node iruya-node is running more than one daemon pod
Jan  6 15:10:12.632: INFO: Number of nodes with available pods: 1
Jan  6 15:10:12.632: INFO: Node iruya-node is running more than one daemon pod
Jan  6 15:10:13.609: INFO: Number of nodes with available pods: 2
Jan  6 15:10:13.609: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:69
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-8547, will wait for the garbage collector to delete the pods
Jan  6 15:10:13.723: INFO: Deleting DaemonSet.extensions daemon-set took: 23.341464ms
Jan  6 15:10:14.024: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.732167ms
Jan  6 15:10:27.931: INFO: Number of nodes with available pods: 0
Jan  6 15:10:27.931: INFO: Number of running nodes: 0, number of available pods: 0
Jan  6 15:10:27.936: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-8547/daemonsets","resourceVersion":"19541055"},"items":null}

Jan  6 15:10:27.941: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-8547/pods","resourceVersion":"19541055"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:10:27.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-8547" for this suite.
Jan  6 15:10:34.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:10:34.119: INFO: namespace daemonsets-8547 deletion completed in 6.147467144s

• [SLOW TEST:66.886 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
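
The DaemonSet under test starts on nginx:1.14-alpine and is then rolled to the redis test image; with a RollingUpdate strategy the controller replaces pods node by node, which is exactly the "Wrong image … / … is not available" churn traced above. Roughly (selector label illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      daemonset-name: daemon-set
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1               # one node's pod is replaced at a time
  template:
    metadata:
      labels:
        daemonset-name: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/nginx:1.14-alpine

followed by an image update such as:

kubectl -n daemonsets-8547 set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/redis:1.0
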
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:10:34.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: creating Redis RC
Jan  6 15:10:34.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3143'
Jan  6 15:10:34.608: INFO: stderr: ""
Jan  6 15:10:34.608: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Jan  6 15:10:35.843: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 15:10:35.844: INFO: Found 0 / 1
Jan  6 15:10:36.620: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 15:10:36.620: INFO: Found 0 / 1
Jan  6 15:10:37.616: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 15:10:37.616: INFO: Found 0 / 1
Jan  6 15:10:38.621: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 15:10:38.621: INFO: Found 0 / 1
Jan  6 15:10:39.617: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 15:10:39.617: INFO: Found 0 / 1
Jan  6 15:10:40.623: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 15:10:40.623: INFO: Found 0 / 1
Jan  6 15:10:41.617: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 15:10:41.617: INFO: Found 0 / 1
Jan  6 15:10:42.628: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 15:10:42.628: INFO: Found 0 / 1
Jan  6 15:10:43.620: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 15:10:43.621: INFO: Found 1 / 1
Jan  6 15:10:43.621: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan  6 15:10:43.626: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 15:10:43.626: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan  6 15:10:43.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-h5jh8 --namespace=kubectl-3143 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan  6 15:10:43.809: INFO: stderr: ""
Jan  6 15:10:43.809: INFO: stdout: "pod/redis-master-h5jh8 patched\n"
STEP: checking annotations
Jan  6 15:10:43.818: INFO: Selector matched 1 pods for map[app:redis]
Jan  6 15:10:43.818: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:10:43.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3143" for this suite.
Jan  6 15:11:05.846: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:11:06.042: INFO: namespace kubectl-3143 deletion completed in 22.217931056s

• [SLOW TEST:31.923 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
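
To run the patch shown above by hand, the JSON needs shell quoting; it is a strategic-merge patch, so the annotation is merged into the existing metadata rather than replacing it:

kubectl --namespace=kubectl-3143 patch pod redis-master-h5jh8 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'
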
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client [k8s.io] Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:11:06.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:221
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  6 15:11:06.140: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan  6 15:11:06.392: INFO: stderr: ""
Jan  6 15:11:06.392: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.7\", GitCommit:\"6c143d35bb11d74970e7bc0b6c45b6bfdffc0bd4\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T16:55:20Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"15\", GitVersion:\"v1.15.1\", GitCommit:\"4485c6f18cee9a5d3c3b4e523bd27972b1b53892\", GitTreeState:\"clean\", BuildDate:\"2019-07-18T09:09:21Z\", GoVersion:\"go1.12.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:11:06.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5443" for this suite.
Jan  6 15:11:12.431: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:11:12.541: INFO: namespace kubectl-5443 deletion completed in 6.134341351s

• [SLOW TEST:6.499 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  [k8s.io] Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:11:12.542: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:63
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan  6 15:11:26.791: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  6 15:11:26.864: INFO: Pod pod-with-poststart-http-hook still exists
Jan  6 15:11:28.864: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  6 15:11:28.876: INFO: Pod pod-with-poststart-http-hook still exists
Jan  6 15:11:30.865: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  6 15:11:30.879: INFO: Pod pod-with-poststart-http-hook still exists
Jan  6 15:11:32.865: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  6 15:11:32.885: INFO: Pod pod-with-poststart-http-hook still exists
Jan  6 15:11:34.864: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  6 15:11:34.877: INFO: Pod pod-with-poststart-http-hook still exists
Jan  6 15:11:36.864: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan  6 15:11:36.881: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:11:36.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1880" for this suite.
Jan  6 15:11:58.923: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:11:59.091: INFO: namespace container-lifecycle-hook-1880 deletion completed in 22.199028537s

• [SLOW TEST:46.549 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
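
The poststart pod is a plain container plus a lifecycle.postStart.httpGet handler; the hook fires right after the container is created, and a failing hook gets the container killed. A sketch, where the host/port are assumed to point at the separate handler pod the test created first:

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: docker.io/library/nginx:1.14-alpine
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart
          port: 8080
          host: 10.44.0.1             # illustrative: the handler pod's IP
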
SSSSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:11:59.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:164
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
Jan  6 15:11:59.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:12:09.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1296" for this suite.
Jan  6 15:12:51.348: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:12:51.477: INFO: namespace pods-1296 deletion completed in 42.161485937s

• [SLOW TEST:52.386 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
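
Retrieving logs "over websockets" hits the same log subresource as kubectl logs, just with a WebSocket upgrade. A hedged sketch via kubectl proxy (pod and namespace names illustrative; wscat stands in for any WebSocket client):

kubectl proxy --port=8001 &
wscat -c 'ws://127.0.0.1:8001/api/v1/namespaces/default/pods/my-pod/log?follow=true'
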
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:12:51.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan  6 15:12:51.577: INFO: Waiting up to 5m0s for pod "pod-65a0ee90-2842-4237-997d-b762e31f5c55" in namespace "emptydir-8970" to be "success or failure"
Jan  6 15:12:51.588: INFO: Pod "pod-65a0ee90-2842-4237-997d-b762e31f5c55": Phase="Pending", Reason="", readiness=false. Elapsed: 10.783488ms
Jan  6 15:12:53.602: INFO: Pod "pod-65a0ee90-2842-4237-997d-b762e31f5c55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024820354s
Jan  6 15:12:55.611: INFO: Pod "pod-65a0ee90-2842-4237-997d-b762e31f5c55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034066398s
Jan  6 15:12:57.656: INFO: Pod "pod-65a0ee90-2842-4237-997d-b762e31f5c55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.079000923s
Jan  6 15:12:59.665: INFO: Pod "pod-65a0ee90-2842-4237-997d-b762e31f5c55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.088005841s
STEP: Saw pod success
Jan  6 15:12:59.665: INFO: Pod "pod-65a0ee90-2842-4237-997d-b762e31f5c55" satisfied condition "success or failure"
Jan  6 15:12:59.668: INFO: Trying to get logs from node iruya-node pod pod-65a0ee90-2842-4237-997d-b762e31f5c55 container test-container: 
STEP: delete the pod
Jan  6 15:12:59.722: INFO: Waiting for pod pod-65a0ee90-2842-4237-997d-b762e31f5c55 to disappear
Jan  6 15:12:59.732: INFO: Pod pod-65a0ee90-2842-4237-997d-b762e31f5c55 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:12:59.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8970" for this suite.
Jan  6 15:13:05.775: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:13:05.945: INFO: namespace emptydir-8970 deletion completed in 6.206848796s

• [SLOW TEST:14.467 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:13:05.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating pod liveness-324f5429-d0c0-4239-9aea-5b3c0db512f6 in namespace container-probe-7268
Jan  6 15:13:14.072: INFO: Started pod liveness-324f5429-d0c0-4239-9aea-5b3c0db512f6 in namespace container-probe-7268
STEP: checking the pod's current state and verifying that restartCount is present
Jan  6 15:13:14.076: INFO: Initial restart count of pod liveness-324f5429-d0c0-4239-9aea-5b3c0db512f6 is 0
Jan  6 15:13:36.183: INFO: Restart count of pod container-probe-7268/liveness-324f5429-d0c0-4239-9aea-5b3c0db512f6 is now 1 (22.107265597s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:13:36.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7268" for this suite.
Jan  6 15:13:42.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:13:42.371: INFO: namespace container-probe-7268 deletion completed in 6.136126414s

• [SLOW TEST:36.426 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:692
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
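For reference, the container-probe test works by running a server whose /healthz endpoint goes unhealthy after a while; the kubelet's HTTP liveness probe then kills and restarts the container, which is the restartCount 0 -> 1 transition logged above (about 22s elapsed). A hedged sketch of an equivalent pod with illustrative timings, using the liveness image the Kubernetes docs use for this pattern (the test's exact image and values may differ):

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/intstr"
    )

    func main() {
        // The kubelet probes GET /healthz on port 8080; once the endpoint
        // starts failing, the container is killed and restarted, bumping
        // the pod's restartCount.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "liveness-http-demo"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{
                    Name:  "liveness",
                    Image: "k8s.gcr.io/liveness", // serves /healthz OK briefly, then fails
                    Args:  []string{"/server"},
                    LivenessProbe: &corev1.Probe{
                        // Probe embeds Handler in the v1.15-era API used here
                        // (renamed ProbeHandler in newer k8s.io/api releases).
                        Handler: corev1.Handler{
                            HTTPGet: &corev1.HTTPGetAction{
                                Path: "/healthz",
                                Port: intstr.FromInt(8080),
                            },
                        },
                        InitialDelaySeconds: 3, // illustrative timings
                        PeriodSeconds:       3,
                        FailureThreshold:    1,
                    },
                }},
            },
        }
        b, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(b))
    }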
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:13:42.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Creating secret with name secret-test-c5156034-7462-4f33-98cd-9777ef072f0c
STEP: Creating a pod to test consume secrets
Jan  6 15:13:42.502: INFO: Waiting up to 5m0s for pod "pod-secrets-263c73d8-cf38-4185-9c3e-ca968e338c44" in namespace "secrets-5864" to be "success or failure"
Jan  6 15:13:42.511: INFO: Pod "pod-secrets-263c73d8-cf38-4185-9c3e-ca968e338c44": Phase="Pending", Reason="", readiness=false. Elapsed: 8.782337ms
Jan  6 15:13:44.523: INFO: Pod "pod-secrets-263c73d8-cf38-4185-9c3e-ca968e338c44": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020067158s
Jan  6 15:13:46.541: INFO: Pod "pod-secrets-263c73d8-cf38-4185-9c3e-ca968e338c44": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038321648s
Jan  6 15:13:48.557: INFO: Pod "pod-secrets-263c73d8-cf38-4185-9c3e-ca968e338c44": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054760141s
Jan  6 15:13:50.574: INFO: Pod "pod-secrets-263c73d8-cf38-4185-9c3e-ca968e338c44": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0710351s
STEP: Saw pod success
Jan  6 15:13:50.574: INFO: Pod "pod-secrets-263c73d8-cf38-4185-9c3e-ca968e338c44" satisfied condition "success or failure"
Jan  6 15:13:50.578: INFO: Trying to get logs from node iruya-node pod pod-secrets-263c73d8-cf38-4185-9c3e-ca968e338c44 container secret-env-test: 
STEP: delete the pod
Jan  6 15:13:50.774: INFO: Waiting for pod pod-secrets-263c73d8-cf38-4185-9c3e-ca968e338c44 to disappear
Jan  6 15:13:50.780: INFO: Pod pod-secrets-263c73d8-cf38-4185-9c3e-ca968e338c44 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:13:50.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5864" for this suite.
Jan  6 15:13:56.797: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:13:56.945: INFO: namespace secrets-5864 deletion completed in 6.161354747s

• [SLOW TEST:14.573 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
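For reference, the Secrets test pairs a Secret with a one-shot pod whose container imports one key as an environment variable via a secretKeyRef; the harness then reads the container's logs (the "Trying to get logs ... container secret-env-test" line above) to confirm the value arrived. A sketch under assumed key, value, and image, again printed as JSON rather than applied to a cluster:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Hypothetical secret and consuming pod mirroring the test's flow:
        // the container echoes the injected variable and exits, and the
        // harness checks the logs for the expected value.
        secret := &corev1.Secret{
            ObjectMeta: metav1.ObjectMeta{Name: "secret-test"},
            StringData: map[string]string{"data-1": "value-1"}, // illustrative key/value
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-demo"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "secret-env-test",
                    Image:   "busybox:1.31", // assumption
                    Command: []string{"/bin/sh", "-c", "echo $SECRET_DATA"},
                    Env: []corev1.EnvVar{{
                        Name: "SECRET_DATA",
                        ValueFrom: &corev1.EnvVarSource{
                            SecretKeyRef: &corev1.SecretKeySelector{
                                LocalObjectReference: corev1.LocalObjectReference{Name: secret.Name},
                                Key:                  "data-1",
                            },
                        },
                    }},
                }},
            },
        }
        for _, obj := range []interface{}{secret, pod} {
            b, _ := json.MarshalIndent(obj, "", "  ")
            fmt.Println(string(b))
        }
    }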
SSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:150
STEP: Creating a kubernetes client
Jan  6 15:13:56.945: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
STEP: Performing setup for networking test in namespace pod-network-test-8582
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan  6 15:13:57.098: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan  6 15:14:31.317: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8582 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 15:14:31.317: INFO: >>> kubeConfig: /root/.kube/config
I0106 15:14:31.397197       8 log.go:172] (0xc000dda420) (0xc002136c80) Create stream
I0106 15:14:31.397296       8 log.go:172] (0xc000dda420) (0xc002136c80) Stream added, broadcasting: 1
I0106 15:14:31.405942       8 log.go:172] (0xc000dda420) Reply frame received for 1
I0106 15:14:31.406021       8 log.go:172] (0xc000dda420) (0xc002136e60) Create stream
I0106 15:14:31.406037       8 log.go:172] (0xc000dda420) (0xc002136e60) Stream added, broadcasting: 3
I0106 15:14:31.408423       8 log.go:172] (0xc000dda420) Reply frame received for 3
I0106 15:14:31.408459       8 log.go:172] (0xc000dda420) (0xc00186af00) Create stream
I0106 15:14:31.408471       8 log.go:172] (0xc000dda420) (0xc00186af00) Stream added, broadcasting: 5
I0106 15:14:31.410676       8 log.go:172] (0xc000dda420) Reply frame received for 5
I0106 15:14:31.679843       8 log.go:172] (0xc000dda420) Data frame received for 3
I0106 15:14:31.679882       8 log.go:172] (0xc002136e60) (3) Data frame handling
I0106 15:14:31.679903       8 log.go:172] (0xc002136e60) (3) Data frame sent
I0106 15:14:31.815148       8 log.go:172] (0xc000dda420) (0xc002136e60) Stream removed, broadcasting: 3
I0106 15:14:31.815413       8 log.go:172] (0xc000dda420) Data frame received for 1
I0106 15:14:31.815436       8 log.go:172] (0xc002136c80) (1) Data frame handling
I0106 15:14:31.815458       8 log.go:172] (0xc002136c80) (1) Data frame sent
I0106 15:14:31.815560       8 log.go:172] (0xc000dda420) (0xc002136c80) Stream removed, broadcasting: 1
I0106 15:14:31.815731       8 log.go:172] (0xc000dda420) (0xc00186af00) Stream removed, broadcasting: 5
I0106 15:14:31.815908       8 log.go:172] (0xc000dda420) Go away received
I0106 15:14:31.816048       8 log.go:172] (0xc000dda420) (0xc002136c80) Stream removed, broadcasting: 1
I0106 15:14:31.816085       8 log.go:172] (0xc000dda420) (0xc002136e60) Stream removed, broadcasting: 3
I0106 15:14:31.816101       8 log.go:172] (0xc000dda420) (0xc00186af00) Stream removed, broadcasting: 5
Jan  6 15:14:31.816: INFO: Found all expected endpoints: [netserver-0]
Jan  6 15:14:31.828: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-8582 PodName:host-test-container-pod ContainerName:hostexec Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan  6 15:14:31.828: INFO: >>> kubeConfig: /root/.kube/config
I0106 15:14:31.893890       8 log.go:172] (0xc000ddb1e0) (0xc002137540) Create stream
I0106 15:14:31.893962       8 log.go:172] (0xc000ddb1e0) (0xc002137540) Stream added, broadcasting: 1
I0106 15:14:31.902294       8 log.go:172] (0xc000ddb1e0) Reply frame received for 1
I0106 15:14:31.902335       8 log.go:172] (0xc000ddb1e0) (0xc000524000) Create stream
I0106 15:14:31.902345       8 log.go:172] (0xc000ddb1e0) (0xc000524000) Stream added, broadcasting: 3
I0106 15:14:31.904280       8 log.go:172] (0xc000ddb1e0) Reply frame received for 3
I0106 15:14:31.904305       8 log.go:172] (0xc000ddb1e0) (0xc002a21220) Create stream
I0106 15:14:31.904316       8 log.go:172] (0xc000ddb1e0) (0xc002a21220) Stream added, broadcasting: 5
I0106 15:14:31.905854       8 log.go:172] (0xc000ddb1e0) Reply frame received for 5
I0106 15:14:32.037962       8 log.go:172] (0xc000ddb1e0) Data frame received for 3
I0106 15:14:32.038018       8 log.go:172] (0xc000524000) (3) Data frame handling
I0106 15:14:32.038034       8 log.go:172] (0xc000524000) (3) Data frame sent
I0106 15:14:32.233353       8 log.go:172] (0xc000ddb1e0) (0xc000524000) Stream removed, broadcasting: 3
I0106 15:14:32.233798       8 log.go:172] (0xc000ddb1e0) Data frame received for 1
I0106 15:14:32.233857       8 log.go:172] (0xc002137540) (1) Data frame handling
I0106 15:14:32.233874       8 log.go:172] (0xc002137540) (1) Data frame sent
I0106 15:14:32.234085       8 log.go:172] (0xc000ddb1e0) (0xc002137540) Stream removed, broadcasting: 1
I0106 15:14:32.234319       8 log.go:172] (0xc000ddb1e0) (0xc002a21220) Stream removed, broadcasting: 5
I0106 15:14:32.234352       8 log.go:172] (0xc000ddb1e0) (0xc002137540) Stream removed, broadcasting: 1
I0106 15:14:32.234363       8 log.go:172] (0xc000ddb1e0) (0xc000524000) Stream removed, broadcasting: 3
I0106 15:14:32.234370       8 log.go:172] (0xc000ddb1e0) (0xc002a21220) Stream removed, broadcasting: 5
I0106 15:14:32.234632       8 log.go:172] (0xc000ddb1e0) Go away received
Jan  6 15:14:32.235: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
Jan  6 15:14:32.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-8582" for this suite.
Jan  6 15:14:56.335: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Jan  6 15:14:56.505: INFO: namespace pod-network-test-8582 deletion completed in 24.26077612s

• [SLOW TEST:59.560 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:697
------------------------------
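For reference, the ExecWithOptions entries above, and the numbered stream frames after them, are the framework exec-ing into the hostexec helper pod over a multiplexed SPDY connection (the streams broadcast as 1, 3, and 5 in the log) and curl-ing each netserver pod's /hostName endpoint. A standalone sketch of the same call via client-go's remotecommand package, reusing the namespace, pod, container, and target URL from the log and assuming everything else:

    package main

    import (
        "bytes"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/kubernetes/scheme"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/tools/remotecommand"
    )

    func main() {
        // Same kubeconfig the suite logs (">>> kubeConfig: ...").
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        // The command from the log: fetch the target pod's hostname over
        // HTTP and drop blank lines.
        cmd := `curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'`

        req := clientset.CoreV1().RESTClient().Post().
            Resource("pods").
            Namespace("pod-network-test-8582").
            Name("host-test-container-pod").
            SubResource("exec").
            VersionedParams(&corev1.PodExecOptions{
                Container: "hostexec",
                Command:   []string{"/bin/sh", "-c", cmd},
                Stdout:    true,
                Stderr:    true,
            }, scheme.ParameterCodec)

        // NewSPDYExecutor negotiates the multiplexed streams seen in the
        // log ("Create stream" / "Stream added, broadcasting: 1/3/5").
        exec, err := remotecommand.NewSPDYExecutor(config, "POST", req.URL())
        if err != nil {
            panic(err)
        }
        var stdout, stderr bytes.Buffer
        if err := exec.Stream(remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr}); err != nil {
            panic(err)
        }
        fmt.Printf("hostName reported: %q\n", stdout.String())
    }

Stream() blocks until the remote shell exits; stdout matching the expected pod name for every netserver is what lets the test log "Found all expected endpoints".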
SSSSSSSS
Jan  6 15:14:56.506: INFO: Running AfterSuite actions on all nodes
Jan  6 15:14:56.506: INFO: Running AfterSuite actions on node 1
Jan  6 15:14:56.506: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:769

Ran 215 of 4412 Specs in 8306.960 seconds
FAIL! -- 214 Passed | 1 Failed | 0 Pending | 4197 Skipped
--- FAIL: TestE2E (8307.30s)
FAIL
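
As a consistency check on the totals: 214 Passed + 1 Failed = 215 Ran, and 215 Ran + 4197 Skipped = 4412 Specs; the single failure (the StatefulSet spec summarized above) is enough to turn the whole 8306.960-second run into FAIL.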