I0213 23:39:07.556751 9 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0213 23:39:07.558402 9 e2e.go:109] Starting e2e run "dcf9342b-1028-40e0-b4aa-5f90211f3e72" on Ginkgo node 1
{"msg":"Test Suite starting","total":280,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1581637145 - Will randomize all specs
Will run 280 of 4845 specs

Feb 13 23:39:07.697: INFO: >>> kubeConfig: /root/.kube/config
Feb 13 23:39:07.702: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Feb 13 23:39:07.729: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Feb 13 23:39:07.767: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Feb 13 23:39:07.767: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Feb 13 23:39:07.767: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Feb 13 23:39:07.780: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Feb 13 23:39:07.780: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Feb 13 23:39:07.780: INFO: e2e test version: v1.18.0-alpha.2.152+426b3538900329
Feb 13 23:39:07.782: INFO: kube-apiserver version: v1.17.0
Feb 13 23:39:07.782: INFO: >>> kubeConfig: /root/.kube/config
Feb 13 23:39:07.827: INFO: Cluster IP family: ipv4
SSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 13 23:39:07.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
Feb 13 23:39:08.053: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 13 23:39:08.073: INFO: Waiting up to 5m0s for pod "downwardapi-volume-742a4ce0-35e2-40e7-918a-824d450fb87e" in namespace "projected-4333" to be "success or failure"
Feb 13 23:39:08.140: INFO: Pod "downwardapi-volume-742a4ce0-35e2-40e7-918a-824d450fb87e": Phase="Pending", Reason="", readiness=false. Elapsed: 66.98057ms
Feb 13 23:39:10.146: INFO: Pod "downwardapi-volume-742a4ce0-35e2-40e7-918a-824d450fb87e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072262241s
Feb 13 23:39:12.153: INFO: Pod "downwardapi-volume-742a4ce0-35e2-40e7-918a-824d450fb87e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079907131s
Feb 13 23:39:14.197: INFO: Pod "downwardapi-volume-742a4ce0-35e2-40e7-918a-824d450fb87e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.123358237s
Feb 13 23:39:16.206: INFO: Pod "downwardapi-volume-742a4ce0-35e2-40e7-918a-824d450fb87e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.132679188s
Feb 13 23:39:18.217: INFO: Pod "downwardapi-volume-742a4ce0-35e2-40e7-918a-824d450fb87e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.143585814s
STEP: Saw pod success
Feb 13 23:39:18.217: INFO: Pod "downwardapi-volume-742a4ce0-35e2-40e7-918a-824d450fb87e" satisfied condition "success or failure"
Feb 13 23:39:18.223: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-742a4ce0-35e2-40e7-918a-824d450fb87e container client-container:
STEP: delete the pod
Feb 13 23:39:18.369: INFO: Waiting for pod downwardapi-volume-742a4ce0-35e2-40e7-918a-824d450fb87e to disappear
Feb 13 23:39:18.374: INFO: Pod downwardapi-volume-742a4ce0-35e2-40e7-918a-824d450fb87e no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:39:18.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4333" for this suite.
• [SLOW TEST:10.564 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":1,"skipped":5,"failed":0}
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 13 23:39:18.392: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 13 23:39:18.969: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 13 23:39:20.989: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717233959, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717233959, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717233959, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717233958, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 23:39:23.001: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717233959, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717233959, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717233959, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717233958, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 23:39:25.004: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717233959, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717233959, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717233959, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717233958, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 13 23:39:28.033: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Feb 13 23:39:28.069: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:39:28.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9450" for this suite.
STEP: Destroying namespace "webhook-9450-markers" for this suite.
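------------------------------
For reference, the webhook registration that the "Registering the crd webhook via the AdmissionRegistration API" step above performs has roughly the following shape when done through client-go. This is a minimal sketch, not the test's exact code: the service name and namespace are taken from the log, while the configuration name, webhook path, and CA bundle placeholder are illustrative assumptions.

// Sketch: register a ValidatingWebhookConfiguration that intercepts CRD
// creation, so a webhook server can deny it (as the test above observes).
package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	failPolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/crd" // assumption: the path served by the webhook pod

	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-crd-example"}, // illustrative name
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-crd.example.com",
			// Intercept CREATE of CustomResourceDefinitions only.
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{"apiextensions.k8s.io"},
					APIVersions: []string{"v1"},
					Resources:   []string{"customresourcedefinitions"},
				},
			}},
			// Service/namespace names follow the log; the serving cert and a
			// running webhook server are assumed to exist.
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service: &admissionregistrationv1.ServiceReference{
					Namespace: "webhook-9450",
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: []byte("<PEM CA bundle>"), // placeholder
			},
			FailurePolicy:           &failPolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1"},
		}},
	}
	// The context argument applies to client-go v1.18+.
	_, err = client.AdmissionregistrationV1().ValidatingWebhookConfigurations().
		Create(context.TODO(), cfg, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
}

With such a configuration in place, a kubectl apply of any CRD should be rejected with the webhook's denial message, which is what the test asserts before tearing the namespaces down below.
------------------------------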
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:9.925 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":280,"completed":2,"skipped":5,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial]
  validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 13 23:39:28.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 13 23:39:28.481: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 13 23:39:28.499: INFO: Waiting for terminating namespaces to be deleted...
Feb 13 23:39:28.503: INFO: Logging pods the kubelet thinks is on node jerma-node before test
Feb 13 23:39:28.522: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 13 23:39:28.522: INFO: Container kube-proxy ready: true, restart count 0
Feb 13 23:39:28.522: INFO: sample-webhook-deployment-5f65f8c764-7qp2p from webhook-9450 started at 2020-02-13 23:39:19 +0000 UTC (1 container statuses recorded)
Feb 13 23:39:28.522: INFO: Container sample-webhook ready: true, restart count 0
Feb 13 23:39:28.522: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 13 23:39:28.522: INFO: Container weave ready: true, restart count 1
Feb 13 23:39:28.522: INFO: Container weave-npc ready: true, restart count 0
Feb 13 23:39:28.522: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test
Feb 13 23:39:28.581: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 13 23:39:28.581: INFO: Container coredns ready: true, restart count 0
Feb 13 23:39:28.581: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 13 23:39:28.581: INFO: Container coredns ready: true, restart count 0
Feb 13 23:39:28.581: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 13 23:39:28.581: INFO: Container kube-controller-manager ready: true, restart count 7
Feb 13 23:39:28.581: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 13 23:39:28.581: INFO: Container kube-proxy ready: true, restart count 0
Feb 13 23:39:28.581: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 13 23:39:28.581: INFO: Container weave ready: true, restart count 0
Feb 13 23:39:28.581: INFO: Container weave-npc ready: true, restart count 0
Feb 13 23:39:28.581: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 13 23:39:28.581: INFO: Container kube-scheduler ready: true, restart count 11
Feb 13 23:39:28.581: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 13 23:39:28.581: INFO: Container kube-apiserver ready: true, restart count 1
Feb 13 23:39:28.581: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 13 23:39:28.581: INFO: Container etcd ready: true, restart count 1
[It] validates that NodeSelector is respected if not matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: Type = [Warning], Name = [restricted-pod.15f31aa008270bab], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:39:29.778: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-3629" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":280,"completed":3,"skipped":24,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info
  should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 13 23:39:29.798: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if Kubernetes master services is included in cluster-info [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating cluster-info
Feb 13 23:39:29.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Feb 13 23:39:32.904: INFO: stderr: ""
Feb 13 23:39:32.904: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:39:32.905: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5604" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":280,"completed":4,"skipped":25,"failed":0}
SSSSSS
------------------------------
[k8s.io] Probing container
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 13 23:39:32.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-974563b6-dbf3-4ac7-ae93-a53f43414008 in namespace container-probe-5050
Feb 13 23:39:41.199: INFO: Started pod liveness-974563b6-dbf3-4ac7-ae93-a53f43414008 in namespace container-probe-5050
STEP: checking the pod's current state and verifying that restartCount is present
Feb 13 23:39:41.204: INFO: Initial restart count of pod liveness-974563b6-dbf3-4ac7-ae93-a53f43414008 is 0
Feb 13 23:40:07.593: INFO: Restart count of pod container-probe-5050/liveness-974563b6-dbf3-4ac7-ae93-a53f43414008 is now 1 (26.389757634s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:40:07.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5050" for this suite.
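------------------------------
The restart observed above (restart count going from 0 to 1 after ~26s) is driven by an HTTP liveness probe against /healthz. A pod spec of roughly the following shape produces that behavior; this is a sketch, with the image, port, and probe thresholds as illustrative assumptions rather than the test's exact values.

// Sketch: a pod whose container is restarted when its /healthz endpoint
// starts failing, as in the liveness-probe test above.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func livenessPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-example"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "liveness",
				Image: "k8s.gcr.io/e2e-test-images/agnhost:2.8", // assumption
				Args:  []string{"liveness"},
				LivenessProbe: &corev1.Probe{
					// Note: client-go before v1.22 names this field Handler.
					ProbeHandler: corev1.ProbeHandler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/healthz",
							Port: intstr.FromInt(8080),
						},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       3,
					FailureThreshold:    1,
				},
			}},
			// Always restart, so failed probes bump status.restartCount.
			RestartPolicy: corev1.RestartPolicyAlways,
		},
	}
}

func main() { _ = livenessPod() }

The kubelet kills the container once the probe fails FailureThreshold times in a row, and the restart policy brings it back, which is exactly the restartCount transition the test waits for.
------------------------------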
• [SLOW TEST:34.748 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":5,"skipped":31,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 13 23:40:07.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on node default medium
Feb 13 23:40:07.799: INFO: Waiting up to 5m0s for pod "pod-f5904165-574c-4b19-aeb4-6fb734e27e02" in namespace "emptydir-2209" to be "success or failure"
Feb 13 23:40:07.937: INFO: Pod "pod-f5904165-574c-4b19-aeb4-6fb734e27e02": Phase="Pending", Reason="", readiness=false. Elapsed: 137.475995ms
Feb 13 23:40:09.950: INFO: Pod "pod-f5904165-574c-4b19-aeb4-6fb734e27e02": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150672255s
Feb 13 23:40:11.957: INFO: Pod "pod-f5904165-574c-4b19-aeb4-6fb734e27e02": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157521006s
Feb 13 23:40:13.965: INFO: Pod "pod-f5904165-574c-4b19-aeb4-6fb734e27e02": Phase="Pending", Reason="", readiness=false. Elapsed: 6.16586008s
Feb 13 23:40:15.973: INFO: Pod "pod-f5904165-574c-4b19-aeb4-6fb734e27e02": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17377548s
Feb 13 23:40:17.985: INFO: Pod "pod-f5904165-574c-4b19-aeb4-6fb734e27e02": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.185132179s
STEP: Saw pod success
Feb 13 23:40:17.985: INFO: Pod "pod-f5904165-574c-4b19-aeb4-6fb734e27e02" satisfied condition "success or failure"
Feb 13 23:40:17.987: INFO: Trying to get logs from node jerma-node pod pod-f5904165-574c-4b19-aeb4-6fb734e27e02 container test-container:
STEP: delete the pod
Feb 13 23:40:18.025: INFO: Waiting for pod pod-f5904165-574c-4b19-aeb4-6fb734e27e02 to disappear
Feb 13 23:40:18.048: INFO: Pod pod-f5904165-574c-4b19-aeb4-6fb734e27e02 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:40:18.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2209" for this suite.
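------------------------------
The "emptydir volume type on node default medium" pod above has roughly this shape: an emptyDir volume with no Medium set (so the node's default storage medium is used) mounted into a short-lived container that reports the mount's filesystem type and permissions. A minimal sketch; the image and the mounttest-style arguments are assumptions, not the test's exact values.

// Sketch: emptyDir on the default medium, mounted for a mode check.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyDirPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-mode-example"},
		Spec: corev1.PodSpec{
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				// Leaving Medium unset selects the node's default medium;
				// Medium: corev1.StorageMediumMemory would use tmpfs instead.
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{},
				},
			}},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "k8s.gcr.io/mounttest:1.0", // assumption
				Args: []string{ // assumed flags: print fs type and perms
					"--fs_type=/test-volume",
					"--file_perm=/test-volume",
				},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "test-volume",
					MountPath: "/test-volume",
				}},
			}},
			// The framework waits for the pod to reach "success or failure",
			// so the container runs once and exits.
			RestartPolicy: corev1.RestartPolicyNever,
		},
	}
}

func main() { _ = emptyDirPod() }
------------------------------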
• [SLOW TEST:10.388 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":6,"skipped":48,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts
  should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 13 23:40:18.060: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
Feb 13 23:40:18.769: INFO: created pod pod-service-account-defaultsa
Feb 13 23:40:18.769: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Feb 13 23:40:18.777: INFO: created pod pod-service-account-mountsa
Feb 13 23:40:18.777: INFO: pod pod-service-account-mountsa service account token volume mount: true
Feb 13 23:40:18.879: INFO: created pod pod-service-account-nomountsa
Feb 13 23:40:18.880: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Feb 13 23:40:18.892: INFO: created pod pod-service-account-defaultsa-mountspec
Feb 13 23:40:18.892: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Feb 13 23:40:19.079: INFO: created pod pod-service-account-mountsa-mountspec
Feb 13 23:40:19.079: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Feb 13 23:40:19.114: INFO: created pod pod-service-account-nomountsa-mountspec
Feb 13 23:40:19.114: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Feb 13 23:40:19.121: INFO: created pod pod-service-account-defaultsa-nomountspec
Feb 13 23:40:19.121: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Feb 13 23:40:19.305: INFO: created pod pod-service-account-mountsa-nomountspec
Feb 13 23:40:19.305: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Feb 13 23:40:19.595: INFO: created pod pod-service-account-nomountsa-nomountspec
Feb 13 23:40:19.595: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:40:19.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4870" for this suite.
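------------------------------
The nine pods above walk the automount matrix: a pod-level spec.automountServiceAccountToken, when set, overrides the ServiceAccount's automountServiceAccountToken; when the pod leaves it nil, the ServiceAccount's value (or the default, true) applies. That is why, for example, pod-service-account-mountsa-nomountspec ends up with mount: false even though its ServiceAccount opts in. A minimal sketch of that one combination, with the image as an illustrative assumption:

// Sketch: pod-level automount opt-out winning over the ServiceAccount's opt-in.
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func boolPtr(b bool) *bool { return &b }

func noTokenPod() (*corev1.ServiceAccount, *corev1.Pod) {
	sa := &corev1.ServiceAccount{
		ObjectMeta:                   metav1.ObjectMeta{Name: "mount-sa"},
		AutomountServiceAccountToken: boolPtr(true), // the SA says mount...
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-mountsa-nomountspec"},
		Spec: corev1.PodSpec{
			ServiceAccountName: sa.Name,
			// ...but the pod opts out, and the pod-level field wins:
			// no token volume is projected into the container.
			AutomountServiceAccountToken: boolPtr(false),
			Containers: []corev1.Container{{
				Name:  "token-test",
				Image: "k8s.gcr.io/pause:3.1", // assumption
			}},
		},
	}
	return sa, pod
}

func main() { _, _ = noTokenPod() }
------------------------------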
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":280,"completed":7,"skipped":63,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:40:19.688: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 13 23:40:22.263: INFO: Waiting up to 5m0s for pod "downwardapi-volume-13e9f0aa-2a28-4247-adb9-894df31ff048" in namespace "downward-api-9535" to be "success or failure" Feb 13 23:40:23.690: INFO: Pod "downwardapi-volume-13e9f0aa-2a28-4247-adb9-894df31ff048": Phase="Pending", Reason="", readiness=false. Elapsed: 1.426585s Feb 13 23:40:25.842: INFO: Pod "downwardapi-volume-13e9f0aa-2a28-4247-adb9-894df31ff048": Phase="Pending", Reason="", readiness=false. Elapsed: 3.578880889s Feb 13 23:40:27.851: INFO: Pod "downwardapi-volume-13e9f0aa-2a28-4247-adb9-894df31ff048": Phase="Pending", Reason="", readiness=false. Elapsed: 5.587823573s Feb 13 23:40:33.333: INFO: Pod "downwardapi-volume-13e9f0aa-2a28-4247-adb9-894df31ff048": Phase="Pending", Reason="", readiness=false. Elapsed: 11.069715297s Feb 13 23:40:36.598: INFO: Pod "downwardapi-volume-13e9f0aa-2a28-4247-adb9-894df31ff048": Phase="Pending", Reason="", readiness=false. Elapsed: 14.334756821s Feb 13 23:40:40.272: INFO: Pod "downwardapi-volume-13e9f0aa-2a28-4247-adb9-894df31ff048": Phase="Pending", Reason="", readiness=false. Elapsed: 18.008681704s Feb 13 23:40:42.778: INFO: Pod "downwardapi-volume-13e9f0aa-2a28-4247-adb9-894df31ff048": Phase="Pending", Reason="", readiness=false. Elapsed: 20.51427545s Feb 13 23:40:44.788: INFO: Pod "downwardapi-volume-13e9f0aa-2a28-4247-adb9-894df31ff048": Phase="Pending", Reason="", readiness=false. Elapsed: 22.525082672s Feb 13 23:40:46.796: INFO: Pod "downwardapi-volume-13e9f0aa-2a28-4247-adb9-894df31ff048": Phase="Pending", Reason="", readiness=false. Elapsed: 24.53225029s Feb 13 23:40:48.804: INFO: Pod "downwardapi-volume-13e9f0aa-2a28-4247-adb9-894df31ff048": Phase="Pending", Reason="", readiness=false. Elapsed: 26.540489393s Feb 13 23:40:50.808: INFO: Pod "downwardapi-volume-13e9f0aa-2a28-4247-adb9-894df31ff048": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 28.5443897s STEP: Saw pod success Feb 13 23:40:50.808: INFO: Pod "downwardapi-volume-13e9f0aa-2a28-4247-adb9-894df31ff048" satisfied condition "success or failure" Feb 13 23:40:50.811: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-13e9f0aa-2a28-4247-adb9-894df31ff048 container client-container: STEP: delete the pod Feb 13 23:40:50.857: INFO: Waiting for pod downwardapi-volume-13e9f0aa-2a28-4247-adb9-894df31ff048 to disappear Feb 13 23:40:50.862: INFO: Pod downwardapi-volume-13e9f0aa-2a28-4247-adb9-894df31ff048 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:40:50.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9535" for this suite. • [SLOW TEST:31.188 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36 should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":8,"skipped":78,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:40:50.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 13 23:40:51.978: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234051, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234051, loc:(*time.Location)(0x7e52ca0)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234051, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234051, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)} Feb 13 23:40:53.986: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234051, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234051, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234052, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234051, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 23:40:55.988: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234051, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234051, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234052, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234051, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 23:40:58.014: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234051, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234051, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234052, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234051, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 13 23:41:01.025: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery 
document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:41:01.034: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7568" for this suite. STEP: Destroying namespace "webhook-7568-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.294 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":280,"completed":9,"skipped":84,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:41:01.171: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for deployment deletion to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0213 23:41:03.248842 9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
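------------------------------
The "delete the deployment" step in this garbage-collector test hinges on the deleteOptions named in the spec title: with PropagationPolicy set to Orphan, the Deployment itself is removed but its ReplicaSet is orphaned rather than cascaded, and the test then watches to make sure the GC does not delete it anyway. A minimal client-go sketch of that delete; the deployment name is an illustrative assumption (the namespace matches the log).

// Sketch: delete a Deployment while orphaning its ReplicaSet.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	orphan := metav1.DeletePropagationOrphan
	// Only the Deployment object is deleted; its owned ReplicaSet loses its
	// owner reference and must survive garbage collection.
	err = client.AppsV1().Deployments("gc-238").Delete(context.TODO(),
		"example-deployment", // assumption: not the test's actual name
		metav1.DeleteOptions{PropagationPolicy: &orphan})
	if err != nil {
		panic(err)
	}
}
------------------------------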
Feb 13 23:41:03.248: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:41:03.248: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-238" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":280,"completed":10,"skipped":85,"failed":0}
SSSSSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 13 23:41:03.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-ad6de2ba-169e-44b7-bcb1-3ae8eea34175
STEP: Creating a pod to test consume secrets
Feb 13 23:41:04.782: INFO: Waiting up to 5m0s for pod "pod-secrets-fbb6e9f0-f033-4937-bbcb-9f7998050fee" in namespace "secrets-9953" to be "success or failure"
Feb 13 23:41:04.894: INFO: Pod "pod-secrets-fbb6e9f0-f033-4937-bbcb-9f7998050fee": Phase="Pending", Reason="", readiness=false. Elapsed: 111.833279ms
Feb 13 23:41:07.249: INFO: Pod "pod-secrets-fbb6e9f0-f033-4937-bbcb-9f7998050fee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.466471188s
Feb 13 23:41:09.707: INFO: Pod "pod-secrets-fbb6e9f0-f033-4937-bbcb-9f7998050fee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.924001396s
Feb 13 23:41:12.345: INFO: Pod "pod-secrets-fbb6e9f0-f033-4937-bbcb-9f7998050fee": Phase="Pending", Reason="", readiness=false. Elapsed: 7.562514616s
Feb 13 23:41:14.801: INFO: Pod "pod-secrets-fbb6e9f0-f033-4937-bbcb-9f7998050fee": Phase="Pending", Reason="", readiness=false. Elapsed: 10.018395149s
Feb 13 23:41:16.806: INFO: Pod "pod-secrets-fbb6e9f0-f033-4937-bbcb-9f7998050fee": Phase="Pending", Reason="", readiness=false. Elapsed: 12.023117411s
Feb 13 23:41:18.813: INFO: Pod "pod-secrets-fbb6e9f0-f033-4937-bbcb-9f7998050fee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.030117735s
STEP: Saw pod success
Feb 13 23:41:18.813: INFO: Pod "pod-secrets-fbb6e9f0-f033-4937-bbcb-9f7998050fee" satisfied condition "success or failure"
Feb 13 23:41:18.817: INFO: Trying to get logs from node jerma-node pod pod-secrets-fbb6e9f0-f033-4937-bbcb-9f7998050fee container secret-volume-test:
STEP: delete the pod
Feb 13 23:41:19.128: INFO: Waiting for pod pod-secrets-fbb6e9f0-f033-4937-bbcb-9f7998050fee to disappear
Feb 13 23:41:19.133: INFO: Pod pod-secrets-fbb6e9f0-f033-4937-bbcb-9f7998050fee no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:41:19.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9953" for this suite.
• [SLOW TEST:15.951 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":11,"skipped":94,"failed":0}
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 13 23:41:19.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:41:26.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5052" for this suite.
• [SLOW TEST:7.152 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":280,"completed":12,"skipped":94,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Probing container
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 13 23:41:26.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:42:26.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9993" for this suite.
• [SLOW TEST:60.148 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":280,"completed":13,"skipped":103,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 13 23:42:26.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Feb 13 23:42:39.520: INFO: Successfully updated pod "adopt-release-5kgc7"
STEP: Checking that the Job readopts the Pod
Feb 13 23:42:39.521: INFO: Waiting up to 15m0s for pod "adopt-release-5kgc7" in namespace "job-3786" to be "adopted"
Feb 13 23:42:39.529: INFO: Pod "adopt-release-5kgc7": Phase="Running", Reason="", readiness=true. Elapsed: 7.480965ms
Feb 13 23:42:41.536: INFO: Pod "adopt-release-5kgc7": Phase="Running", Reason="", readiness=true. Elapsed: 2.014410507s
Feb 13 23:42:41.536: INFO: Pod "adopt-release-5kgc7" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Feb 13 23:42:42.050: INFO: Successfully updated pod "adopt-release-5kgc7"
STEP: Checking that the Job releases the Pod
Feb 13 23:42:42.050: INFO: Waiting up to 15m0s for pod "adopt-release-5kgc7" in namespace "job-3786" to be "released"
Feb 13 23:42:42.072: INFO: Pod "adopt-release-5kgc7": Phase="Running", Reason="", readiness=true. Elapsed: 21.81437ms
Feb 13 23:42:44.077: INFO: Pod "adopt-release-5kgc7": Phase="Running", Reason="", readiness=true. Elapsed: 2.027149624s
Feb 13 23:42:44.077: INFO: Pod "adopt-release-5kgc7" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:42:44.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3786" for this suite.
• [SLOW TEST:17.572 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":280,"completed":14,"skipped":122,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Variable Expansion
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 13 23:42:44.087: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test substitution in container's args
Feb 13 23:42:44.539: INFO: Waiting up to 5m0s for pod "var-expansion-7a924dc8-325b-4018-bd5e-f208ae0cb780" in namespace "var-expansion-2791" to be "success or failure"
Feb 13 23:42:44.547: INFO: Pod "var-expansion-7a924dc8-325b-4018-bd5e-f208ae0cb780": Phase="Pending", Reason="", readiness=false. Elapsed: 7.656143ms
Feb 13 23:42:46.565: INFO: Pod "var-expansion-7a924dc8-325b-4018-bd5e-f208ae0cb780": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02565214s
Feb 13 23:42:48.575: INFO: Pod "var-expansion-7a924dc8-325b-4018-bd5e-f208ae0cb780": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036638781s
Feb 13 23:42:50.585: INFO: Pod "var-expansion-7a924dc8-325b-4018-bd5e-f208ae0cb780": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046471992s
Feb 13 23:42:52.599: INFO: Pod "var-expansion-7a924dc8-325b-4018-bd5e-f208ae0cb780": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060212351s
Feb 13 23:42:54.609: INFO: Pod "var-expansion-7a924dc8-325b-4018-bd5e-f208ae0cb780": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07034036s
STEP: Saw pod success
Feb 13 23:42:54.609: INFO: Pod "var-expansion-7a924dc8-325b-4018-bd5e-f208ae0cb780" satisfied condition "success or failure"
Feb 13 23:42:54.615: INFO: Trying to get logs from node jerma-node pod var-expansion-7a924dc8-325b-4018-bd5e-f208ae0cb780 container dapi-container:
STEP: delete the pod
Feb 13 23:42:54.666: INFO: Waiting for pod var-expansion-7a924dc8-325b-4018-bd5e-f208ae0cb780 to disappear
Feb 13 23:42:54.716: INFO: Pod var-expansion-7a924dc8-325b-4018-bd5e-f208ae0cb780 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:42:54.716: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-2791" for this suite.
• [SLOW TEST:10.647 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":280,"completed":15,"skipped":130,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 13 23:42:54.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-4843
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-4843
I0213 23:42:54.908054 9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-4843, replica count: 2
I0213 23:42:57.959254 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0213 23:43:00.959985 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0213 23:43:03.961252 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I0213 23:43:06.961932 9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Feb 13 23:43:06.962: INFO: Creating new exec pod
Feb 13 23:43:15.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4843 execpoddmd27 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb 13 23:43:16.378: INFO: stderr: "I0213 23:43:16.193112 60 log.go:172] (0xc000912580) (0xc000900000) Create stream\nI0213 23:43:16.193234 60 log.go:172] (0xc000912580) (0xc000900000) Stream added, broadcasting: 1\nI0213 23:43:16.196807 60 log.go:172] (0xc000912580) Reply frame received for 1\nI0213 23:43:16.196843 60 log.go:172] (0xc000912580) (0xc00042a000) Create stream\nI0213 23:43:16.196852 60 log.go:172] (0xc000912580) (0xc00042a000) Stream added, broadcasting: 3\nI0213 23:43:16.198681 60 log.go:172] (0xc000912580) Reply frame received for 3\nI0213 23:43:16.198829 60 log.go:172] (0xc000912580) (0xc0009000a0) Create stream\nI0213 23:43:16.198855 60 log.go:172] (0xc000912580) (0xc0009000a0) Stream added, broadcasting: 5\nI0213 23:43:16.202014 60 log.go:172] (0xc000912580) Reply frame received for 5\nI0213 23:43:16.260774 60 log.go:172] (0xc000912580) Data frame received for 5\nI0213 23:43:16.260806 60 log.go:172] (0xc0009000a0) (5) Data frame handling\nI0213 23:43:16.260823 60 log.go:172] (0xc0009000a0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0213 23:43:16.267540 60 log.go:172] (0xc000912580) Data frame received for 5\nI0213 23:43:16.267587 60 log.go:172] (0xc0009000a0) (5) Data frame handling\nI0213 23:43:16.267622 60 log.go:172] (0xc0009000a0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0213 23:43:16.367312 60 log.go:172] (0xc000912580) Data frame received for 1\nI0213 23:43:16.367896 60 log.go:172] (0xc000912580) (0xc00042a000) Stream removed, broadcasting: 3\nI0213 23:43:16.368002 60 log.go:172] (0xc000900000) (1) Data frame handling\nI0213 23:43:16.368030 60 log.go:172] (0xc000900000) (1) Data frame sent\nI0213 23:43:16.368092 60 log.go:172] (0xc000912580) (0xc0009000a0) Stream removed, broadcasting: 5\nI0213 23:43:16.368123 60 log.go:172] (0xc000912580) (0xc000900000) Stream removed, broadcasting: 1\nI0213 23:43:16.368148 60 log.go:172] (0xc000912580) Go away received\nI0213 23:43:16.369361 60 log.go:172] (0xc000912580) (0xc000900000) Stream removed, broadcasting: 1\nI0213 23:43:16.369409 60 log.go:172] (0xc000912580) (0xc00042a000) Stream removed, broadcasting: 3\nI0213 23:43:16.369427 60 log.go:172] (0xc000912580) (0xc0009000a0) Stream removed, broadcasting: 5\n"
Feb 13 23:43:16.378: INFO: stdout: ""
Feb 13 23:43:16.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4843 execpoddmd27 -- /bin/sh -x -c nc -zv -t -w 2 10.96.163.3 80'
Feb 13 23:43:16.742: INFO: stderr: "I0213 23:43:16.541521 81 log.go:172] (0xc000b8cb00) (0xc000aae320) Create stream\nI0213 23:43:16.541686 81 log.go:172] (0xc000b8cb00) (0xc000aae320) Stream added, broadcasting: 1\nI0213 23:43:16.549127 81 log.go:172] (0xc000b8cb00) Reply frame received for 1\nI0213 23:43:16.549224 81 log.go:172] (0xc000b8cb00) (0xc000b760a0) Create stream\nI0213 23:43:16.549241 81 log.go:172] (0xc000b8cb00) (0xc000b760a0) Stream added, broadcasting: 3\nI0213 23:43:16.550590 81 log.go:172] (0xc000b8cb00) Reply frame received for 3\nI0213 23:43:16.550640 81 log.go:172] (0xc000b8cb00) (0xc000a30280) Create stream\nI0213 23:43:16.550674 81 log.go:172] (0xc000b8cb00) (0xc000a30280) Stream added, broadcasting: 5\nI0213 23:43:16.551855 81 log.go:172] (0xc000b8cb00) Reply frame received for 5\nI0213 23:43:16.629606 81 log.go:172] (0xc000b8cb00) Data frame received for 5\nI0213 23:43:16.629713 81 log.go:172] (0xc000a30280) (5) Data frame handling\nI0213 23:43:16.629733 81 log.go:172] (0xc000a30280) (5) Data frame sent\nI0213 23:43:16.629742 81 log.go:172] (0xc000b8cb00) Data frame received for 5\nI0213 23:43:16.629751 81 log.go:172] (0xc000a30280) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.163.3 80\nConnection to 10.96.163.3 80 port [tcp/http] succeeded!\nI0213 23:43:16.629841 81 log.go:172] (0xc000a30280) (5) Data frame sent\nI0213 23:43:16.731995 81 log.go:172] (0xc000b8cb00) Data frame received for 1\nI0213 23:43:16.732086 81 log.go:172] (0xc000aae320) (1) Data frame handling\nI0213 23:43:16.732109 81 log.go:172] (0xc000b8cb00) (0xc000b760a0) Stream removed, broadcasting: 3\nI0213 23:43:16.732153 81 log.go:172] (0xc000b8cb00) (0xc000a30280) Stream removed, broadcasting: 5\nI0213 23:43:16.732160 81 log.go:172] (0xc000aae320) (1) Data frame sent\nI0213 23:43:16.732171 81 log.go:172] (0xc000b8cb00) (0xc000aae320) Stream removed, broadcasting: 1\nI0213 23:43:16.732207 81 log.go:172] (0xc000b8cb00) Go away received\nI0213 23:43:16.732907 81 log.go:172] (0xc000b8cb00) (0xc000aae320) Stream removed, broadcasting: 1\nI0213 23:43:16.732923 81 log.go:172] (0xc000b8cb00) (0xc000b760a0) Stream removed, broadcasting: 3\nI0213 23:43:16.732932 81 log.go:172] (0xc000b8cb00) (0xc000a30280) Stream removed, broadcasting: 5\n"
Feb 13 23:43:16.742: INFO: stdout: ""
Feb 13 23:43:16.742: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:43:16.802: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4843" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
• [SLOW TEST:22.087 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":280,"completed":16,"skipped":143,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 13 23:43:16.823: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 13 23:43:18.091: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 13 23:43:20.108: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 23:43:22.117: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 23:43:24.114: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 13 23:43:26.439: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0,
ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 23:43:28.116: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 23:43:30.121: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 23:43:32.116: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234198, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 13 23:43:35.145: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Setting 
timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:43:47.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6164" for this suite. STEP: Destroying namespace "webhook-6164-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:30.883 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":280,"completed":17,"skipped":164,"failed":0} SSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:43:47.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test hostPath mode Feb 13 23:43:47.791: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-9663" to be "success or failure" Feb 13 23:43:47.797: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 5.758606ms Feb 13 23:43:49.808: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017414011s Feb 13 23:43:51.845: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05355257s Feb 13 23:43:53.855: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06431085s Feb 13 23:43:55.870: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078474446s Feb 13 23:43:57.880: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. 
Elapsed: 10.088676155s Feb 13 23:43:59.895: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.104419039s STEP: Saw pod success Feb 13 23:43:59.896: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" Feb 13 23:43:59.902: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: STEP: delete the pod Feb 13 23:44:00.613: INFO: Waiting for pod pod-host-path-test to disappear Feb 13 23:44:00.623: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:44:00.623: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-9663" for this suite. • [SLOW TEST:13.058 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":18,"skipped":169,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:44:00.768: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 13 23:44:01.135: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"3d0f07d1-28e6-48b9-962a-768b282af9e9", Controller:(*bool)(0xc0010cac02), BlockOwnerDeletion:(*bool)(0xc0010cac03)}} Feb 13 23:44:01.146: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"75ac0f83-f95d-4901-a10c-5894b655d38f", Controller:(*bool)(0xc0010cad8a), BlockOwnerDeletion:(*bool)(0xc0010cad8b)}} Feb 13 23:44:01.165: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"e3dc6f14-4678-4eb8-b55b-d4218fec07b1", Controller:(*bool)(0xc0010caf1a), BlockOwnerDeletion:(*bool)(0xc0010caf1b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:44:06.204: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7228" for this suite. 
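The three ownerReferences dumped above form a deliberate cycle: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2; the point of the test is that the garbage collector tolerates such a loop instead of deadlocking. A rough standalone reproduction with kubectl follows, as a sketch only (not the suite's code): the namespace, pod names, and image are illustrative, and each uid must be read back from the live object before it can be referenced.

    kubectl create namespace gc-demo
    for p in pod1 pod2 pod3; do
      kubectl -n gc-demo run "$p" --image=k8s.gcr.io/pause:3.1 --restart=Never
    done
    # close one edge of the cycle: make pod3 the owner of pod1
    uid3=$(kubectl -n gc-demo get pod pod3 -o jsonpath='{.metadata.uid}')
    kubectl -n gc-demo patch pod pod1 --type=merge -p "{\"metadata\":{\"ownerReferences\":[{\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"name\":\"pod3\",\"uid\":\"$uid3\",\"controller\":true,\"blockOwnerDeletion\":true}]}}"
    # repeat analogously for pod2 (owner pod1) and pod3 (owner pod2)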
• [SLOW TEST:5.462 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":280,"completed":19,"skipped":186,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:44:06.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 13 23:44:06.444: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d54978e0-2f08-44c2-b782-f31344875017" in namespace "projected-1627" to be "success or failure" Feb 13 23:44:06.528: INFO: Pod "downwardapi-volume-d54978e0-2f08-44c2-b782-f31344875017": Phase="Pending", Reason="", readiness=false. Elapsed: 84.338611ms Feb 13 23:44:08.539: INFO: Pod "downwardapi-volume-d54978e0-2f08-44c2-b782-f31344875017": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095186465s Feb 13 23:44:10.892: INFO: Pod "downwardapi-volume-d54978e0-2f08-44c2-b782-f31344875017": Phase="Pending", Reason="", readiness=false. Elapsed: 4.447990655s Feb 13 23:44:12.900: INFO: Pod "downwardapi-volume-d54978e0-2f08-44c2-b782-f31344875017": Phase="Pending", Reason="", readiness=false. Elapsed: 6.455729472s Feb 13 23:44:14.907: INFO: Pod "downwardapi-volume-d54978e0-2f08-44c2-b782-f31344875017": Phase="Pending", Reason="", readiness=false. Elapsed: 8.463413967s Feb 13 23:44:16.926: INFO: Pod "downwardapi-volume-d54978e0-2f08-44c2-b782-f31344875017": Phase="Pending", Reason="", readiness=false. Elapsed: 10.481727666s Feb 13 23:44:18.935: INFO: Pod "downwardapi-volume-d54978e0-2f08-44c2-b782-f31344875017": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 12.491019448s STEP: Saw pod success Feb 13 23:44:18.936: INFO: Pod "downwardapi-volume-d54978e0-2f08-44c2-b782-f31344875017" satisfied condition "success or failure" Feb 13 23:44:18.946: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d54978e0-2f08-44c2-b782-f31344875017 container client-container: STEP: delete the pod Feb 13 23:44:19.354: INFO: Waiting for pod downwardapi-volume-d54978e0-2f08-44c2-b782-f31344875017 to disappear Feb 13 23:44:19.366: INFO: Pod downwardapi-volume-d54978e0-2f08-44c2-b782-f31344875017 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:44:19.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1627" for this suite. • [SLOW TEST:13.156 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":20,"skipped":187,"failed":0} S ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:44:19.387: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: executing a command with run --rm and attach with stdin Feb 13 23:44:19.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-7722 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Feb 13 23:44:27.898: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0213 23:44:26.889893 101 log.go:172] (0xc0000f9550) (0xc000691ae0) Create stream\nI0213 23:44:26.890077 101 log.go:172] (0xc0000f9550) (0xc000691ae0) Stream added, broadcasting: 1\nI0213 23:44:26.899614 101 log.go:172] (0xc0000f9550) Reply frame received for 1\nI0213 23:44:26.899685 101 log.go:172] (0xc0000f9550) (0xc00072e000) Create stream\nI0213 23:44:26.899704 101 log.go:172] (0xc0000f9550) (0xc00072e000) Stream added, broadcasting: 3\nI0213 23:44:26.902536 101 log.go:172] (0xc0000f9550) Reply frame received for 3\nI0213 23:44:26.902662 101 log.go:172] (0xc0000f9550) (0xc000742000) Create stream\nI0213 23:44:26.902682 101 log.go:172] (0xc0000f9550) (0xc000742000) Stream added, broadcasting: 5\nI0213 23:44:26.904693 101 log.go:172] (0xc0000f9550) Reply frame received for 5\nI0213 23:44:26.904719 101 log.go:172] (0xc0000f9550) (0xc000691b80) Create stream\nI0213 23:44:26.904726 101 log.go:172] (0xc0000f9550) (0xc000691b80) Stream added, broadcasting: 7\nI0213 23:44:26.907593 101 log.go:172] (0xc0000f9550) Reply frame received for 7\nI0213 23:44:26.907902 101 log.go:172] (0xc00072e000) (3) Writing data frame\nI0213 23:44:26.908097 101 log.go:172] (0xc00072e000) (3) Writing data frame\nI0213 23:44:26.920732 101 log.go:172] (0xc0000f9550) Data frame received for 5\nI0213 23:44:26.920751 101 log.go:172] (0xc000742000) (5) Data frame handling\nI0213 23:44:26.920775 101 log.go:172] (0xc000742000) (5) Data frame sent\nI0213 23:44:26.934824 101 log.go:172] (0xc0000f9550) Data frame received for 5\nI0213 23:44:26.934864 101 log.go:172] (0xc000742000) (5) Data frame handling\nI0213 23:44:26.934898 101 log.go:172] (0xc000742000) (5) Data frame sent\nI0213 23:44:27.801365 101 log.go:172] (0xc0000f9550) Data frame received for 1\nI0213 23:44:27.801598 101 log.go:172] (0xc000691ae0) (1) Data frame handling\nI0213 23:44:27.801693 101 log.go:172] (0xc000691ae0) (1) Data frame sent\nI0213 23:44:27.805507 101 log.go:172] (0xc0000f9550) (0xc000691ae0) Stream removed, broadcasting: 1\nI0213 23:44:27.806038 101 log.go:172] (0xc0000f9550) (0xc00072e000) Stream removed, broadcasting: 3\nI0213 23:44:27.806751 101 log.go:172] (0xc0000f9550) (0xc000742000) Stream removed, broadcasting: 5\nI0213 23:44:27.806859 101 log.go:172] (0xc0000f9550) (0xc000691b80) Stream removed, broadcasting: 7\nI0213 23:44:27.806908 101 log.go:172] (0xc0000f9550) (0xc000691ae0) Stream removed, broadcasting: 1\nI0213 23:44:27.806960 101 log.go:172] (0xc0000f9550) (0xc00072e000) Stream removed, broadcasting: 3\nI0213 23:44:27.807033 101 log.go:172] (0xc0000f9550) (0xc000742000) Stream removed, broadcasting: 5\nI0213 23:44:27.807048 101 log.go:172] (0xc0000f9550) (0xc000691b80) Stream removed, broadcasting: 7\nI0213 23:44:27.807135 101 log.go:172] (0xc0000f9550) Go away received\n" Feb 13 23:44:27.898: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:44:29.912: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7722" for this suite. 
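The stderr above records the deprecation of the job/v1 generator this test still exercises; the "create, attach, consume stdin, delete on exit" pattern itself survives without it. A sketch of the two modern routes, with the name and command purely illustrative:

    # one-shot interactive pod, removed again when the command exits
    kubectl run e2e-test-rm --rm -i --restart=Never \
      --image=docker.io/library/busybox:1.29 -- sh -c 'cat && echo "stdin closed"'
    # or a real Job object; kubectl create job has no stdin/attach, so it suits non-interactive work
    kubectl create job e2e-test-rm --image=docker.io/library/busybox:1.29 -- sh -c 'echo done'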
• [SLOW TEST:10.542 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1946 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":280,"completed":21,"skipped":188,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:44:29.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should patch a Namespace [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a Namespace STEP: patching the Namespace STEP: get the Namespace and ensuring it has the label [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:44:30.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3719" for this suite. STEP: Destroying namespace "nspatchtest-effd41bd-9d49-4d70-9c86-a77d4594ffc0-3819" for this suite. •{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should patch a Namespace [Conformance]","total":280,"completed":22,"skipped":211,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:44:30.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating projection with secret that has name projected-secret-test-1834d64d-015c-4bac-9fd8-a189999ed42a STEP: Creating a pod to test consume secrets Feb 13 23:44:30.313: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-6d8f5a69-a903-4bb2-b49c-17f2f1a132b7" in namespace "projected-5215" to be "success or failure" Feb 13 23:44:30.338: INFO: Pod "pod-projected-secrets-6d8f5a69-a903-4bb2-b49c-17f2f1a132b7": Phase="Pending", Reason="", readiness=false. Elapsed: 24.498237ms Feb 13 23:44:34.776: INFO: Pod "pod-projected-secrets-6d8f5a69-a903-4bb2-b49c-17f2f1a132b7": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.462127747s Feb 13 23:44:36.785: INFO: Pod "pod-projected-secrets-6d8f5a69-a903-4bb2-b49c-17f2f1a132b7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.471597801s Feb 13 23:44:38.799: INFO: Pod "pod-projected-secrets-6d8f5a69-a903-4bb2-b49c-17f2f1a132b7": Phase="Pending", Reason="", readiness=false. Elapsed: 8.485532513s Feb 13 23:44:40.809: INFO: Pod "pod-projected-secrets-6d8f5a69-a903-4bb2-b49c-17f2f1a132b7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.495445465s STEP: Saw pod success Feb 13 23:44:40.809: INFO: Pod "pod-projected-secrets-6d8f5a69-a903-4bb2-b49c-17f2f1a132b7" satisfied condition "success or failure" Feb 13 23:44:40.814: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-6d8f5a69-a903-4bb2-b49c-17f2f1a132b7 container projected-secret-volume-test: STEP: delete the pod Feb 13 23:44:40.858: INFO: Waiting for pod pod-projected-secrets-6d8f5a69-a903-4bb2-b49c-17f2f1a132b7 to disappear Feb 13 23:44:41.014: INFO: Pod pod-projected-secrets-6d8f5a69-a903-4bb2-b49c-17f2f1a132b7 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:44:41.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5215" for this suite. • [SLOW TEST:10.944 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":23,"skipped":223,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:44:41.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 13 23:44:41.768: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 13 23:44:43.816: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234281, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234281, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234281, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234281, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 23:44:45.917: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234281, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234281, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234281, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234281, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 13 23:44:47.826: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234281, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234281, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234281, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717234281, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 13 23:44:50.865: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:44:51.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7730" for this suite. STEP: Destroying namespace "webhook-7730-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:10.083 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":280,"completed":24,"skipped":236,"failed":0} SSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:44:51.186: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:45:02.344: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4773" for this suite. • [SLOW TEST:11.201 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":280,"completed":25,"skipped":240,"failed":0} SS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:45:02.388: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:45:13.662: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3735" for this suite. • [SLOW TEST:11.287 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":280,"completed":26,"skipped":242,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:45:13.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. 
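The check that follows polls until every schedulable node reports an available daemon pod; the test later deletes one pod and waits for the controller to revive it. A minimal equivalent outside the suite, with the name, image, and pod placeholder all illustrative:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set-demo
    spec:
      selector:
        matchLabels:
          app: daemon-set-demo
      template:
        metadata:
          labels:
            app: daemon-set-demo
        spec:
          containers:
          - name: app
            image: k8s.gcr.io/pause:3.1
    EOF
    kubectl get pods -l app=daemon-set-demo -o wide   # expect one pod per schedulable node
    kubectl delete pod <any-daemon-set-demo-pod>      # the controller recreates it on that node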
Feb 13 23:45:13.852: INFO: Number of nodes with available pods: 0 Feb 13 23:45:13.853: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:14.872: INFO: Number of nodes with available pods: 0 Feb 13 23:45:14.872: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:15.868: INFO: Number of nodes with available pods: 0 Feb 13 23:45:15.868: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:16.954: INFO: Number of nodes with available pods: 0 Feb 13 23:45:16.954: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:17.879: INFO: Number of nodes with available pods: 0 Feb 13 23:45:17.880: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:20.741: INFO: Number of nodes with available pods: 0 Feb 13 23:45:20.741: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:21.137: INFO: Number of nodes with available pods: 0 Feb 13 23:45:21.137: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:22.720: INFO: Number of nodes with available pods: 0 Feb 13 23:45:22.720: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:22.988: INFO: Number of nodes with available pods: 0 Feb 13 23:45:22.988: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:23.880: INFO: Number of nodes with available pods: 2 Feb 13 23:45:23.880: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. Feb 13 23:45:23.991: INFO: Number of nodes with available pods: 1 Feb 13 23:45:23.991: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:25.010: INFO: Number of nodes with available pods: 1 Feb 13 23:45:25.011: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:26.001: INFO: Number of nodes with available pods: 1 Feb 13 23:45:26.001: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:26.999: INFO: Number of nodes with available pods: 1 Feb 13 23:45:26.999: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:28.002: INFO: Number of nodes with available pods: 1 Feb 13 23:45:28.002: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:29.016: INFO: Number of nodes with available pods: 1 Feb 13 23:45:29.016: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:30.008: INFO: Number of nodes with available pods: 1 Feb 13 23:45:30.008: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:31.010: INFO: Number of nodes with available pods: 1 Feb 13 23:45:31.011: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:32.008: INFO: Number of nodes with available pods: 1 Feb 13 23:45:32.008: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:33.005: INFO: Number of nodes with available pods: 1 Feb 13 23:45:33.005: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:34.001: INFO: Number of nodes with available pods: 1 Feb 13 23:45:34.001: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:35.004: INFO: Number of nodes with available pods: 1 Feb 13 23:45:35.004: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:36.003: INFO: Number of nodes with available pods: 1 Feb 13 23:45:36.003: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:37.011: INFO: Number of nodes with available pods: 1 Feb 13 23:45:37.012: INFO: Node jerma-node is running more than one daemon pod Feb 13 
23:45:38.005: INFO: Number of nodes with available pods: 1 Feb 13 23:45:38.005: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:45:39.007: INFO: Number of nodes with available pods: 2 Feb 13 23:45:39.008: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-9227, will wait for the garbage collector to delete the pods Feb 13 23:45:39.104: INFO: Deleting DaemonSet.extensions daemon-set took: 37.471864ms Feb 13 23:45:39.405: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.80447ms Feb 13 23:45:53.211: INFO: Number of nodes with available pods: 0 Feb 13 23:45:53.211: INFO: Number of running nodes: 0, number of available pods: 0 Feb 13 23:45:53.218: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-9227/daemonsets","resourceVersion":"8262314"},"items":null} Feb 13 23:45:53.222: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-9227/pods","resourceVersion":"8262314"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:45:53.233: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-9227" for this suite. • [SLOW TEST:39.582 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":280,"completed":27,"skipped":246,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:45:53.260: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-5787 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating stateful set ss in namespace statefulset-5787 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-5787 Feb 13 23:45:53.387: INFO: Found 0 stateful pods, waiting for 1 Feb 13 23:46:03.397: INFO: Waiting for 
pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Feb 13 23:46:03.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 13 23:46:03.950: INFO: stderr: "I0213 23:46:03.694114 127 log.go:172] (0xc000114b00) (0xc00093a0a0) Create stream\nI0213 23:46:03.694284 127 log.go:172] (0xc000114b00) (0xc00093a0a0) Stream added, broadcasting: 1\nI0213 23:46:03.699320 127 log.go:172] (0xc000114b00) Reply frame received for 1\nI0213 23:46:03.699371 127 log.go:172] (0xc000114b00) (0xc0006bbc20) Create stream\nI0213 23:46:03.699389 127 log.go:172] (0xc000114b00) (0xc0006bbc20) Stream added, broadcasting: 3\nI0213 23:46:03.700710 127 log.go:172] (0xc000114b00) Reply frame received for 3\nI0213 23:46:03.700743 127 log.go:172] (0xc000114b00) (0xc0006bbe00) Create stream\nI0213 23:46:03.700757 127 log.go:172] (0xc000114b00) (0xc0006bbe00) Stream added, broadcasting: 5\nI0213 23:46:03.702682 127 log.go:172] (0xc000114b00) Reply frame received for 5\nI0213 23:46:03.780483 127 log.go:172] (0xc000114b00) Data frame received for 5\nI0213 23:46:03.780740 127 log.go:172] (0xc0006bbe00) (5) Data frame handling\nI0213 23:46:03.780856 127 log.go:172] (0xc0006bbe00) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0213 23:46:03.807889 127 log.go:172] (0xc000114b00) Data frame received for 3\nI0213 23:46:03.807955 127 log.go:172] (0xc0006bbc20) (3) Data frame handling\nI0213 23:46:03.807994 127 log.go:172] (0xc0006bbc20) (3) Data frame sent\nI0213 23:46:03.934390 127 log.go:172] (0xc000114b00) (0xc0006bbc20) Stream removed, broadcasting: 3\nI0213 23:46:03.935026 127 log.go:172] (0xc000114b00) Data frame received for 1\nI0213 23:46:03.935193 127 log.go:172] (0xc00093a0a0) (1) Data frame handling\nI0213 23:46:03.935242 127 log.go:172] (0xc00093a0a0) (1) Data frame sent\nI0213 23:46:03.935274 127 log.go:172] (0xc000114b00) (0xc0006bbe00) Stream removed, broadcasting: 5\nI0213 23:46:03.935479 127 log.go:172] (0xc000114b00) (0xc00093a0a0) Stream removed, broadcasting: 1\nI0213 23:46:03.935693 127 log.go:172] (0xc000114b00) Go away received\nI0213 23:46:03.937257 127 log.go:172] (0xc000114b00) (0xc00093a0a0) Stream removed, broadcasting: 1\nI0213 23:46:03.937316 127 log.go:172] (0xc000114b00) (0xc0006bbc20) Stream removed, broadcasting: 3\nI0213 23:46:03.937344 127 log.go:172] (0xc000114b00) (0xc0006bbe00) Stream removed, broadcasting: 5\n" Feb 13 23:46:03.950: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 13 23:46:03.950: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 13 23:46:03.958: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true Feb 13 23:46:13.968: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 13 23:46:13.968: INFO: Waiting for statefulset status.replicas updated to 0 Feb 13 23:46:14.005: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 23:46:14.006: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 
+0000 UTC 2020-02-13 23:46:04 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC }] Feb 13 23:46:14.006: INFO: Feb 13 23:46:14.006: INFO: StatefulSet ss has not reached scale 3, at 1 Feb 13 23:46:15.138: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.975586649s Feb 13 23:46:16.448: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.843090911s Feb 13 23:46:17.455: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.533416111s Feb 13 23:46:18.465: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.525916932s Feb 13 23:46:19.701: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.516063367s Feb 13 23:46:21.804: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.279945532s Feb 13 23:46:22.813: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.176995417s Feb 13 23:46:23.821: INFO: Verifying statefulset ss doesn't scale past 3 for another 168.651802ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-5787 Feb 13 23:46:24.830: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:46:25.265: INFO: stderr: "I0213 23:46:25.094777 146 log.go:172] (0xc000b88f20) (0xc000bd6280) Create stream\nI0213 23:46:25.094876 146 log.go:172] (0xc000b88f20) (0xc000bd6280) Stream added, broadcasting: 1\nI0213 23:46:25.100229 146 log.go:172] (0xc000b88f20) Reply frame received for 1\nI0213 23:46:25.100268 146 log.go:172] (0xc000b88f20) (0xc000b801e0) Create stream\nI0213 23:46:25.100292 146 log.go:172] (0xc000b88f20) (0xc000b801e0) Stream added, broadcasting: 3\nI0213 23:46:25.101826 146 log.go:172] (0xc000b88f20) Reply frame received for 3\nI0213 23:46:25.101878 146 log.go:172] (0xc000b88f20) (0xc000bd6320) Create stream\nI0213 23:46:25.101891 146 log.go:172] (0xc000b88f20) (0xc000bd6320) Stream added, broadcasting: 5\nI0213 23:46:25.103131 146 log.go:172] (0xc000b88f20) Reply frame received for 5\nI0213 23:46:25.182767 146 log.go:172] (0xc000b88f20) Data frame received for 3\nI0213 23:46:25.183326 146 log.go:172] (0xc000b801e0) (3) Data frame handling\nI0213 23:46:25.183363 146 log.go:172] (0xc000b801e0) (3) Data frame sent\nI0213 23:46:25.183486 146 log.go:172] (0xc000b88f20) Data frame received for 5\nI0213 23:46:25.183507 146 log.go:172] (0xc000bd6320) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0213 23:46:25.183592 146 log.go:172] (0xc000bd6320) (5) Data frame sent\nI0213 23:46:25.252789 146 log.go:172] (0xc000b88f20) Data frame received for 1\nI0213 23:46:25.252864 146 log.go:172] (0xc000b88f20) (0xc000b801e0) Stream removed, broadcasting: 3\nI0213 23:46:25.252989 146 log.go:172] (0xc000bd6280) (1) Data frame handling\nI0213 23:46:25.253017 146 log.go:172] (0xc000b88f20) (0xc000bd6320) Stream removed, broadcasting: 5\nI0213 23:46:25.253047 146 log.go:172] (0xc000bd6280) (1) Data frame sent\nI0213 23:46:25.253057 146 log.go:172] (0xc000b88f20) (0xc000bd6280) Stream removed, broadcasting: 1\nI0213 23:46:25.253081 146 log.go:172] (0xc000b88f20) Go away received\nI0213 23:46:25.254040 146 log.go:172] (0xc000b88f20) (0xc000bd6280) Stream removed, broadcasting: 1\nI0213 23:46:25.254050 146 log.go:172] (0xc000b88f20) (0xc000b801e0) Stream removed, broadcasting: 3\nI0213 
23:46:25.254055 146 log.go:172] (0xc000b88f20) (0xc000bd6320) Stream removed, broadcasting: 5\n" Feb 13 23:46:25.265: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 13 23:46:25.265: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 13 23:46:25.265: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:46:25.814: INFO: stderr: "I0213 23:46:25.638272 166 log.go:172] (0xc000abe9a0) (0xc0006fc0a0) Create stream\nI0213 23:46:25.638404 166 log.go:172] (0xc000abe9a0) (0xc0006fc0a0) Stream added, broadcasting: 1\nI0213 23:46:25.642175 166 log.go:172] (0xc000abe9a0) Reply frame received for 1\nI0213 23:46:25.642240 166 log.go:172] (0xc000abe9a0) (0xc00070fcc0) Create stream\nI0213 23:46:25.642263 166 log.go:172] (0xc000abe9a0) (0xc00070fcc0) Stream added, broadcasting: 3\nI0213 23:46:25.643546 166 log.go:172] (0xc000abe9a0) Reply frame received for 3\nI0213 23:46:25.643573 166 log.go:172] (0xc000abe9a0) (0xc0006188c0) Create stream\nI0213 23:46:25.643584 166 log.go:172] (0xc000abe9a0) (0xc0006188c0) Stream added, broadcasting: 5\nI0213 23:46:25.645986 166 log.go:172] (0xc000abe9a0) Reply frame received for 5\nI0213 23:46:25.723961 166 log.go:172] (0xc000abe9a0) Data frame received for 3\nI0213 23:46:25.724004 166 log.go:172] (0xc00070fcc0) (3) Data frame handling\nI0213 23:46:25.724021 166 log.go:172] (0xc00070fcc0) (3) Data frame sent\nI0213 23:46:25.724060 166 log.go:172] (0xc000abe9a0) Data frame received for 5\nI0213 23:46:25.724068 166 log.go:172] (0xc0006188c0) (5) Data frame handling\nI0213 23:46:25.724078 166 log.go:172] (0xc0006188c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0213 23:46:25.802124 166 log.go:172] (0xc000abe9a0) Data frame received for 1\nI0213 23:46:25.802236 166 log.go:172] (0xc000abe9a0) (0xc00070fcc0) Stream removed, broadcasting: 3\nI0213 23:46:25.802333 166 log.go:172] (0xc0006fc0a0) (1) Data frame handling\nI0213 23:46:25.802371 166 log.go:172] (0xc0006fc0a0) (1) Data frame sent\nI0213 23:46:25.802397 166 log.go:172] (0xc000abe9a0) (0xc0006fc0a0) Stream removed, broadcasting: 1\nI0213 23:46:25.803248 166 log.go:172] (0xc000abe9a0) (0xc0006188c0) Stream removed, broadcasting: 5\nI0213 23:46:25.803459 166 log.go:172] (0xc000abe9a0) (0xc0006fc0a0) Stream removed, broadcasting: 1\nI0213 23:46:25.803471 166 log.go:172] (0xc000abe9a0) (0xc00070fcc0) Stream removed, broadcasting: 3\nI0213 23:46:25.803481 166 log.go:172] (0xc000abe9a0) (0xc0006188c0) Stream removed, broadcasting: 5\nI0213 23:46:25.803817 166 log.go:172] (0xc000abe9a0) Go away received\n" Feb 13 23:46:25.814: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 13 23:46:25.814: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 13 23:46:25.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:46:26.154: INFO: stderr: "I0213 23:46:26.001386 186 log.go:172] (0xc000a0e000) (0xc00051d4a0) Create stream\nI0213 23:46:26.001580 186 log.go:172] (0xc000a0e000) (0xc00051d4a0) Stream added, 
broadcasting: 1\nI0213 23:46:26.013032 186 log.go:172] (0xc000a0e000) Reply frame received for 1\nI0213 23:46:26.013124 186 log.go:172] (0xc000a0e000) (0xc000abe0a0) Create stream\nI0213 23:46:26.013142 186 log.go:172] (0xc000a0e000) (0xc000abe0a0) Stream added, broadcasting: 3\nI0213 23:46:26.014627 186 log.go:172] (0xc000a0e000) Reply frame received for 3\nI0213 23:46:26.014653 186 log.go:172] (0xc000a0e000) (0xc0007edea0) Create stream\nI0213 23:46:26.014659 186 log.go:172] (0xc000a0e000) (0xc0007edea0) Stream added, broadcasting: 5\nI0213 23:46:26.017041 186 log.go:172] (0xc000a0e000) Reply frame received for 5\nI0213 23:46:26.077958 186 log.go:172] (0xc000a0e000) Data frame received for 3\nI0213 23:46:26.078068 186 log.go:172] (0xc000abe0a0) (3) Data frame handling\nI0213 23:46:26.078160 186 log.go:172] (0xc000a0e000) Data frame received for 5\nI0213 23:46:26.078192 186 log.go:172] (0xc0007edea0) (5) Data frame handling\nI0213 23:46:26.078218 186 log.go:172] (0xc0007edea0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\nI0213 23:46:26.078245 186 log.go:172] (0xc000a0e000) Data frame received for 5\nI0213 23:46:26.078255 186 log.go:172] (0xc0007edea0) (5) Data frame handling\nI0213 23:46:26.078265 186 log.go:172] (0xc0007edea0) (5) Data frame sent\nI0213 23:46:26.078281 186 log.go:172] (0xc000abe0a0) (3) Data frame sent\n+ true\nI0213 23:46:26.145587 186 log.go:172] (0xc000a0e000) Data frame received for 1\nI0213 23:46:26.145737 186 log.go:172] (0xc000a0e000) (0xc0007edea0) Stream removed, broadcasting: 5\nI0213 23:46:26.145822 186 log.go:172] (0xc00051d4a0) (1) Data frame handling\nI0213 23:46:26.145887 186 log.go:172] (0xc00051d4a0) (1) Data frame sent\nI0213 23:46:26.146172 186 log.go:172] (0xc000a0e000) (0xc000abe0a0) Stream removed, broadcasting: 3\nI0213 23:46:26.146320 186 log.go:172] (0xc000a0e000) (0xc00051d4a0) Stream removed, broadcasting: 1\nI0213 23:46:26.146360 186 log.go:172] (0xc000a0e000) Go away received\nI0213 23:46:26.147253 186 log.go:172] (0xc000a0e000) (0xc00051d4a0) Stream removed, broadcasting: 1\nI0213 23:46:26.147293 186 log.go:172] (0xc000a0e000) (0xc000abe0a0) Stream removed, broadcasting: 3\nI0213 23:46:26.147308 186 log.go:172] (0xc000a0e000) (0xc0007edea0) Stream removed, broadcasting: 5\n" Feb 13 23:46:26.154: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Feb 13 23:46:26.154: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Feb 13 23:46:26.161: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Feb 13 23:46:26.161: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Feb 13 23:46:26.161: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Feb 13 23:46:26.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 13 23:46:26.485: INFO: stderr: "I0213 23:46:26.307040 208 log.go:172] (0xc000bc3970) (0xc0009ea8c0) Create stream\nI0213 23:46:26.307223 208 log.go:172] (0xc000bc3970) (0xc0009ea8c0) Stream added, broadcasting: 1\nI0213 23:46:26.316326 208 log.go:172] (0xc000bc3970) Reply frame received for 1\nI0213 23:46:26.316416 208 log.go:172] 
(0xc000bc3970) (0xc00061a820) Create stream\nI0213 23:46:26.316426 208 log.go:172] (0xc000bc3970) (0xc00061a820) Stream added, broadcasting: 3\nI0213 23:46:26.317752 208 log.go:172] (0xc000bc3970) Reply frame received for 3\nI0213 23:46:26.317805 208 log.go:172] (0xc000bc3970) (0xc0004834a0) Create stream\nI0213 23:46:26.317824 208 log.go:172] (0xc000bc3970) (0xc0004834a0) Stream added, broadcasting: 5\nI0213 23:46:26.318990 208 log.go:172] (0xc000bc3970) Reply frame received for 5\nI0213 23:46:26.379513 208 log.go:172] (0xc000bc3970) Data frame received for 5\nI0213 23:46:26.379579 208 log.go:172] (0xc0004834a0) (5) Data frame handling\nI0213 23:46:26.379605 208 log.go:172] (0xc0004834a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0213 23:46:26.382178 208 log.go:172] (0xc000bc3970) Data frame received for 3\nI0213 23:46:26.382197 208 log.go:172] (0xc00061a820) (3) Data frame handling\nI0213 23:46:26.382217 208 log.go:172] (0xc00061a820) (3) Data frame sent\nI0213 23:46:26.458215 208 log.go:172] (0xc000bc3970) Data frame received for 1\nI0213 23:46:26.458386 208 log.go:172] (0xc0009ea8c0) (1) Data frame handling\nI0213 23:46:26.458434 208 log.go:172] (0xc0009ea8c0) (1) Data frame sent\nI0213 23:46:26.459735 208 log.go:172] (0xc000bc3970) (0xc0004834a0) Stream removed, broadcasting: 5\nI0213 23:46:26.460382 208 log.go:172] (0xc000bc3970) (0xc0009ea8c0) Stream removed, broadcasting: 1\nI0213 23:46:26.460595 208 log.go:172] (0xc000bc3970) (0xc00061a820) Stream removed, broadcasting: 3\nI0213 23:46:26.460698 208 log.go:172] (0xc000bc3970) Go away received\nI0213 23:46:26.462028 208 log.go:172] (0xc000bc3970) (0xc0009ea8c0) Stream removed, broadcasting: 1\nI0213 23:46:26.462051 208 log.go:172] (0xc000bc3970) (0xc00061a820) Stream removed, broadcasting: 3\nI0213 23:46:26.462059 208 log.go:172] (0xc000bc3970) (0xc0004834a0) Stream removed, broadcasting: 5\n" Feb 13 23:46:26.485: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 13 23:46:26.485: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 13 23:46:26.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 13 23:46:27.112: INFO: stderr: "I0213 23:46:26.805065 230 log.go:172] (0xc0009d6000) (0xc000648000) Create stream\nI0213 23:46:26.805472 230 log.go:172] (0xc0009d6000) (0xc000648000) Stream added, broadcasting: 1\nI0213 23:46:26.812496 230 log.go:172] (0xc0009d6000) Reply frame received for 1\nI0213 23:46:26.812571 230 log.go:172] (0xc0009d6000) (0xc00003a0a0) Create stream\nI0213 23:46:26.812590 230 log.go:172] (0xc0009d6000) (0xc00003a0a0) Stream added, broadcasting: 3\nI0213 23:46:26.814528 230 log.go:172] (0xc0009d6000) Reply frame received for 3\nI0213 23:46:26.814581 230 log.go:172] (0xc0009d6000) (0xc0007c0000) Create stream\nI0213 23:46:26.814591 230 log.go:172] (0xc0009d6000) (0xc0007c0000) Stream added, broadcasting: 5\nI0213 23:46:26.816217 230 log.go:172] (0xc0009d6000) Reply frame received for 5\nI0213 23:46:26.939859 230 log.go:172] (0xc0009d6000) Data frame received for 5\nI0213 23:46:26.939957 230 log.go:172] (0xc0007c0000) (5) Data frame handling\nI0213 23:46:26.939980 230 log.go:172] (0xc0007c0000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0213 23:46:26.994978 230 log.go:172] (0xc0009d6000) Data frame 
received for 3\nI0213 23:46:26.995057 230 log.go:172] (0xc00003a0a0) (3) Data frame handling\nI0213 23:46:26.995083 230 log.go:172] (0xc00003a0a0) (3) Data frame sent\nI0213 23:46:27.105914 230 log.go:172] (0xc0009d6000) (0xc00003a0a0) Stream removed, broadcasting: 3\nI0213 23:46:27.106145 230 log.go:172] (0xc0009d6000) Data frame received for 1\nI0213 23:46:27.106186 230 log.go:172] (0xc000648000) (1) Data frame handling\nI0213 23:46:27.106227 230 log.go:172] (0xc000648000) (1) Data frame sent\nI0213 23:46:27.106264 230 log.go:172] (0xc0009d6000) (0xc0007c0000) Stream removed, broadcasting: 5\nI0213 23:46:27.106303 230 log.go:172] (0xc0009d6000) (0xc000648000) Stream removed, broadcasting: 1\nI0213 23:46:27.106329 230 log.go:172] (0xc0009d6000) Go away received\nI0213 23:46:27.107063 230 log.go:172] (0xc0009d6000) (0xc000648000) Stream removed, broadcasting: 1\nI0213 23:46:27.107072 230 log.go:172] (0xc0009d6000) (0xc00003a0a0) Stream removed, broadcasting: 3\nI0213 23:46:27.107076 230 log.go:172] (0xc0009d6000) (0xc0007c0000) Stream removed, broadcasting: 5\n" Feb 13 23:46:27.113: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 13 23:46:27.113: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 13 23:46:27.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Feb 13 23:46:27.448: INFO: stderr: "I0213 23:46:27.245782 245 log.go:172] (0xc0000fc2c0) (0xc0008f4000) Create stream\nI0213 23:46:27.245872 245 log.go:172] (0xc0000fc2c0) (0xc0008f4000) Stream added, broadcasting: 1\nI0213 23:46:27.250279 245 log.go:172] (0xc0000fc2c0) Reply frame received for 1\nI0213 23:46:27.250306 245 log.go:172] (0xc0000fc2c0) (0xc000747680) Create stream\nI0213 23:46:27.250314 245 log.go:172] (0xc0000fc2c0) (0xc000747680) Stream added, broadcasting: 3\nI0213 23:46:27.251699 245 log.go:172] (0xc0000fc2c0) Reply frame received for 3\nI0213 23:46:27.251790 245 log.go:172] (0xc0000fc2c0) (0xc000642960) Create stream\nI0213 23:46:27.251810 245 log.go:172] (0xc0000fc2c0) (0xc000642960) Stream added, broadcasting: 5\nI0213 23:46:27.253396 245 log.go:172] (0xc0000fc2c0) Reply frame received for 5\nI0213 23:46:27.316176 245 log.go:172] (0xc0000fc2c0) Data frame received for 5\nI0213 23:46:27.316215 245 log.go:172] (0xc000642960) (5) Data frame handling\nI0213 23:46:27.316234 245 log.go:172] (0xc000642960) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0213 23:46:27.359078 245 log.go:172] (0xc0000fc2c0) Data frame received for 3\nI0213 23:46:27.359243 245 log.go:172] (0xc000747680) (3) Data frame handling\nI0213 23:46:27.359328 245 log.go:172] (0xc000747680) (3) Data frame sent\nI0213 23:46:27.439374 245 log.go:172] (0xc0000fc2c0) Data frame received for 1\nI0213 23:46:27.439472 245 log.go:172] (0xc0000fc2c0) (0xc000642960) Stream removed, broadcasting: 5\nI0213 23:46:27.439532 245 log.go:172] (0xc0008f4000) (1) Data frame handling\nI0213 23:46:27.439552 245 log.go:172] (0xc0008f4000) (1) Data frame sent\nI0213 23:46:27.439566 245 log.go:172] (0xc0000fc2c0) (0xc000747680) Stream removed, broadcasting: 3\nI0213 23:46:27.439592 245 log.go:172] (0xc0000fc2c0) (0xc0008f4000) Stream removed, broadcasting: 1\nI0213 23:46:27.439620 245 log.go:172] (0xc0000fc2c0) Go away received\nI0213 23:46:27.440552 245 log.go:172] (0xc0000fc2c0) 
(0xc0008f4000) Stream removed, broadcasting: 1\nI0213 23:46:27.440572 245 log.go:172] (0xc0000fc2c0) (0xc000747680) Stream removed, broadcasting: 3\nI0213 23:46:27.440583 245 log.go:172] (0xc0000fc2c0) (0xc000642960) Stream removed, broadcasting: 5\n" Feb 13 23:46:27.448: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Feb 13 23:46:27.448: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Feb 13 23:46:27.448: INFO: Waiting for statefulset status.replicas updated to 0 Feb 13 23:46:27.460: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Feb 13 23:46:37.471: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Feb 13 23:46:37.471: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Feb 13 23:46:37.471: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Feb 13 23:46:37.489: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 23:46:37.490: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC }] Feb 13 23:46:37.490: INFO: ss-1 jerma-server-mvvl6gufaqub Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:37.490: INFO: ss-2 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:37.490: INFO: Feb 13 23:46:37.490: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 13 23:46:39.198: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 23:46:39.198: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC }] Feb 13 23:46:39.198: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} 
{ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:39.198: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:39.198: INFO: Feb 13 23:46:39.198: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 13 23:46:40.204: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 23:46:40.204: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC }] Feb 13 23:46:40.204: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:40.204: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:40.204: INFO: Feb 13 23:46:40.204: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 13 23:46:41.215: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 23:46:41.215: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC }] Feb 13 23:46:41.215: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 
2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:41.215: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:41.215: INFO: Feb 13 23:46:41.215: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 13 23:46:42.223: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 23:46:42.223: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC }] Feb 13 23:46:42.223: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:42.223: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:42.223: INFO: Feb 13 23:46:42.223: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 13 23:46:43.254: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 23:46:43.254: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC }] Feb 13 23:46:43.255: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:43.255: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 
00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:43.255: INFO: Feb 13 23:46:43.255: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 13 23:46:44.264: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 23:46:44.264: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC }] Feb 13 23:46:44.264: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:44.264: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:44.264: INFO: Feb 13 23:46:44.264: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 13 23:46:45.273: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 23:46:45.273: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC }] Feb 13 23:46:45.273: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:45.273: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC 
ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:45.274: INFO: Feb 13 23:46:45.274: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 13 23:46:46.282: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 23:46:46.283: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC }] Feb 13 23:46:46.283: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:46.283: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:46.283: INFO: Feb 13 23:46:46.283: INFO: StatefulSet ss has not reached scale 0, at 3 Feb 13 23:46:47.292: INFO: POD NODE PHASE GRACE CONDITIONS Feb 13 23:46:47.293: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:45:53 +0000 UTC }] Feb 13 23:46:47.293: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:47.293: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:27 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-13 23:46:14 +0000 UTC }] Feb 13 23:46:47.293: INFO: Feb 13 23:46:47.293: 
INFO: StatefulSet ss has not reached scale 0, at 3
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-5787
Feb 13 23:46:48.307: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 13 23:46:48.522: INFO: rc: 1
Feb 13 23:46:48.523: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1
Feb 13 23:46:58.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 13 23:46:58.700: INFO: rc: 1
Feb 13 23:46:58.701: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Feb 13 23:47:08.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 13 23:47:08.793: INFO: rc: 1
Feb 13 23:47:08.794: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Feb 13 23:47:18.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 13 23:47:19.067: INFO: rc: 1
Feb 13 23:47:19.067: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Feb 13 23:47:29.073: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 13 23:47:29.233: INFO: rc: 1
Feb 13 23:47:29.234: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1
Feb 13 23:47:39.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 13 23:47:39.390: INFO: rc: 1
Feb 13 23:47:39.390: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v
/tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:47:49.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:47:49.547: INFO: rc: 1 Feb 13 23:47:49.547: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:47:59.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:47:59.707: INFO: rc: 1 Feb 13 23:47:59.708: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:48:09.708: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:48:09.916: INFO: rc: 1 Feb 13 23:48:09.917: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:48:19.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:48:20.075: INFO: rc: 1 Feb 13 23:48:20.076: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:48:30.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:48:30.244: INFO: rc: 1 Feb 13 23:48:30.244: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:48:40.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:48:40.410: INFO: rc: 1 Feb 13 23:48:40.410: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ 
|| true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:48:50.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:48:50.656: INFO: rc: 1 Feb 13 23:48:50.656: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:49:00.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:49:00.780: INFO: rc: 1 Feb 13 23:49:00.780: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:49:10.781: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:49:10.932: INFO: rc: 1 Feb 13 23:49:10.933: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:49:20.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:49:21.057: INFO: rc: 1 Feb 13 23:49:21.057: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:49:31.058: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:49:31.393: INFO: rc: 1 Feb 13 23:49:31.393: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:49:41.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:49:43.600: INFO: rc: 1 Feb 13 23:49:43.600: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from 
server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:49:53.602: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:49:53.759: INFO: rc: 1 Feb 13 23:49:53.760: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:50:03.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:50:03.923: INFO: rc: 1 Feb 13 23:50:03.923: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:50:13.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:50:14.359: INFO: rc: 1 Feb 13 23:50:14.359: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:50:24.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:50:24.516: INFO: rc: 1 Feb 13 23:50:24.517: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:50:34.518: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:50:34.672: INFO: rc: 1 Feb 13 23:50:34.672: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:50:44.673: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:50:44.801: INFO: rc: 1 Feb 13 23:50:44.801: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found 
error: exit status 1 Feb 13 23:50:54.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:50:54.976: INFO: rc: 1 Feb 13 23:50:54.977: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:51:04.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:51:05.086: INFO: rc: 1 Feb 13 23:51:05.086: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:51:15.087: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:51:15.224: INFO: rc: 1 Feb 13 23:51:15.224: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:51:25.225: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:51:25.380: INFO: rc: 1 Feb 13 23:51:25.380: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:51:35.380: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:51:35.558: INFO: rc: 1 Feb 13 23:51:35.559: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:51:45.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Feb 13 23:51:45.671: INFO: rc: 1 Feb 13 23:51:45.671: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Feb 13 23:51:55.672: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-5787 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 13 23:51:55.830: INFO: rc: 1
Feb 13 23:51:55.831: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0:
Feb 13 23:51:55.831: INFO: Scaling statefulset ss to 0
Feb 13 23:51:55.868: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 13 23:51:55.872: INFO: Deleting all statefulset in ns statefulset-5787
Feb 13 23:51:55.876: INFO: Scaling statefulset ss to 0
Feb 13 23:51:55.889: INFO: Waiting for statefulset status.replicas updated to 0
Feb 13 23:51:55.897: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:51:55.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5787" for this suite.
• [SLOW TEST:362.689 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
[k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":280,"completed":28,"skipped":270,"failed":0}
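The burst-scaling scenario above can be reproduced by hand against any StatefulSet whose readiness probe serves a file the container can move aside. A minimal sketch, assuming the same httpd-based StatefulSet ss in namespace statefulset-5787 as the test (the pod label app=ss is an assumption; substitute whatever labels the pod template actually carries):

# Break readiness on every replica: httpd's probe serves index.html, so moving
# it out of the document root makes the readiness check fail; "|| true" keeps
# the command from failing if the file has already been moved.
for pod in ss-0 ss-1 ss-2; do
  kubectl exec -n statefulset-5787 "$pod" -- \
    /bin/sh -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
done

# With every pod Running but not Ready, a burst-scaled StatefulSet
# (podManagementPolicy: Parallel) still deletes all replicas without waiting
# for readiness, which is what this test asserts.
kubectl scale statefulset ss -n statefulset-5787 --replicas=0
kubectl get pods -n statefulset-5787 -l app=ss -w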
[sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:52:09.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1733" for this suite.
• [SLOW TEST:13.452 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
should create a ResourceQuota and capture the life of a pod. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":280,"completed":29,"skipped":270,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
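The quota lifecycle stepped through above (create a quota, admit a fitting pod, track its usage, reject anything over the limit, release usage on delete) can be sketched with a few objects. Everything here, from the namespace name to the hard limits and requests, is illustrative rather than the test's actual values:

kubectl create namespace quota-demo

# A quota capping pod count and aggregate resource requests in the namespace.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota
  namespace: quota-demo
spec:
  hard:
    pods: "1"
    requests.cpu: 500m
    requests.memory: 256Mi
EOF

# A pod that fits. Once a quota constrains requests, pods in the namespace
# must declare requests or the quota admission controller rejects them.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: fits-quota
  namespace: quota-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sleep", "3600"]
    resources:
      requests:
        cpu: 100m
        memory: 64Mi
EOF

# status.used now reflects the pod; a second pod would be rejected with an
# "exceeded quota" error, and deleting this one releases the usage again.
kubectl get resourcequota test-quota -n quota-demo -o yaml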
[sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 13 23:52:09.406: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 13 23:52:09.536: INFO: Waiting up to 5m0s for pod "downward-api-defa4afa-7989-4d3b-9e8b-7775c6315a45" in namespace "downward-api-4223" to be "success or failure"
Feb 13 23:52:09.568: INFO: Pod "downward-api-defa4afa-7989-4d3b-9e8b-7775c6315a45": Phase="Pending", Reason="", readiness=false. Elapsed: 31.029655ms
Feb 13 23:52:11.574: INFO: Pod "downward-api-defa4afa-7989-4d3b-9e8b-7775c6315a45": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037709015s
Feb 13 23:52:13.584: INFO: Pod "downward-api-defa4afa-7989-4d3b-9e8b-7775c6315a45": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047339698s
Feb 13 23:52:15.592: INFO: Pod "downward-api-defa4afa-7989-4d3b-9e8b-7775c6315a45": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055608821s
Feb 13 23:52:17.603: INFO: Pod "downward-api-defa4afa-7989-4d3b-9e8b-7775c6315a45": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.065983448s
STEP: Saw pod success
Feb 13 23:52:17.603: INFO: Pod "downward-api-defa4afa-7989-4d3b-9e8b-7775c6315a45" satisfied condition "success or failure"
Feb 13 23:52:17.608: INFO: Trying to get logs from node jerma-node pod downward-api-defa4afa-7989-4d3b-9e8b-7775c6315a45 container dapi-container:
STEP: delete the pod
Feb 13 23:52:17.689: INFO: Waiting for pod downward-api-defa4afa-7989-4d3b-9e8b-7775c6315a45 to disappear
Feb 13 23:52:17.695: INFO: Pod downward-api-defa4afa-7989-4d3b-9e8b-7775c6315a45 no longer exists
[AfterEach] [sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:52:17.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4223" for this suite.
• [SLOW TEST:8.305 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
should provide host IP as an env var [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":280,"completed":30,"skipped":293,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
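The downward API wiring this test verifies is a single env var sourced from a pod field. A minimal sketch (the pod and variable names are made up; status.hostIP is the same field the test reads, and the quoted heredoc keeps $HOST_IP from being expanded by the local shell):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF

# After the pod completes, its log should print the IP of the node it ran on.
kubectl logs downward-env-demo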
[sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 13 23:52:17.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-9559/configmap-test-2b59995b-d493-40b1-a091-540857c4a26b
STEP: Creating a pod to test consume configMaps
Feb 13 23:52:17.834: INFO: Waiting up to 5m0s for pod "pod-configmaps-c05d7cb4-4920-4300-a516-0c18f4a0084e" in namespace "configmap-9559" to be "success or failure"
Feb 13 23:52:17.928: INFO: Pod "pod-configmaps-c05d7cb4-4920-4300-a516-0c18f4a0084e": Phase="Pending", Reason="", readiness=false. Elapsed: 93.827278ms
Feb 13 23:52:19.937: INFO: Pod "pod-configmaps-c05d7cb4-4920-4300-a516-0c18f4a0084e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102875449s
Feb 13 23:52:21.945: INFO: Pod "pod-configmaps-c05d7cb4-4920-4300-a516-0c18f4a0084e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110179451s
Feb 13 23:52:23.954: INFO: Pod "pod-configmaps-c05d7cb4-4920-4300-a516-0c18f4a0084e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11923001s
Feb 13 23:52:25.963: INFO: Pod "pod-configmaps-c05d7cb4-4920-4300-a516-0c18f4a0084e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128673209s
Feb 13 23:52:27.971: INFO: Pod "pod-configmaps-c05d7cb4-4920-4300-a516-0c18f4a0084e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.136674947s
STEP: Saw pod success
Feb 13 23:52:27.971: INFO: Pod "pod-configmaps-c05d7cb4-4920-4300-a516-0c18f4a0084e" satisfied condition "success or failure"
Feb 13 23:52:27.975: INFO: Trying to get logs from node jerma-node pod pod-configmaps-c05d7cb4-4920-4300-a516-0c18f4a0084e container env-test:
STEP: delete the pod
Feb 13 23:52:28.054: INFO: Waiting for pod pod-configmaps-c05d7cb4-4920-4300-a516-0c18f4a0084e to disappear
Feb 13 23:52:28.060: INFO: Pod pod-configmaps-c05d7cb4-4920-4300-a516-0c18f4a0084e no longer exists
[AfterEach] [sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 13 23:52:28.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9559" for this suite.
• [SLOW TEST:10.375 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
should be consumable via environment variable [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":280,"completed":31,"skipped":313,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
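Consuming a ConfigMap through an environment variable, as validated above, takes one ConfigMap and one env entry with a configMapKeyRef. The names and the key/value pair below are illustrative, not the test's own:

kubectl create configmap env-demo --from-literal=DATA_1=value-1

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: configmap-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "env | grep DATA_"]
    env:
    - name: DATA_1
      valueFrom:
        configMapKeyRef:
          name: env-demo
          key: DATA_1
EOF

# Once the pod has completed, the log should contain DATA_1=value-1.
kubectl logs configmap-env-demo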
Feb 13 23:52:28.418: INFO: Number of nodes with available pods: 0 Feb 13 23:52:28.418: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:29.584: INFO: Number of nodes with available pods: 0 Feb 13 23:52:29.584: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:30.458: INFO: Number of nodes with available pods: 0 Feb 13 23:52:30.458: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:31.650: INFO: Number of nodes with available pods: 0 Feb 13 23:52:31.650: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:32.427: INFO: Number of nodes with available pods: 0 Feb 13 23:52:32.427: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:33.855: INFO: Number of nodes with available pods: 0 Feb 13 23:52:33.856: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:34.425: INFO: Number of nodes with available pods: 0 Feb 13 23:52:34.425: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:35.426: INFO: Number of nodes with available pods: 0 Feb 13 23:52:35.426: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:36.426: INFO: Number of nodes with available pods: 1 Feb 13 23:52:36.426: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled Feb 13 23:52:36.559: INFO: Number of nodes with available pods: 1 Feb 13 23:52:36.560: INFO: Number of running nodes: 0, number of available pods: 1 Feb 13 23:52:37.568: INFO: Number of nodes with available pods: 0 Feb 13 23:52:37.568: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate Feb 13 23:52:37.582: INFO: Number of nodes with available pods: 0 Feb 13 23:52:37.582: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:38.594: INFO: Number of nodes with available pods: 0 Feb 13 23:52:38.595: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:39.595: INFO: Number of nodes with available pods: 0 Feb 13 23:52:39.595: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:40.594: INFO: Number of nodes with available pods: 0 Feb 13 23:52:40.594: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:41.590: INFO: Number of nodes with available pods: 0 Feb 13 23:52:41.590: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:42.595: INFO: Number of nodes with available pods: 0 Feb 13 23:52:42.595: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:43.591: INFO: Number of nodes with available pods: 0 Feb 13 23:52:43.591: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:44.592: INFO: Number of nodes with available pods: 0 Feb 13 23:52:44.592: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:45.594: INFO: Number of nodes with available pods: 0 Feb 13 23:52:45.595: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:46.598: INFO: Number of nodes with available pods: 0 Feb 13 23:52:46.598: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:47.591: INFO: Number of nodes with 
available pods: 0 Feb 13 23:52:47.591: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:48.600: INFO: Number of nodes with available pods: 0 Feb 13 23:52:48.600: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:49.589: INFO: Number of nodes with available pods: 0 Feb 13 23:52:49.589: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:50.595: INFO: Number of nodes with available pods: 0 Feb 13 23:52:50.595: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:51.616: INFO: Number of nodes with available pods: 0 Feb 13 23:52:51.616: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:52.595: INFO: Number of nodes with available pods: 0 Feb 13 23:52:52.595: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:53.594: INFO: Number of nodes with available pods: 0 Feb 13 23:52:53.594: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:54.588: INFO: Number of nodes with available pods: 0 Feb 13 23:52:54.589: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:55.591: INFO: Number of nodes with available pods: 0 Feb 13 23:52:55.591: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:56.817: INFO: Number of nodes with available pods: 0 Feb 13 23:52:56.817: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:57.593: INFO: Number of nodes with available pods: 0 Feb 13 23:52:57.593: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:58.591: INFO: Number of nodes with available pods: 0 Feb 13 23:52:58.591: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:52:59.597: INFO: Number of nodes with available pods: 1 Feb 13 23:52:59.597: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3640, will wait for the garbage collector to delete the pods Feb 13 23:52:59.660: INFO: Deleting DaemonSet.extensions daemon-set took: 6.30383ms Feb 13 23:52:59.960: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.579597ms Feb 13 23:53:13.167: INFO: Number of nodes with available pods: 0 Feb 13 23:53:13.167: INFO: Number of running nodes: 0, number of available pods: 0 Feb 13 23:53:13.172: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3640/daemonsets","resourceVersion":"8263629"},"items":null} Feb 13 23:53:13.176: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3640/pods","resourceVersion":"8263629"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:53:13.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3640" for this suite. 
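For reference, the node-selector mechanics the steps above walk through can be sketched as a manifest. The resource name, label key/values, and image below are illustrative assumptions; the test builds the equivalent object through the API and then relabels the node (blue, then green) to move the daemon pod on and off it.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-daemon-set            # hypothetical name
spec:
  selector:
    matchLabels:
      app: demo-daemon
  template:
    metadata:
      labels:
        app: demo-daemon
    spec:
      nodeSelector:
        color: blue                # daemon pods schedule only onto nodes labeled color=blue
      containers:
      - name: app
        image: k8s.gcr.io/pause:3.1   # placeholder image
# Moving the label moves the pod, e.g.:
#   kubectl label node <node-name> color=blue
#   kubectl label node <node-name> color=green --overwrite
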
• [SLOW TEST:45.169 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":280,"completed":32,"skipped":346,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:53:13.261: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53 [It] should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod liveness-d746d8f3-5cbd-40a1-b4df-2cc900b57016 in namespace container-probe-302 Feb 13 23:53:19.474: INFO: Started pod liveness-d746d8f3-5cbd-40a1-b4df-2cc900b57016 in namespace container-probe-302 STEP: checking the pod's current state and verifying that restartCount is present Feb 13 23:53:19.480: INFO: Initial restart count of pod liveness-d746d8f3-5cbd-40a1-b4df-2cc900b57016 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:57:20.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-302" for this suite. 
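A pod passing this check needs a container that actually listens on TCP 8080, so the kubelet's repeated connection attempts succeed and the restart count stays at 0. A minimal sketch, with a placeholder name and image (any server bound to 8080 fits):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-tcp-demo          # hypothetical name
spec:
  containers:
  - name: server
    image: <any-image-listening-on-8080>   # placeholder
    livenessProbe:
      tcpSocket:
        port: 8080                 # kubelet dials this port; a successful connect keeps the container alive
      initialDelaySeconds: 15
      periodSeconds: 10
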
• [SLOW TEST:246.761 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a tcp:8080 liveness probe [NodeConformance] [Conformance]","total":280,"completed":33,"skipped":365,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:57:20.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1634 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 13 23:57:20.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2729' Feb 13 23:57:20.297: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 13 23:57:20.298: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc Feb 13 23:57:20.426: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-qwff4] Feb 13 23:57:20.426: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-qwff4" in namespace "kubectl-2729" to be "running and ready" Feb 13 23:57:20.435: INFO: Pod "e2e-test-httpd-rc-qwff4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.825534ms Feb 13 23:57:22.442: INFO: Pod "e2e-test-httpd-rc-qwff4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015258864s Feb 13 23:57:24.452: INFO: Pod "e2e-test-httpd-rc-qwff4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025127159s Feb 13 23:57:26.463: INFO: Pod "e2e-test-httpd-rc-qwff4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036710906s Feb 13 23:57:28.487: INFO: Pod "e2e-test-httpd-rc-qwff4": Phase="Running", Reason="", readiness=true. Elapsed: 8.060091938s Feb 13 23:57:28.487: INFO: Pod "e2e-test-httpd-rc-qwff4" satisfied condition "running and ready" Feb 13 23:57:28.487: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-httpd-rc-qwff4] Feb 13 23:57:28.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-2729' Feb 13 23:57:28.661: INFO: stderr: "" Feb 13 23:57:28.662: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Thu Feb 13 23:57:27.946482 2020] [mpm_event:notice] [pid 1:tid 140701518408552] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu Feb 13 23:57:27.946564 2020] [core:notice] [pid 1:tid 140701518408552] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1639 Feb 13 23:57:28.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2729' Feb 13 23:57:28.777: INFO: stderr: "" Feb 13 23:57:28.777: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:57:28.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2729" for this suite. • [SLOW TEST:8.763 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1630 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":280,"completed":34,"skipped":382,"failed":0} SS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:57:28.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-8c2e40eb-e48b-4a8d-8a6d-d8a61c1eaf83 STEP: Creating a pod to test consume secrets Feb 13 23:57:28.847: INFO: Waiting up to 5m0s for pod "pod-secrets-9290c6ab-f36b-4ca4-abb0-824c0d108dd6" in namespace "secrets-5467" to be "success or failure" Feb 13 23:57:28.852: INFO: Pod "pod-secrets-9290c6ab-f36b-4ca4-abb0-824c0d108dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.610185ms Feb 13 23:57:30.862: INFO: Pod "pod-secrets-9290c6ab-f36b-4ca4-abb0-824c0d108dd6": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.014631009s Feb 13 23:57:32.871: INFO: Pod "pod-secrets-9290c6ab-f36b-4ca4-abb0-824c0d108dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023744533s Feb 13 23:57:34.885: INFO: Pod "pod-secrets-9290c6ab-f36b-4ca4-abb0-824c0d108dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037606454s Feb 13 23:57:36.892: INFO: Pod "pod-secrets-9290c6ab-f36b-4ca4-abb0-824c0d108dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045060643s Feb 13 23:57:38.901: INFO: Pod "pod-secrets-9290c6ab-f36b-4ca4-abb0-824c0d108dd6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.054428167s Feb 13 23:57:40.922: INFO: Pod "pod-secrets-9290c6ab-f36b-4ca4-abb0-824c0d108dd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.075096995s STEP: Saw pod success Feb 13 23:57:40.922: INFO: Pod "pod-secrets-9290c6ab-f36b-4ca4-abb0-824c0d108dd6" satisfied condition "success or failure" Feb 13 23:57:40.926: INFO: Trying to get logs from node jerma-node pod pod-secrets-9290c6ab-f36b-4ca4-abb0-824c0d108dd6 container secret-volume-test: STEP: delete the pod Feb 13 23:57:40.983: INFO: Waiting for pod pod-secrets-9290c6ab-f36b-4ca4-abb0-824c0d108dd6 to disappear Feb 13 23:57:41.116: INFO: Pod pod-secrets-9290c6ab-f36b-4ca4-abb0-824c0d108dd6 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:57:41.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5467" for this suite. • [SLOW TEST:12.349 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":35,"skipped":384,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:57:41.139: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating pod pod-subpath-test-configmap-hk7m STEP: Creating a pod to test atomic-volume-subpath Feb 13 23:57:41.297: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-hk7m" in namespace "subpath-9501" to be "success or failure" Feb 13 23:57:41.315: INFO: Pod "pod-subpath-test-configmap-hk7m": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.755645ms Feb 13 23:57:43.322: INFO: Pod "pod-subpath-test-configmap-hk7m": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025287694s Feb 13 23:57:45.330: INFO: Pod "pod-subpath-test-configmap-hk7m": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033517214s Feb 13 23:57:47.344: INFO: Pod "pod-subpath-test-configmap-hk7m": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047540294s Feb 13 23:57:49.354: INFO: Pod "pod-subpath-test-configmap-hk7m": Phase="Running", Reason="", readiness=true. Elapsed: 8.057839913s Feb 13 23:57:51.359: INFO: Pod "pod-subpath-test-configmap-hk7m": Phase="Running", Reason="", readiness=true. Elapsed: 10.062515724s Feb 13 23:57:53.373: INFO: Pod "pod-subpath-test-configmap-hk7m": Phase="Running", Reason="", readiness=true. Elapsed: 12.076132765s Feb 13 23:57:55.380: INFO: Pod "pod-subpath-test-configmap-hk7m": Phase="Running", Reason="", readiness=true. Elapsed: 14.08328593s Feb 13 23:57:57.390: INFO: Pod "pod-subpath-test-configmap-hk7m": Phase="Running", Reason="", readiness=true. Elapsed: 16.093818724s Feb 13 23:57:59.399: INFO: Pod "pod-subpath-test-configmap-hk7m": Phase="Running", Reason="", readiness=true. Elapsed: 18.102558946s Feb 13 23:58:01.408: INFO: Pod "pod-subpath-test-configmap-hk7m": Phase="Running", Reason="", readiness=true. Elapsed: 20.111766318s Feb 13 23:58:03.414: INFO: Pod "pod-subpath-test-configmap-hk7m": Phase="Running", Reason="", readiness=true. Elapsed: 22.117253824s Feb 13 23:58:05.420: INFO: Pod "pod-subpath-test-configmap-hk7m": Phase="Running", Reason="", readiness=true. Elapsed: 24.123228709s Feb 13 23:58:07.428: INFO: Pod "pod-subpath-test-configmap-hk7m": Phase="Running", Reason="", readiness=true. Elapsed: 26.131082673s Feb 13 23:58:09.444: INFO: Pod "pod-subpath-test-configmap-hk7m": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.147571476s STEP: Saw pod success Feb 13 23:58:09.445: INFO: Pod "pod-subpath-test-configmap-hk7m" satisfied condition "success or failure" Feb 13 23:58:09.453: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-hk7m container test-container-subpath-configmap-hk7m: STEP: delete the pod Feb 13 23:58:10.095: INFO: Waiting for pod pod-subpath-test-configmap-hk7m to disappear Feb 13 23:58:10.104: INFO: Pod pod-subpath-test-configmap-hk7m no longer exists STEP: Deleting pod pod-subpath-test-configmap-hk7m Feb 13 23:58:10.104: INFO: Deleting pod "pod-subpath-test-configmap-hk7m" in namespace "subpath-9501" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:58:10.116: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9501" for this suite. 
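The subPath mount the pod above exercises can be sketched like this: a single key of a ConfigMap surfaces as one file at the mount path. The ConfigMap name, key, and command are assumptions for illustration.

apiVersion: v1
kind: Pod
metadata:
  name: subpath-configmap-demo     # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/demo/key"]
    volumeMounts:
    - name: config
      mountPath: /etc/demo/key     # the container sees a single file here...
      subPath: key                 # ...backed by just the "key" entry of the volume
  volumes:
  - name: config
    configMap:
      name: demo-config            # hypothetical ConfigMap with a data key named "key"
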
• [SLOW TEST:29.009 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":280,"completed":36,"skipped":404,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:58:10.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override all Feb 13 23:58:10.585: INFO: Waiting up to 5m0s for pod "client-containers-8a102a0c-9f69-430e-a99b-519ad4004306" in namespace "containers-5327" to be "success or failure" Feb 13 23:58:10.613: INFO: Pod "client-containers-8a102a0c-9f69-430e-a99b-519ad4004306": Phase="Pending", Reason="", readiness=false. Elapsed: 27.953187ms Feb 13 23:58:12.623: INFO: Pod "client-containers-8a102a0c-9f69-430e-a99b-519ad4004306": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03821503s Feb 13 23:58:14.746: INFO: Pod "client-containers-8a102a0c-9f69-430e-a99b-519ad4004306": Phase="Pending", Reason="", readiness=false. Elapsed: 4.160701766s Feb 13 23:58:16.753: INFO: Pod "client-containers-8a102a0c-9f69-430e-a99b-519ad4004306": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167694305s Feb 13 23:58:18.760: INFO: Pod "client-containers-8a102a0c-9f69-430e-a99b-519ad4004306": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.174965331s STEP: Saw pod success Feb 13 23:58:18.760: INFO: Pod "client-containers-8a102a0c-9f69-430e-a99b-519ad4004306" satisfied condition "success or failure" Feb 13 23:58:18.764: INFO: Trying to get logs from node jerma-node pod client-containers-8a102a0c-9f69-430e-a99b-519ad4004306 container test-container: STEP: delete the pod Feb 13 23:58:18.814: INFO: Waiting for pod client-containers-8a102a0c-9f69-430e-a99b-519ad4004306 to disappear Feb 13 23:58:18.831: INFO: Pod client-containers-8a102a0c-9f69-430e-a99b-519ad4004306 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:58:18.831: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-5327" for this suite. 
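The override this test verifies maps onto two pod fields: command replaces the image's ENTRYPOINT and args replaces its CMD. A minimal sketch with assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: override-demo              # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["/bin/echo"]         # overrides the image's ENTRYPOINT
    args: ["hello", "override"]    # overrides the image's CMD
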
• [SLOW TEST:8.697 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":280,"completed":37,"skipped":421,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:58:18.852: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 13 23:58:18.976: INFO: Create a RollingUpdate DaemonSet Feb 13 23:58:18.981: INFO: Check that daemon pods launch on every node of the cluster Feb 13 23:58:21.909: INFO: Number of nodes with available pods: 0 Feb 13 23:58:21.910: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:58:23.034: INFO: Number of nodes with available pods: 0 Feb 13 23:58:23.034: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:58:25.392: INFO: Number of nodes with available pods: 0 Feb 13 23:58:25.393: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:58:25.956: INFO: Number of nodes with available pods: 0 Feb 13 23:58:25.956: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:58:26.933: INFO: Number of nodes with available pods: 0 Feb 13 23:58:26.933: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:58:31.322: INFO: Number of nodes with available pods: 0 Feb 13 23:58:31.322: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:58:32.473: INFO: Number of nodes with available pods: 0 Feb 13 23:58:32.473: INFO: Node jerma-node is running more than one daemon pod Feb 13 23:58:32.922: INFO: Number of nodes with available pods: 1 Feb 13 23:58:32.922: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:58:33.931: INFO: Number of nodes with available pods: 1 Feb 13 23:58:33.931: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod Feb 13 23:58:34.945: INFO: Number of nodes with available pods: 2 Feb 13 23:58:34.945: INFO: Number of running nodes: 2, number of available pods: 2 Feb 13 23:58:34.945: INFO: Update the DaemonSet to trigger a rollout Feb 13 23:58:34.954: INFO: Updating DaemonSet daemon-set Feb 13 23:58:44.128: INFO: Roll back the DaemonSet before rollout is complete Feb 13 23:58:44.440: INFO: Updating DaemonSet daemon-set Feb 13 23:58:44.440: INFO: Make sure DaemonSet rollback is complete Feb 13 23:58:44.480: INFO: Wrong image for pod: daemon-set-plxwv. 
Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 13 23:58:44.480: INFO: Pod daemon-set-plxwv is not available Feb 13 23:58:45.519: INFO: Wrong image for pod: daemon-set-plxwv. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 13 23:58:45.519: INFO: Pod daemon-set-plxwv is not available Feb 13 23:58:46.533: INFO: Wrong image for pod: daemon-set-plxwv. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 13 23:58:46.533: INFO: Pod daemon-set-plxwv is not available Feb 13 23:58:47.518: INFO: Wrong image for pod: daemon-set-plxwv. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 13 23:58:47.518: INFO: Pod daemon-set-plxwv is not available Feb 13 23:58:49.180: INFO: Wrong image for pod: daemon-set-plxwv. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 13 23:58:49.180: INFO: Pod daemon-set-plxwv is not available Feb 13 23:58:49.814: INFO: Wrong image for pod: daemon-set-plxwv. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. Feb 13 23:58:49.814: INFO: Pod daemon-set-plxwv is not available Feb 13 23:58:50.553: INFO: Pod daemon-set-xp5r2 is not available [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5184, will wait for the garbage collector to delete the pods Feb 13 23:58:50.641: INFO: Deleting DaemonSet.extensions daemon-set took: 6.909889ms Feb 13 23:58:51.042: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.954451ms Feb 13 23:58:57.260: INFO: Number of nodes with available pods: 0 Feb 13 23:58:57.260: INFO: Number of running nodes: 0, number of available pods: 0 Feb 13 23:58:57.266: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5184/daemonsets","resourceVersion":"8264617"},"items":null} Feb 13 23:58:57.269: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5184/pods","resourceVersion":"8264617"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:58:57.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5184" for this suite. 
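The rollback flow above corresponds to a RollingUpdate DaemonSet being pushed a bad image (foo:non-existent in the log) and then undone before the rollout finishes, so healthy pods never restart unnecessarily. A sketch with assumed names; the kubectl commands in the comments are the imperative equivalent of what the test does through the API:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: demo-daemon-set            # hypothetical name
spec:
  updateStrategy:
    type: RollingUpdate            # revision history is kept, so a bad update can be undone
  selector:
    matchLabels:
      app: demo-daemon
  template:
    metadata:
      labels:
        app: demo-daemon
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
# Push a broken image, then roll back before the rollout completes:
#   kubectl set image daemonset/demo-daemon-set app=foo:non-existent
#   kubectl rollout undo daemonset/demo-daemon-set
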
• [SLOW TEST:38.439 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":280,"completed":38,"skipped":470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:58:57.291: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Feb 13 23:58:57.334: INFO: PodSpec: initContainers in spec.initContainers Feb 13 23:59:55.205: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-2f9fde33-fa9c-4248-8ae9-e03cb9e26053", GenerateName:"", Namespace:"init-container-3883", SelfLink:"/api/v1/namespaces/init-container-3883/pods/pod-init-2f9fde33-fa9c-4248-8ae9-e03cb9e26053", UID:"7b37c093-c388-43a5-a047-77176d820cd9", ResourceVersion:"8264802", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63717235137, loc:(*time.Location)(0x7e52ca0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"334530380"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-mpqhd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc002784240), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), 
PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mpqhd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mpqhd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-mpqhd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc00271a068), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), 
ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc002554720), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00271a100)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc00271a120)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc00271a128), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc00271a12c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235137, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235137, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235137, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235137, loc:(*time.Location)(0x7e52ca0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc0024c80a0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002d14070)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002d140e0)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://c164149d8dcfd6a1c5379d4528f27d9a904ab219d9bb4bade130b2f2e26678a2", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024c80e0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), 
Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0024c80c0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00271a1df)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 13 23:59:55.208: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-3883" for this suite. • [SLOW TEST:57.933 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":280,"completed":39,"skipped":500,"failed":0} S ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 13 23:59:55.225: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name cm-test-opt-del-ac15a086-c30e-4e02-876e-51475f86aecd STEP: Creating configMap with name cm-test-opt-upd-01c3a2da-e382-4345-a33f-8ebd59b59a03 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-ac15a086-c30e-4e02-876e-51475f86aecd STEP: Updating configmap cm-test-opt-upd-01c3a2da-e382-4345-a33f-8ebd59b59a03 STEP: Creating configMap with name cm-test-opt-create-f04d4ad2-dd1d-45b5-8651-3855aca34100 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 00:01:36.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5576" for this suite. 
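The optional-ConfigMap behavior observed above comes from optional: true on a projected volume source: the pod starts (and keeps running) whether or not the referenced ConfigMap exists, and the kubelet adds, updates, or removes the projected files as the ConfigMap is created, changed, or deleted. A sketch with assumed names:

apiVersion: v1
kind: Pod
metadata:
  name: projected-optional-demo    # hypothetical name
spec:
  containers:
  - name: test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-config
      mountPath: /etc/projected
  volumes:
  - name: projected-config
    projected:
      sources:
      - configMap:
          name: cm-may-not-exist   # hypothetical; may be absent when the pod starts
          optional: true           # absence is tolerated; files track the ConfigMap's lifecycle
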
• [SLOW TEST:101.543 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":40,"skipped":501,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 00:01:36.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 00:01:44.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-2195" for this suite. 
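Adoption here hinges purely on label selection: a bare pod already carries the label, and a ReplicationController created afterwards with a matching selector takes ownership of it instead of spawning a replacement. A sketch (the pod name and label mirror the step text above; the RC name and image are assumptions):

apiVersion: v1
kind: Pod
metadata:
  name: pod-adoption
  labels:
    name: pod-adoption
spec:
  containers:
  - name: app
    image: docker.io/library/httpd:2.4.38-alpine
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-adoption-rc            # hypothetical name
spec:
  replicas: 1
  selector:
    name: pod-adoption             # matches the pre-existing pod, so it is adopted
  template:
    metadata:
      labels:
        name: pod-adoption
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
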
• [SLOW TEST:7.398 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":280,"completed":41,"skipped":529,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 00:01:44.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-7194 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-7194 STEP: Creating statefulset with conflicting port in namespace statefulset-7194 STEP: Waiting until pod test-pod will start running in namespace statefulset-7194 STEP: Waiting until stateful pod ss-0 will be recreated and deleted at least once in namespace statefulset-7194 Feb 14 00:01:56.438: INFO: Observed stateful pod in namespace: statefulset-7194, name: ss-0, uid: 10582851-3bfb-43a3-9229-b944713f311d, status phase: Pending. Waiting for statefulset controller to delete. Feb 14 00:02:02.311: INFO: Observed stateful pod in namespace: statefulset-7194, name: ss-0, uid: 10582851-3bfb-43a3-9229-b944713f311d, status phase: Failed. Waiting for statefulset controller to delete. Feb 14 00:02:02.336: INFO: Observed stateful pod in namespace: statefulset-7194, name: ss-0, uid: 10582851-3bfb-43a3-9229-b944713f311d, status phase: Failed. Waiting for statefulset controller to delete. 
Feb 14 00:02:02.351: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-7194 STEP: Removing pod with conflicting port in namespace statefulset-7194 STEP: Waiting when stateful pod ss-0 will be recreated in namespace statefulset-7194 and will be in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Feb 14 00:02:12.600: INFO: Deleting all statefulset in ns statefulset-7194 Feb 14 00:02:12.604: INFO: Scaling statefulset ss to 0 Feb 14 00:02:22.656: INFO: Waiting for statefulset status.replicas updated to 0 Feb 14 00:02:22.661: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 00:02:22.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7194" for this suite. • [SLOW TEST:38.532 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":280,"completed":42,"skipped":541,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 00:02:22.701: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Feb 14 00:02:22.808: INFO: Waiting up to 5m0s for pod "downwardapi-volume-37ef9627-d5b3-44ea-ba5c-f30a038e4d5f" in namespace "projected-4834" to be "success or failure" Feb 14 00:02:22.813: INFO: Pod "downwardapi-volume-37ef9627-d5b3-44ea-ba5c-f30a038e4d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.715586ms Feb 14 00:02:24.821: INFO: Pod "downwardapi-volume-37ef9627-d5b3-44ea-ba5c-f30a038e4d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013385983s Feb 14 00:02:26.831: INFO: Pod "downwardapi-volume-37ef9627-d5b3-44ea-ba5c-f30a038e4d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023572343s Feb 14 00:02:28.837: INFO: Pod "downwardapi-volume-37ef9627-d5b3-44ea-ba5c-f30a038e4d5f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.029475524s Feb 14 00:02:30.844: INFO: Pod "downwardapi-volume-37ef9627-d5b3-44ea-ba5c-f30a038e4d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.036119807s Feb 14 00:02:32.854: INFO: Pod "downwardapi-volume-37ef9627-d5b3-44ea-ba5c-f30a038e4d5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.046248752s STEP: Saw pod success Feb 14 00:02:32.854: INFO: Pod "downwardapi-volume-37ef9627-d5b3-44ea-ba5c-f30a038e4d5f" satisfied condition "success or failure" Feb 14 00:02:32.865: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-37ef9627-d5b3-44ea-ba5c-f30a038e4d5f container client-container: STEP: delete the pod Feb 14 00:02:32.926: INFO: Waiting for pod downwardapi-volume-37ef9627-d5b3-44ea-ba5c-f30a038e4d5f to disappear Feb 14 00:02:32.931: INFO: Pod downwardapi-volume-37ef9627-d5b3-44ea-ba5c-f30a038e4d5f no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 00:02:32.931: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4834" for this suite. • [SLOW TEST:10.246 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":43,"skipped":559,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 00:02:32.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name projected-configmap-test-volume-306f6854-99ad-46f4-85f5-0ce403e451c4 STEP: Creating a pod to test consume configMaps Feb 14 00:02:33.134: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b4da8830-6f3b-421d-adf0-70149425b5d1" in namespace "projected-1867" to be "success or failure" Feb 14 00:02:33.138: INFO: Pod "pod-projected-configmaps-b4da8830-6f3b-421d-adf0-70149425b5d1": Phase="Pending", Reason="", readiness=false. Elapsed: 3.832009ms Feb 14 00:02:35.147: INFO: Pod "pod-projected-configmaps-b4da8830-6f3b-421d-adf0-70149425b5d1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013035245s Feb 14 00:02:37.151: INFO: Pod "pod-projected-configmaps-b4da8830-6f3b-421d-adf0-70149425b5d1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017291307s Feb 14 00:02:39.159: INFO: Pod "pod-projected-configmaps-b4da8830-6f3b-421d-adf0-70149425b5d1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 6.024820813s Feb 14 00:02:41.164: INFO: Pod "pod-projected-configmaps-b4da8830-6f3b-421d-adf0-70149425b5d1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.030308094s STEP: Saw pod success Feb 14 00:02:41.165: INFO: Pod "pod-projected-configmaps-b4da8830-6f3b-421d-adf0-70149425b5d1" satisfied condition "success or failure" Feb 14 00:02:41.168: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-b4da8830-6f3b-421d-adf0-70149425b5d1 container projected-configmap-volume-test: STEP: delete the pod Feb 14 00:02:41.226: INFO: Waiting for pod pod-projected-configmaps-b4da8830-6f3b-421d-adf0-70149425b5d1 to disappear Feb 14 00:02:41.240: INFO: Pod pod-projected-configmaps-b4da8830-6f3b-421d-adf0-70149425b5d1 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 00:02:41.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1867" for this suite. • [SLOW TEST:8.303 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":44,"skipped":574,"failed":0} [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 00:02:41.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1735 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: running the image docker.io/library/httpd:2.4.38-alpine Feb 14 00:02:41.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-264' Feb 14 00:02:44.289: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" Feb 14 00:02:44.289: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1740 Feb 14 00:02:46.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-264' Feb 14 00:02:46.725: INFO: stderr: "" Feb 14 00:02:46.725: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 00:02:46.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-264" for this suite. • [SLOW TEST:5.487 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1731 should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":280,"completed":45,"skipped":574,"failed":0} SSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 00:02:46.740: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 14 00:02:47.432: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 14 00:02:49.470: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 14 00:02:51.476: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 14 00:02:53.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 14 00:02:55.477: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 14 00:02:57.494: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235367, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 14 00:03:00.512: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 00:03:00.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5050" for this suite. STEP: Destroying namespace "webhook-5050-markers" for this suite. 
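Note: the behavior verified above is an API-server guard: ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects are themselves not subject to admission webhooks, so a misbehaving webhook can never block its own removal. A minimal sketch of checking this by hand; the configuration name, service reference, and path here are illustrative, not taken from the test:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-all-demo
webhooks:
- name: deny-all.example.com
  rules:
  - apiGroups: ["admissionregistration.k8s.io"]
    apiVersions: ["v1"]
    operations: ["DELETE"]
    resources: ["validatingwebhookconfigurations", "mutatingwebhookconfigurations"]
  clientConfig:
    service: {namespace: default, name: webhook-svc, path: /always-deny}
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
EOF
# Deletion still succeeds: webhook configuration objects bypass admission webhooks.
$ kubectl delete validatingwebhookconfiguration deny-all-demo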
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:14.162 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":280,"completed":46,"skipped":577,"failed":0} [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 00:03:00.903: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99 STEP: Creating service test in namespace statefulset-4178 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating statefulset ss in namespace statefulset-4178 Feb 14 00:03:01.182: INFO: Found 0 stateful pods, waiting for 1 Feb 14 00:03:11.193: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Pending - Ready=false Feb 14 00:03:21.191: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110 Feb 14 00:03:21.247: INFO: Deleting all statefulset in ns statefulset-4178 Feb 14 00:03:21.270: INFO: Scaling statefulset ss to 0 Feb 14 00:03:41.343: INFO: Waiting for statefulset status.replicas updated to 0 Feb 14 00:03:41.349: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 00:03:41.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4178" for this suite. 
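For reference, the scale subresource this test exercises is the same endpoint kubectl scale goes through; it can be read and updated without touching the StatefulSet spec directly. A rough manual equivalent, with namespace and replica count as used above but otherwise illustrative:

$ kubectl get --raw /apis/apps/v1/namespaces/statefulset-4178/statefulsets/ss/scale
$ kubectl scale statefulset ss --namespace=statefulset-4178 --replicas=2
$ kubectl get statefulset ss --namespace=statefulset-4178 -o jsonpath='{.spec.replicas}'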
• [SLOW TEST:40.518 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":280,"completed":47,"skipped":577,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 00:03:41.422: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should find a service from listing all namespaces [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: fetching services [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 00:03:41.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3827" for this suite. 
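The "fetching services" step above is a cluster-wide list. An equivalent by hand, sketched rather than the test's exact client call:

$ kubectl get services --all-namespaces
$ kubectl get --raw /api/v1/services    # the underlying all-namespaces list path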
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 •{"msg":"PASSED [sig-network] Services should find a service from listing all namespaces [Conformance]","total":280,"completed":48,"skipped":587,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 00:03:41.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Feb 14 00:03:42.293: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Feb 14 00:03:44.313: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235422, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235422, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235422, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235422, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 14 00:03:46.345: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235422, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235422, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235422, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235422, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 14 00:03:48.332: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235422, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235422, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235422, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235422, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Feb 14 00:03:50.321: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235422, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235422, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235422, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235422, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Feb 14 00:03:53.388: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Registering the webhook via the AdmissionRegistration API Feb 14 00:03:53.487: INFO: Waiting for webhook configuration to be ready... STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook Feb 14 00:04:01.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-1686 to-be-attached-pod -i -c=container1' Feb 14 00:04:01.777: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 00:04:01.799: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1686" for this suite. STEP: Destroying namespace "webhook-1686-markers" for this suite. 
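The rc: 1 above is the expected outcome: kubectl attach performs a CONNECT on the pods/attach subresource, which the registered webhook rejects. A sketch of the rule shape involved and the failing command; names are illustrative, and the webhook the test actually registers lives in test/e2e/apimachinery/webhook.go:

  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]
    resources: ["pods/attach"]

$ kubectl attach to-be-attached-pod -i -c=container1 --namespace=webhook-1686
# -> denied by the admission webhook, so kubectl exits non-zero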
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:20.556 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":280,"completed":49,"skipped":594,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 00:04:02.057: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 00:04:02.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-2852" for this suite. 
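Table output is negotiated through the Accept header; a backend that cannot render rows as a meta.k8s.io Table answers 406 Not Acceptable, which is what this test asserts. A manual probe through kubectl proxy; built-in pods do implement Table, so this particular call returns rows rather than 406 and only illustrates the content negotiation:

$ kubectl proxy --port=8001 &
$ curl -H 'Accept: application/json;as=Table;v=v1;g=meta.k8s.io' \
    http://localhost:8001/api/v1/namespaces/default/pods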
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":280,"completed":50,"skipped":614,"failed":0} SSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 00:04:02.256: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-volume-d53ef284-af85-400a-8213-ecf5b736b5d3 STEP: Creating a pod to test consume configMaps Feb 14 00:04:02.405: INFO: Waiting up to 5m0s for pod "pod-configmaps-b58ae54c-c6e2-4fa6-bddb-6523c4b7e249" in namespace "configmap-9075" to be "success or failure" Feb 14 00:04:02.409: INFO: Pod "pod-configmaps-b58ae54c-c6e2-4fa6-bddb-6523c4b7e249": Phase="Pending", Reason="", readiness=false. Elapsed: 3.453059ms Feb 14 00:04:04.418: INFO: Pod "pod-configmaps-b58ae54c-c6e2-4fa6-bddb-6523c4b7e249": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012685622s Feb 14 00:04:06.428: INFO: Pod "pod-configmaps-b58ae54c-c6e2-4fa6-bddb-6523c4b7e249": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022439647s Feb 14 00:04:08.436: INFO: Pod "pod-configmaps-b58ae54c-c6e2-4fa6-bddb-6523c4b7e249": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031240981s Feb 14 00:04:10.445: INFO: Pod "pod-configmaps-b58ae54c-c6e2-4fa6-bddb-6523c4b7e249": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039646845s Feb 14 00:04:12.455: INFO: Pod "pod-configmaps-b58ae54c-c6e2-4fa6-bddb-6523c4b7e249": Phase="Pending", Reason="", readiness=false. Elapsed: 10.049531706s Feb 14 00:04:14.465: INFO: Pod "pod-configmaps-b58ae54c-c6e2-4fa6-bddb-6523c4b7e249": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.059784961s STEP: Saw pod success Feb 14 00:04:14.465: INFO: Pod "pod-configmaps-b58ae54c-c6e2-4fa6-bddb-6523c4b7e249" satisfied condition "success or failure" Feb 14 00:04:14.469: INFO: Trying to get logs from node jerma-node pod pod-configmaps-b58ae54c-c6e2-4fa6-bddb-6523c4b7e249 container configmap-volume-test: STEP: delete the pod Feb 14 00:04:14.549: INFO: Waiting for pod pod-configmaps-b58ae54c-c6e2-4fa6-bddb-6523c4b7e249 to disappear Feb 14 00:04:14.557: INFO: Pod pod-configmaps-b58ae54c-c6e2-4fa6-bddb-6523c4b7e249 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 00:04:14.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9075" for this suite. 
• [SLOW TEST:12.315 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":51,"skipped":622,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 00:04:14.575: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 00:04:21.033: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-8125" for this suite. STEP: Destroying namespace "nsdeletetest-5908" for this suite. Feb 14 00:04:21.053: INFO: Namespace nsdeletetest-5908 was already deleted STEP: Destroying namespace "nsdeletetest-465" for this suite. 
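Namespace deletion is cascading: services created in it are garbage-collected with everything else, which is why the recreated namespace above comes back empty. By hand, with illustrative names:

$ kubectl create namespace nsdelete-demo
$ kubectl create service clusterip test-svc --tcp=80:80 --namespace=nsdelete-demo
$ kubectl delete namespace nsdelete-demo    # blocks until finalization completes
$ kubectl create namespace nsdelete-demo
$ kubectl get services --namespace=nsdelete-demo    # No resources found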
• [SLOW TEST:6.487 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":280,"completed":52,"skipped":645,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 00:04:21.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 00:04:29.266: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2552" for this suite. 
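The read-only check comes down to one container-level knob, securityContext.readOnlyRootFilesystem: with it set, any write to the container's root filesystem fails, while volume mounts stay writable. A minimal sketch, with image and command illustrative:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo test > /file"]   # fails: read-only file system
    securityContext:
      readOnlyRootFilesystem: true
EOF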
• [SLOW TEST:8.216 seconds] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 when scheduling a read only busybox container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187 should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":53,"skipped":674,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 00:04:29.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update Feb 14 00:04:29.417: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1264 /api/v1/namespaces/watch-1264/configmaps/e2e-watch-test-resource-version a35c2fac-2c11-4806-8856-bde5eeb061df 8266021 0 2020-02-14 00:04:29 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} Feb 14 00:04:29.417: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-1264 /api/v1/namespaces/watch-1264/configmaps/e2e-watch-test-resource-version a35c2fac-2c11-4806-8856-bde5eeb061df 8266022 0 2020-02-14 00:04:29 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 00:04:29.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-1264" for this suite. 
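Starting a watch at a given resourceVersion replays every change recorded after that point, which is why only the second MODIFIED and the DELETED events arrive above. The raw API form via kubectl proxy, substituting the resourceVersion captured from the first update for <rv>:

$ kubectl proxy --port=8001 &
$ curl 'http://localhost:8001/api/v1/namespaces/watch-1264/configmaps?watch=1&resourceVersion=<rv>'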
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":280,"completed":54,"skipped":718,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 00:04:29.430: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service nodeport-service with the type=NodePort in namespace services-7421 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-7421 STEP: creating replication controller externalsvc in namespace services-7421 I0214 00:04:29.584785 9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-7421, replica count: 2 I0214 00:04:32.636249 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0214 00:04:35.637194 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0214 00:04:38.638325 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0214 00:04:41.639580 9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Feb 14 00:04:41.688: INFO: Creating new exec pod Feb 14 00:04:47.778: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7421 execpod7r7bh -- /bin/sh -x -c nslookup nodeport-service' Feb 14 00:04:48.354: INFO: stderr: "I0214 00:04:48.089729 999 log.go:172] (0xc00094eb00) (0xc00091c280) Create stream\nI0214 00:04:48.090102 999 log.go:172] (0xc00094eb00) (0xc00091c280) Stream added, broadcasting: 1\nI0214 00:04:48.110527 999 log.go:172] (0xc00094eb00) Reply frame received for 1\nI0214 00:04:48.110741 999 log.go:172] (0xc00094eb00) (0xc0006e68c0) Create stream\nI0214 00:04:48.110760 999 log.go:172] (0xc00094eb00) (0xc0006e68c0) Stream added, broadcasting: 3\nI0214 00:04:48.112426 999 log.go:172] (0xc00094eb00) Reply frame received for 3\nI0214 00:04:48.112485 999 log.go:172] (0xc00094eb00) (0xc000543540) Create stream\nI0214 00:04:48.112499 999 log.go:172] (0xc00094eb00) (0xc000543540) Stream added, broadcasting: 5\nI0214 00:04:48.115425 999 log.go:172] (0xc00094eb00) Reply frame received for 5\nI0214 00:04:48.183325 999 log.go:172] (0xc00094eb00) Data frame received for 5\nI0214 00:04:48.183451 999 log.go:172] (0xc000543540) (5) Data frame handling\nI0214 00:04:48.183541 999 log.go:172] 
(0xc000543540) (5) Data frame sent\n+ nslookup nodeport-service\nI0214 00:04:48.204827 999 log.go:172] (0xc00094eb00) Data frame received for 3\nI0214 00:04:48.204909 999 log.go:172] (0xc0006e68c0) (3) Data frame handling\nI0214 00:04:48.204929 999 log.go:172] (0xc0006e68c0) (3) Data frame sent\nI0214 00:04:48.206135 999 log.go:172] (0xc00094eb00) Data frame received for 3\nI0214 00:04:48.206148 999 log.go:172] (0xc0006e68c0) (3) Data frame handling\nI0214 00:04:48.206158 999 log.go:172] (0xc0006e68c0) (3) Data frame sent\nI0214 00:04:48.335616 999 log.go:172] (0xc00094eb00) (0xc0006e68c0) Stream removed, broadcasting: 3\nI0214 00:04:48.335979 999 log.go:172] (0xc00094eb00) Data frame received for 1\nI0214 00:04:48.336226 999 log.go:172] (0xc00094eb00) (0xc000543540) Stream removed, broadcasting: 5\nI0214 00:04:48.336304 999 log.go:172] (0xc00091c280) (1) Data frame handling\nI0214 00:04:48.336353 999 log.go:172] (0xc00091c280) (1) Data frame sent\nI0214 00:04:48.336378 999 log.go:172] (0xc00094eb00) (0xc00091c280) Stream removed, broadcasting: 1\nI0214 00:04:48.336417 999 log.go:172] (0xc00094eb00) Go away received\nI0214 00:04:48.338405 999 log.go:172] (0xc00094eb00) (0xc00091c280) Stream removed, broadcasting: 1\nI0214 00:04:48.338450 999 log.go:172] (0xc00094eb00) (0xc0006e68c0) Stream removed, broadcasting: 3\nI0214 00:04:48.338476 999 log.go:172] (0xc00094eb00) (0xc000543540) Stream removed, broadcasting: 5\n" Feb 14 00:04:48.354: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-7421.svc.cluster.local\tcanonical name = externalsvc.services-7421.svc.cluster.local.\nName:\texternalsvc.services-7421.svc.cluster.local\nAddress: 10.96.51.8\n\n" STEP: deleting ReplicationController externalsvc in namespace services-7421, will wait for the garbage collector to delete the pods Feb 14 00:04:48.432: INFO: Deleting ReplicationController externalsvc took: 9.480492ms Feb 14 00:04:50.433: INFO: Terminating ReplicationController externalsvc pods took: 2.001227461s Feb 14 00:05:03.253: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 00:05:03.282: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7421" for this suite. 
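The type flip above amounts to rewriting the service spec: set type: ExternalName, point externalName at the target FQDN, and clear the fields that only make sense for cluster-IP services. A rough kubectl equivalent, hedged because the exact fields to clear vary slightly by version and the test does this through the client library:

$ kubectl patch service nodeport-service --namespace=services-7421 --type=merge -p \
  '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-7421.svc.cluster.local","clusterIP":null,"ports":null}}'
$ kubectl run dns-test --namespace=services-7421 --image=busybox:1.28 --restart=Never -it --rm -- \
    nslookup nodeport-service
# -> resolves as a CNAME to externalsvc.services-7421.svc.cluster.local, as in the stdout above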
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695 • [SLOW TEST:33.867 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":280,"completed":55,"skipped":719,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Feb 14 00:05:03.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Feb 14 00:05:03.402: INFO: Creating deployment "webserver-deployment" Feb 14 00:05:03.407: INFO: Waiting for observed generation 1 Feb 14 00:05:06.348: INFO: Waiting for all required pods to come up Feb 14 00:05:06.402: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running Feb 14 00:05:44.960: INFO: Waiting for deployment "webserver-deployment" to complete Feb 14 00:05:44.971: INFO: Updating deployment "webserver-deployment" with a non-existent image Feb 14 00:05:44.979: INFO: Updating deployment webserver-deployment Feb 14 00:05:44.979: INFO: Waiting for observed generation 2 Feb 14 00:05:48.251: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 Feb 14 00:05:48.824: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 Feb 14 00:05:48.834: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Feb 14 00:05:48.896: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 Feb 14 00:05:48.896: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 Feb 14 00:05:49.008: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas Feb 14 00:05:49.018: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas Feb 14 00:05:49.019: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 Feb 14 00:05:49.026: INFO: Updating deployment webserver-deployment Feb 14 00:05:49.026: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas Feb 14 00:05:49.937: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 Feb 14 00:05:50.462: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68 Feb 14 00:05:55.104: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-9025 /apis/apps/v1/namespaces/deployment-9025/deployments/webserver-deployment 2b0ca6b0-9a5b-44a4-9952-ec9ee8e216cb 8266516 3 2020-02-14 00:05:03 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00074d1e8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-14 00:05:49 +0000 UTC,LastTransitionTime:2020-02-14 00:05:49 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-02-14 00:05:53 +0000 UTC,LastTransitionTime:2020-02-14 00:05:03 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} Feb 14 00:05:58.149: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-9025 /apis/apps/v1/namespaces/deployment-9025/replicasets/webserver-deployment-c7997dcc8 7c8e524e-1380-42a1-8562-7572a6bbc35b 8266512 3 2020-02-14 00:05:44 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 2b0ca6b0-9a5b-44a4-9952-ec9ee8e216cb 0xc000fdbfb7 0xc000fdbfb8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003300028 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} Feb 14 00:05:58.149: INFO: All old ReplicaSets of Deployment "webserver-deployment": Feb 14 00:05:58.149: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-9025 /apis/apps/v1/namespaces/deployment-9025/replicasets/webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 8266500 3 2020-02-14 00:05:03 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 2b0ca6b0-9a5b-44a4-9952-ec9ee8e216cb 0xc000fdbef7 0xc000fdbef8}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000fdbf58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} Feb 14 00:06:02.100: INFO: Pod "webserver-deployment-595b5b9587-4m6bl" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-4m6bl webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-4m6bl 80109a1d-a519-428b-a4a7-fbd0e59e80ea 8266335 0 2020-02-14 00:05:03 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc002f0abd7 0xc002f0abd8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-02-14 00:05:03 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-14 00:05:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://6def96ca8bd1e26c3e6fe947a41b4aa6c9f7ce532328fe1d052584b1a12253d4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.100: INFO: Pod "webserver-deployment-595b5b9587-7htlk" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-7htlk webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-7htlk 6f850953-f41a-47cc-9e68-1def860a9945 8266517 0 2020-02-14 00:05:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc002f0ad60 0xc002f0ad61}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:No
Execute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:51 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-14 00:05:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.101: INFO: Pod "webserver-deployment-595b5b9587-82bbn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-82bbn webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-82bbn 0c70aa9e-3734-430a-acce-b57ea278e185 8266452 0 2020-02-14 00:05:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc002f0aea7 0xc002f0aea8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.101: INFO: Pod "webserver-deployment-595b5b9587-brj5k" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-brj5k webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-brj5k 5eb3f2db-d605-444d-b4ff-761d87d51281 8266489 0 2020-02-14 00:05:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc002f0afe7 0xc002f0afe8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.102: INFO: Pod "webserver-deployment-595b5b9587-c2n48" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c2n48 webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-c2n48 54ec9100-9617-4447-a32a-741ccf525420 8266329 0 2020-02-14 
00:05:03 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc002f0b107 0xc002f0b108}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:03 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-02-14 00:05:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-14 00:05:31 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://ae54305f5e68584639b03f6dcf190ec4d2382314a9e06f764586606a08d0d2ce,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.103: INFO: Pod "webserver-deployment-595b5b9587-dlmrm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-dlmrm webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-dlmrm c7fbaa5e-30b8-4af6-b91b-0052d3959056 8266332 0 2020-02-14 00:05:03 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc002f0b270 0xc002f0b271}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoEx
ecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-02-14 00:05:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-14 00:05:33 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://56c3afbb409a89d3ceb6950596fcd5ce349721028a7155443bd04776afc639f7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.104: INFO: Pod "webserver-deployment-595b5b9587-jzt48" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-jzt48 webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-jzt48 50668f71-346c-49ab-9516-8d3548f39f0d 8266337 0 2020-02-14 00:05:03 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc002f0b3d0 0xc002f0b3d1}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:40 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-02-14 00:05:03 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-14 00:05:36 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://124c8bbc425e47de290f4d95cfde5eee0b369d5d0ee894a2863163fd8c031627,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.104: INFO: Pod "webserver-deployment-595b5b9587-khq4j" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-khq4j webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-khq4j 0f32dd68-dcb5-45b9-9bee-d4d3f0ea40d3 8266487 0 2020-02-14 00:05:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc002f0b5a0 0xc002f0b5a1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:No
Execute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.105: INFO: Pod "webserver-deployment-595b5b9587-l6mkn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-l6mkn webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-l6mkn 529ba4c3-5f8a-4bd5-957a-59bdd7b13161 8266477 0 2020-02-14 00:05:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc002f0b6a7 0xc002f0b6a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSe
conds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.105: INFO: Pod "webserver-deployment-595b5b9587-lcpmd" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lcpmd webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-lcpmd d8ad3672-e6ea-4e47-b5cf-3034dea5d196 8266364 0 2020-02-14 00:05:03 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc0024a8197 0xc0024a8198}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleratio
n{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.3,StartTime:2020-02-14 00:05:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-14 00:05:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://88b8fb0d0347b0054bbe5d1bce52fa4adf5ed4532a1d3b009e39b1c24481e300,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.106: INFO: Pod "webserver-deployment-595b5b9587-ndcvw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ndcvw webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-ndcvw ff33c91f-dca5-4e38-bf78-39c456487ba7 8266478 0 2020-02-14 00:05:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc0024a8490 0xc0024a8491}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.106: INFO: Pod "webserver-deployment-595b5b9587-nlttr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nlttr webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-nlttr a58700ff-ab2f-40ff-82a1-368eae241049 8266479 0 2020-02-14 00:05:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc0024a8687 0xc0024a8688}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.106: INFO: Pod "webserver-deployment-595b5b9587-qb9vz" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-qb9vz webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-qb9vz 26f3717c-2361-4d2f-96e6-a6efeff3676a 
8266494 0 2020-02-14 00:05:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc0024a88a7 0xc0024a88a8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.107: INFO: Pod "webserver-deployment-595b5b9587-s4lj7" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-s4lj7 webserver-deployment-595b5b9587- deployment-9025 
/api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-s4lj7 ffa456c9-b54c-4f61-af0b-4148d7ed1712 8266528 0 2020-02-14 00:05:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc0024a8a17 0xc0024a8a18}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 
00:05:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-14 00:05:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.107: INFO: Pod "webserver-deployment-595b5b9587-sf789" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sf789 webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-sf789 4cd7ea9a-a679-4bfb-a5e8-9f12db948267 8266480 0 2020-02-14 00:05:49 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc0024a8b97 0xc0024a8b98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Op
erator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:49 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-14 00:05:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.107: INFO: Pod "webserver-deployment-595b5b9587-sn6kr" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-sn6kr webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-sn6kr 2479137f-a4ae-40e2-972c-92d457d85eb1 8266496 0 2020-02-14 00:05:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc0024a8d07 0xc0024a8d08}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.108: INFO: Pod "webserver-deployment-595b5b9587-tcln8" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-tcln8 webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-tcln8 bee64238-7138-4fb8-966e-3a46bd790ea6 8266343 0 2020-02-14 00:05:03 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc0024a8e37 0xc0024a8e38}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:41 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.8,StartTime:2020-02-14 00:05:03 
+0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-14 00:05:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://f8e64985110234a3386420ddfed39d2530bc4d3e8d52c8cf25d4c1e56f09bf96,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 14 00:06:02.108: INFO: Pod "webserver-deployment-595b5b9587-vhqzb" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vhqzb webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-vhqzb f880dd74-9422-4540-a2d3-7d1487394819 8266370 0 2020-02-14 00:05:03 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc0024a8fb0 0xc0024a8fb1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-14 00:05:03 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-14 00:05:43 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://e97f02a5356744a450bd7d8aea53f994526f3fb94b319cdc554c7fadc7af7500,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 14 00:06:02.109: INFO: Pod "webserver-deployment-595b5b9587-wdxzf" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wdxzf webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-wdxzf 57a2db48-63dc-49a4-be1e-456114ee39f6 8266379 0 2020-02-14 00:05:03 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc0024a9220 0xc0024a9221}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:44 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:03 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-14 00:05:03 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-14 00:05:41 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://9b21dd378761267f2b690c06a9997578e442d6ae00c81a04aab0b40ca38a34de,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 14 00:06:02.109: INFO: Pod "webserver-deployment-595b5b9587-z5s9b" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z5s9b webserver-deployment-595b5b9587- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-595b5b9587-z5s9b 7c154179-8f8a-4fe8-bfbe-3b6fb2040307 8266486 0 2020-02-14 00:05:50 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 2463a4e7-d880-4a9f-b428-ea73602c3c41 0xc0024a94a0 0xc0024a94a1}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 14 00:06:02.109: INFO: Pod "webserver-deployment-c7997dcc8-52qh8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-52qh8 webserver-deployment-c7997dcc8- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-c7997dcc8-52qh8 7e09918e-9a3b-4c7e-85a5-a8d69c415a68 8266423 0 2020-02-14 00:05:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7c8e524e-1380-42a1-8562-7572a6bbc35b 0xc0024a95c7 0xc0024a95c8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-14 00:05:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 14 00:06:02.110: INFO: Pod "webserver-deployment-c7997dcc8-68f2s" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-68f2s webserver-deployment-c7997dcc8- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-c7997dcc8-68f2s d2c5ce37-fad6-4a71-a50a-40dd86e1fa1f 8266409 0 2020-02-14 00:05:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7c8e524e-1380-42a1-8562-7572a6bbc35b 0xc0024a9887 0xc0024a9888}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-14 00:05:45 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 14 00:06:02.111: INFO: Pod "webserver-deployment-c7997dcc8-785xs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-785xs webserver-deployment-c7997dcc8- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-c7997dcc8-785xs f8f3651b-2da0-4227-95f3-0e273780775e 8266460 0 2020-02-14 00:05:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7c8e524e-1380-42a1-8562-7572a6bbc35b 0xc0024a9ab7 0xc0024a9ab8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 14 00:06:02.111: INFO: Pod "webserver-deployment-c7997dcc8-7qvls" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7qvls webserver-deployment-c7997dcc8- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-c7997dcc8-7qvls 933a6229-5a15-47df-8d13-1ce5fd84b6cf 8266432 0 2020-02-14 00:05:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7c8e524e-1380-42a1-8562-7572a6bbc35b 0xc0024a9d57 0xc0024a9d58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-14 00:05:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 14 00:06:02.112: INFO: Pod "webserver-deployment-c7997dcc8-7rgcd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-7rgcd webserver-deployment-c7997dcc8- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-c7997dcc8-7rgcd d6bcd78a-d1c7-43df-ba38-622ffe302407 8266510 0 2020-02-14 00:05:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7c8e524e-1380-42a1-8562-7572a6bbc35b 0xc0024a9ef7 0xc0024a9ef8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.112: INFO: Pod "webserver-deployment-c7997dcc8-8nrpl" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8nrpl webserver-deployment-c7997dcc8- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-c7997dcc8-8nrpl 89b25d1a-ba7a-4a48-9465-affce01bc78f 8266482 0 2020-02-14 00:05:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet 
webserver-deployment-c7997dcc8 7c8e524e-1380-42a1-8562-7572a6bbc35b 0xc0022de0b7 0xc0022de0b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.113: INFO: Pod "webserver-deployment-c7997dcc8-g267t" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-g267t webserver-deployment-c7997dcc8- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-c7997dcc8-g267t 857379b1-4030-4de1-93ae-c3623df3a5fe 8266523 0 2020-02-14 00:05:49 +0000 UTC 
map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7c8e524e-1380-42a1-8562-7572a6bbc35b 0xc0022de1e7 0xc0022de1e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-14 00:05:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 14 00:06:02.113: INFO: Pod "webserver-deployment-c7997dcc8-gf822" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gf822 webserver-deployment-c7997dcc8- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-c7997dcc8-gf822 9678db44-14b1-4233-8a30-11729eb420ee 8266485 0 2020-02-14 00:05:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7c8e524e-1380-42a1-8562-7572a6bbc35b 0xc0022de457 0xc0022de458}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 14 00:06:02.113: INFO: Pod "webserver-deployment-c7997dcc8-gq8s2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-gq8s2 webserver-deployment-c7997dcc8- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-c7997dcc8-gq8s2 0f141ba4-3dc9-4129-8b9e-1ce6c52978f3 8266483 0 2020-02-14 00:05:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7c8e524e-1380-42a1-8562-7572a6bbc35b 0xc0022de687 0xc0022de688}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 14 00:06:02.114: INFO: Pod "webserver-deployment-c7997dcc8-hl7jn" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hl7jn webserver-deployment-c7997dcc8- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-c7997dcc8-hl7jn 8d35317b-f8ff-4db9-9e8c-ad5b69192e0d 8266413 0 2020-02-14 00:05:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7c8e524e-1380-42a1-8562-7572a6bbc35b 0xc0022de887 0xc0022de888}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:45 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-14 00:05:45 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 14 00:06:02.114: INFO: Pod "webserver-deployment-c7997dcc8-q2d4k" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-q2d4k webserver-deployment-c7997dcc8- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-c7997dcc8-q2d4k b83a1a5f-c423-4f03-9f60-bb04d92dffa6 8266481 0 2020-02-14 00:05:50 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7c8e524e-1380-42a1-8562-7572a6bbc35b 0xc0022df3d7 0xc0022df3d8}] []
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.114: INFO: Pod "webserver-deployment-c7997dcc8-v4qz9" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v4qz9 webserver-deployment-c7997dcc8- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-c7997dcc8-v4qz9 fa2fab08-cc7d-4f65-a871-9175393a118f 8266498 0 2020-02-14 00:05:49 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
7c8e524e-1380-42a1-8562-7572a6bbc35b 0xc0022df877 0xc0022df878}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:50 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:49 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-02-14 00:05:50 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} Feb 14 00:06:02.115: INFO: Pod "webserver-deployment-c7997dcc8-x7jr2" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-x7jr2 webserver-deployment-c7997dcc8- deployment-9025 /api/v1/namespaces/deployment-9025/pods/webserver-deployment-c7997dcc8-x7jr2 a2d3d58c-d0a8-4b74-8ef5-855204172a5c 8266433 0 2020-02-14 00:05:45 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 7c8e524e-1380-42a1-8562-7572a6bbc35b 0xc0022dfea7 0xc0022dfea8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-42nkh,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-42nkh,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-42nkh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,Readines
sGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:05:45 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-14 00:05:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Feb 14 00:06:02.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-9025" for this suite. 

• [SLOW TEST:61.423 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":280,"completed":56,"skipped":743,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:06:04.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 00:06:09.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb 14 00:06:13.239: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1010 create -f -'
Feb 14 00:06:24.046: INFO: stderr: ""
Feb 14 00:06:24.047: INFO: stdout: "e2e-test-crd-publish-openapi-8478-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb 14 00:06:24.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1010 delete e2e-test-crd-publish-openapi-8478-crds test-cr'
Feb 14 00:06:25.285: INFO: stderr: ""
Feb 14 00:06:25.285: INFO: stdout: "e2e-test-crd-publish-openapi-8478-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Feb 14 00:06:25.286: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1010 apply -f -'
Feb 14 00:06:27.434: INFO: stderr: ""
Feb 14 00:06:27.434: INFO: stdout: "e2e-test-crd-publish-openapi-8478-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Feb 14 00:06:27.435: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1010 delete e2e-test-crd-publish-openapi-8478-crds test-cr'
Feb 14 00:06:27.894: INFO: stderr: ""
Feb 14 00:06:27.894: INFO: stdout: "e2e-test-crd-publish-openapi-8478-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Feb 14 00:06:27.894: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8478-crds'
Feb 14 00:06:28.340: INFO: stderr: ""
Feb 14 00:06:28.340: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8478-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:06:38.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1010" for this suite.

• [SLOW TEST:34.746 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":280,"completed":57,"skipped":766,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:06:39.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with configMap that has name projected-configmap-test-upd-97382fcb-1e2b-4fe9-9cc9-7a1df942fdd9
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-97382fcb-1e2b-4fe9-9cc9-7a1df942fdd9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:07:15.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1906" for this suite.

• [SLOW TEST:36.127 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":58,"skipped":769,"failed":0}
SSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:07:15.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7085.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-7085.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7085.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-7085.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-7085.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7085.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 00:07:29.910: INFO: DNS probes using dns-7085/dns-test-82a9b230-eea4-44a5-8c5f-2579c00fca17 succeeded
STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:07:30.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7085" for this suite.

• [SLOW TEST:14.500 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":280,"completed":59,"skipped":775,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:07:30.100: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 14 00:07:30.826: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 14 00:07:32.851: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:07:34.863: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:07:36.860: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:07:38.862: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:07:40.864: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235650, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 14 00:07:43.919: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:07:44.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5707" for this suite.
STEP: Destroying namespace "webhook-5707-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:14.175 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":280,"completed":60,"skipped":778,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:07:44.277: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 00:07:44.452: INFO: (0) /api/v1/nodes/jerma-node/proxy/logs/:
alternatives.log
alternatives.l... (200; 16.046325ms)
Feb 14 00:07:44.456: INFO: (1) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.917874ms)
Feb 14 00:07:44.460: INFO: (2) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.507343ms)
Feb 14 00:07:44.464: INFO: (3) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.908766ms)
Feb 14 00:07:44.469: INFO: (4) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.036393ms)
Feb 14 00:07:44.480: INFO: (5) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 11.040881ms)
Feb 14 00:07:44.485: INFO: (6) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.583302ms)
Feb 14 00:07:44.493: INFO: (7) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.36305ms)
Feb 14 00:07:44.508: INFO: (8) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 14.923062ms)
Feb 14 00:07:44.625: INFO: (9) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 116.53554ms)
Feb 14 00:07:44.634: INFO: (10) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 8.27227ms)
Feb 14 00:07:44.651: INFO: (11) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 15.832143ms)
Feb 14 00:07:44.665: INFO: (12) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 13.702219ms)
Feb 14 00:07:44.676: INFO: (13) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 10.447371ms)
Feb 14 00:07:44.689: INFO: (14) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 12.279757ms)
Feb 14 00:07:44.739: INFO: (15) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 49.432385ms)
Feb 14 00:07:44.745: INFO: (16) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 6.235352ms)
Feb 14 00:07:44.750: INFO: (17) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.086955ms)
Feb 14 00:07:44.755: INFO: (18) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.10714ms)
Feb 14 00:07:44.759: INFO: (19) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.092109ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:07:44.759: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3293" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":280,"completed":61,"skipped":788,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:07:44.775: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Feb 14 00:07:44.930: INFO: Created pod &Pod{ObjectMeta:{dns-4931  dns-4931 /api/v1/namespaces/dns-4931/pods/dns-4931 abccc7aa-e24c-451a-8ee6-f5ec2cc163c6 8267059 0 2020-02-14 00:07:44 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-jxtlw,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-jxtlw,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-jxtlw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Feb 14 00:07:44.949: INFO: The status of Pod dns-4931 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:07:46.960: INFO: The status of Pod dns-4931 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:07:48.961: INFO: The status of Pod dns-4931 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:07:50.957: INFO: The status of Pod dns-4931 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:07:52.957: INFO: The status of Pod dns-4931 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:07:54.955: INFO: The status of Pod dns-4931 is Running (Ready = true)
STEP: Verifying customized DNS suffix list is configured on pod...
Feb 14 00:07:54.956: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-4931 PodName:dns-4931 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 00:07:54.956: INFO: >>> kubeConfig: /root/.kube/config
I0214 00:07:55.005175       9 log.go:172] (0xc002edc420) (0xc002b11180) Create stream
I0214 00:07:55.005407       9 log.go:172] (0xc002edc420) (0xc002b11180) Stream added, broadcasting: 1
I0214 00:07:55.013218       9 log.go:172] (0xc002edc420) Reply frame received for 1
I0214 00:07:55.013288       9 log.go:172] (0xc002edc420) (0xc001ea1b80) Create stream
I0214 00:07:55.013304       9 log.go:172] (0xc002edc420) (0xc001ea1b80) Stream added, broadcasting: 3
I0214 00:07:55.019168       9 log.go:172] (0xc002edc420) Reply frame received for 3
I0214 00:07:55.019330       9 log.go:172] (0xc002edc420) (0xc002b11220) Create stream
I0214 00:07:55.019361       9 log.go:172] (0xc002edc420) (0xc002b11220) Stream added, broadcasting: 5
I0214 00:07:55.022869       9 log.go:172] (0xc002edc420) Reply frame received for 5
I0214 00:07:55.140948       9 log.go:172] (0xc002edc420) Data frame received for 3
I0214 00:07:55.141350       9 log.go:172] (0xc001ea1b80) (3) Data frame handling
I0214 00:07:55.141594       9 log.go:172] (0xc001ea1b80) (3) Data frame sent
I0214 00:07:55.207994       9 log.go:172] (0xc002edc420) (0xc001ea1b80) Stream removed, broadcasting: 3
I0214 00:07:55.208081       9 log.go:172] (0xc002edc420) Data frame received for 1
I0214 00:07:55.208105       9 log.go:172] (0xc002b11180) (1) Data frame handling
I0214 00:07:55.208131       9 log.go:172] (0xc002b11180) (1) Data frame sent
I0214 00:07:55.208153       9 log.go:172] (0xc002edc420) (0xc002b11180) Stream removed, broadcasting: 1
I0214 00:07:55.208580       9 log.go:172] (0xc002edc420) (0xc002b11220) Stream removed, broadcasting: 5
I0214 00:07:55.208755       9 log.go:172] (0xc002edc420) Go away received
I0214 00:07:55.209814       9 log.go:172] (0xc002edc420) (0xc002b11180) Stream removed, broadcasting: 1
I0214 00:07:55.209849       9 log.go:172] (0xc002edc420) (0xc001ea1b80) Stream removed, broadcasting: 3
I0214 00:07:55.209868       9 log.go:172] (0xc002edc420) (0xc002b11220) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Feb 14 00:07:55.209: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-4931 PodName:dns-4931 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 00:07:55.210: INFO: >>> kubeConfig: /root/.kube/config
I0214 00:07:55.257623       9 log.go:172] (0xc002b85c30) (0xc002396000) Create stream
I0214 00:07:55.257703       9 log.go:172] (0xc002b85c30) (0xc002396000) Stream added, broadcasting: 1
I0214 00:07:55.261905       9 log.go:172] (0xc002b85c30) Reply frame received for 1
I0214 00:07:55.261940       9 log.go:172] (0xc002b85c30) (0xc0023960a0) Create stream
I0214 00:07:55.261952       9 log.go:172] (0xc002b85c30) (0xc0023960a0) Stream added, broadcasting: 3
I0214 00:07:55.264305       9 log.go:172] (0xc002b85c30) Reply frame received for 3
I0214 00:07:55.264474       9 log.go:172] (0xc002b85c30) (0xc000b73680) Create stream
I0214 00:07:55.264494       9 log.go:172] (0xc002b85c30) (0xc000b73680) Stream added, broadcasting: 5
I0214 00:07:55.266610       9 log.go:172] (0xc002b85c30) Reply frame received for 5
I0214 00:07:55.363219       9 log.go:172] (0xc002b85c30) Data frame received for 3
I0214 00:07:55.363477       9 log.go:172] (0xc0023960a0) (3) Data frame handling
I0214 00:07:55.363541       9 log.go:172] (0xc0023960a0) (3) Data frame sent
I0214 00:07:55.461215       9 log.go:172] (0xc002b85c30) Data frame received for 1
I0214 00:07:55.462280       9 log.go:172] (0xc002396000) (1) Data frame handling
I0214 00:07:55.462463       9 log.go:172] (0xc002396000) (1) Data frame sent
I0214 00:07:55.463875       9 log.go:172] (0xc002b85c30) (0xc0023960a0) Stream removed, broadcasting: 3
I0214 00:07:55.464175       9 log.go:172] (0xc002b85c30) (0xc002396000) Stream removed, broadcasting: 1
I0214 00:07:55.465906       9 log.go:172] (0xc002b85c30) (0xc000b73680) Stream removed, broadcasting: 5
I0214 00:07:55.466776       9 log.go:172] (0xc002b85c30) (0xc002396000) Stream removed, broadcasting: 1
I0214 00:07:55.466931       9 log.go:172] (0xc002b85c30) (0xc0023960a0) Stream removed, broadcasting: 3
I0214 00:07:55.467055       9 log.go:172] (0xc002b85c30) (0xc000b73680) Stream removed, broadcasting: 5
I0214 00:07:55.467209       9 log.go:172] (0xc002b85c30) Go away received
Feb 14 00:07:55.467: INFO: Deleting pod dns-4931...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:07:55.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-4931" for this suite.

• [SLOW TEST:11.249 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":280,"completed":62,"skipped":798,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:07:56.026: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:08:13.835: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8593" for this suite.

• [SLOW TEST:17.832 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":280,"completed":63,"skipped":820,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:08:13.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-727705ae-c9cc-4571-89ac-d4d0f6435a40
STEP: Creating a pod to test consume secrets
Feb 14 00:08:14.087: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f70d4f79-6812-4016-a527-d9f785823e30" in namespace "projected-5256" to be "success or failure"
Feb 14 00:08:14.220: INFO: Pod "pod-projected-secrets-f70d4f79-6812-4016-a527-d9f785823e30": Phase="Pending", Reason="", readiness=false. Elapsed: 132.668961ms
Feb 14 00:08:16.235: INFO: Pod "pod-projected-secrets-f70d4f79-6812-4016-a527-d9f785823e30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.147801898s
Feb 14 00:08:18.244: INFO: Pod "pod-projected-secrets-f70d4f79-6812-4016-a527-d9f785823e30": Phase="Pending", Reason="", readiness=false. Elapsed: 4.157064048s
Feb 14 00:08:20.257: INFO: Pod "pod-projected-secrets-f70d4f79-6812-4016-a527-d9f785823e30": Phase="Pending", Reason="", readiness=false. Elapsed: 6.169810198s
Feb 14 00:08:22.264: INFO: Pod "pod-projected-secrets-f70d4f79-6812-4016-a527-d9f785823e30": Phase="Pending", Reason="", readiness=false. Elapsed: 8.177096738s
Feb 14 00:08:24.270: INFO: Pod "pod-projected-secrets-f70d4f79-6812-4016-a527-d9f785823e30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.18312438s
STEP: Saw pod success
Feb 14 00:08:24.271: INFO: Pod "pod-projected-secrets-f70d4f79-6812-4016-a527-d9f785823e30" satisfied condition "success or failure"
Feb 14 00:08:24.274: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-f70d4f79-6812-4016-a527-d9f785823e30 container projected-secret-volume-test: 
STEP: delete the pod
Feb 14 00:08:24.408: INFO: Waiting for pod pod-projected-secrets-f70d4f79-6812-4016-a527-d9f785823e30 to disappear
Feb 14 00:08:24.419: INFO: Pod pod-projected-secrets-f70d4f79-6812-4016-a527-d9f785823e30 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:08:24.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5256" for this suite.

• [SLOW TEST:10.573 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":64,"skipped":856,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:08:24.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override arguments
Feb 14 00:08:24.666: INFO: Waiting up to 5m0s for pod "client-containers-2a2bf36b-fdfd-439a-91ad-eb2a92449395" in namespace "containers-1937" to be "success or failure"
Feb 14 00:08:24.673: INFO: Pod "client-containers-2a2bf36b-fdfd-439a-91ad-eb2a92449395": Phase="Pending", Reason="", readiness=false. Elapsed: 6.814578ms
Feb 14 00:08:26.680: INFO: Pod "client-containers-2a2bf36b-fdfd-439a-91ad-eb2a92449395": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014539634s
Feb 14 00:08:28.767: INFO: Pod "client-containers-2a2bf36b-fdfd-439a-91ad-eb2a92449395": Phase="Pending", Reason="", readiness=false. Elapsed: 4.100839506s
Feb 14 00:08:30.786: INFO: Pod "client-containers-2a2bf36b-fdfd-439a-91ad-eb2a92449395": Phase="Pending", Reason="", readiness=false. Elapsed: 6.120551973s
Feb 14 00:08:32.832: INFO: Pod "client-containers-2a2bf36b-fdfd-439a-91ad-eb2a92449395": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.166341284s
STEP: Saw pod success
Feb 14 00:08:32.833: INFO: Pod "client-containers-2a2bf36b-fdfd-439a-91ad-eb2a92449395" satisfied condition "success or failure"
Feb 14 00:08:32.839: INFO: Trying to get logs from node jerma-node pod client-containers-2a2bf36b-fdfd-439a-91ad-eb2a92449395 container test-container: 
STEP: delete the pod
Feb 14 00:08:32.895: INFO: Waiting for pod client-containers-2a2bf36b-fdfd-439a-91ad-eb2a92449395 to disappear
Feb 14 00:08:32.994: INFO: Pod client-containers-2a2bf36b-fdfd-439a-91ad-eb2a92449395 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:08:32.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1937" for this suite.

• [SLOW TEST:8.579 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":280,"completed":65,"skipped":870,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:08:33.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 14 00:08:33.188: INFO: Waiting up to 5m0s for pod "pod-3d017429-9875-4b3e-960f-5d1d95abad52" in namespace "emptydir-9266" to be "success or failure"
Feb 14 00:08:33.201: INFO: Pod "pod-3d017429-9875-4b3e-960f-5d1d95abad52": Phase="Pending", Reason="", readiness=false. Elapsed: 12.829725ms
Feb 14 00:08:35.208: INFO: Pod "pod-3d017429-9875-4b3e-960f-5d1d95abad52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019850404s
Feb 14 00:08:37.219: INFO: Pod "pod-3d017429-9875-4b3e-960f-5d1d95abad52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031420238s
Feb 14 00:08:39.241: INFO: Pod "pod-3d017429-9875-4b3e-960f-5d1d95abad52": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053225628s
Feb 14 00:08:41.251: INFO: Pod "pod-3d017429-9875-4b3e-960f-5d1d95abad52": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063149352s
Feb 14 00:08:43.282: INFO: Pod "pod-3d017429-9875-4b3e-960f-5d1d95abad52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09427269s
STEP: Saw pod success
Feb 14 00:08:43.282: INFO: Pod "pod-3d017429-9875-4b3e-960f-5d1d95abad52" satisfied condition "success or failure"
Feb 14 00:08:43.287: INFO: Trying to get logs from node jerma-node pod pod-3d017429-9875-4b3e-960f-5d1d95abad52 container test-container: 
STEP: delete the pod
Feb 14 00:08:43.374: INFO: Waiting for pod pod-3d017429-9875-4b3e-960f-5d1d95abad52 to disappear
Feb 14 00:08:43.431: INFO: Pod pod-3d017429-9875-4b3e-960f-5d1d95abad52 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:08:43.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9266" for this suite.

• [SLOW TEST:10.445 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":66,"skipped":888,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:08:43.466: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 14 00:08:43.773: INFO: Waiting up to 5m0s for pod "pod-06089338-243c-48cb-bf57-085441cbe0ce" in namespace "emptydir-5675" to be "success or failure"
Feb 14 00:08:43.837: INFO: Pod "pod-06089338-243c-48cb-bf57-085441cbe0ce": Phase="Pending", Reason="", readiness=false. Elapsed: 64.410374ms
Feb 14 00:08:45.850: INFO: Pod "pod-06089338-243c-48cb-bf57-085441cbe0ce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.076880182s
Feb 14 00:08:47.860: INFO: Pod "pod-06089338-243c-48cb-bf57-085441cbe0ce": Phase="Pending", Reason="", readiness=false. Elapsed: 4.087112343s
Feb 14 00:08:49.868: INFO: Pod "pod-06089338-243c-48cb-bf57-085441cbe0ce": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095687288s
Feb 14 00:08:51.879: INFO: Pod "pod-06089338-243c-48cb-bf57-085441cbe0ce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.106438627s
STEP: Saw pod success
Feb 14 00:08:51.880: INFO: Pod "pod-06089338-243c-48cb-bf57-085441cbe0ce" satisfied condition "success or failure"
Feb 14 00:08:51.885: INFO: Trying to get logs from node jerma-node pod pod-06089338-243c-48cb-bf57-085441cbe0ce container test-container: 
STEP: delete the pod
Feb 14 00:08:52.069: INFO: Waiting for pod pod-06089338-243c-48cb-bf57-085441cbe0ce to disappear
Feb 14 00:08:52.126: INFO: Pod pod-06089338-243c-48cb-bf57-085441cbe0ce no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:08:52.127: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5675" for this suite.

• [SLOW TEST:8.679 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":67,"skipped":918,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:08:52.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-e75f7258-adb8-4afb-8b55-1daf4e02d082
STEP: Creating a pod to test consume secrets
Feb 14 00:08:52.276: INFO: Waiting up to 5m0s for pod "pod-secrets-be13d67a-dc41-419f-9a4b-95c3b93f7e05" in namespace "secrets-4925" to be "success or failure"
Feb 14 00:08:52.284: INFO: Pod "pod-secrets-be13d67a-dc41-419f-9a4b-95c3b93f7e05": Phase="Pending", Reason="", readiness=false. Elapsed: 7.115287ms
Feb 14 00:08:54.292: INFO: Pod "pod-secrets-be13d67a-dc41-419f-9a4b-95c3b93f7e05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015955382s
Feb 14 00:08:56.307: INFO: Pod "pod-secrets-be13d67a-dc41-419f-9a4b-95c3b93f7e05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030574355s
Feb 14 00:08:58.318: INFO: Pod "pod-secrets-be13d67a-dc41-419f-9a4b-95c3b93f7e05": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04149611s
Feb 14 00:09:00.329: INFO: Pod "pod-secrets-be13d67a-dc41-419f-9a4b-95c3b93f7e05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.052845276s
STEP: Saw pod success
Feb 14 00:09:00.330: INFO: Pod "pod-secrets-be13d67a-dc41-419f-9a4b-95c3b93f7e05" satisfied condition "success or failure"
Feb 14 00:09:00.338: INFO: Trying to get logs from node jerma-node pod pod-secrets-be13d67a-dc41-419f-9a4b-95c3b93f7e05 container secret-env-test: 
STEP: delete the pod
Feb 14 00:09:00.421: INFO: Waiting for pod pod-secrets-be13d67a-dc41-419f-9a4b-95c3b93f7e05 to disappear
Feb 14 00:09:00.426: INFO: Pod pod-secrets-be13d67a-dc41-419f-9a4b-95c3b93f7e05 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:09:00.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4925" for this suite.

• [SLOW TEST:8.344 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":280,"completed":68,"skipped":939,"failed":0}
SS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:09:00.492: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 00:09:00.647: INFO: Pod name rollover-pod: Found 0 pods out of 1
Feb 14 00:09:05.809: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 14 00:09:07.826: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Feb 14 00:09:09.833: INFO: Creating deployment "test-rollover-deployment"
Feb 14 00:09:09.854: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Feb 14 00:09:11.885: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Feb 14 00:09:11.902: INFO: Ensure that both replica sets have 1 created replica
Feb 14 00:09:11.910: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Feb 14 00:09:11.924: INFO: Updating deployment test-rollover-deployment
Feb 14 00:09:11.924: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Feb 14 00:09:13.974: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Feb 14 00:09:13.986: INFO: Make sure deployment "test-rollover-deployment" is complete
Feb 14 00:09:13.996: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 00:09:13.996: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235752, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:09:16.011: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 00:09:16.011: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235752, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:09:18.012: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 00:09:18.012: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235752, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:09:20.007: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 00:09:20.007: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235759, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:09:22.016: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 00:09:22.017: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235759, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:09:24.017: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 00:09:24.018: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235759, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:09:26.015: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 00:09:26.015: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235759, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:09:28.009: INFO: all replica sets need to contain the pod-template-hash label
Feb 14 00:09:28.010: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235759, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717235749, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:09:30.010: INFO: 
Feb 14 00:09:30.010: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 14 00:09:30.032: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-7676 /apis/apps/v1/namespaces/deployment-7676/deployments/test-rollover-deployment b130ed48-754b-4a71-bc63-874f0749f7f7 8267568 2 2020-02-14 00:09:09 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000aa4a28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-14 00:09:09 +0000 UTC,LastTransitionTime:2020-02-14 00:09:09 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-02-14 00:09:29 +0000 UTC,LastTransitionTime:2020-02-14 00:09:09 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 14 00:09:30.037: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-7676 /apis/apps/v1/namespaces/deployment-7676/replicasets/test-rollover-deployment-574d6dfbff 9e3f2933-fee1-4575-a4d3-f1bfdf3d260f 8267558 2 2020-02-14 00:09:11 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment b130ed48-754b-4a71-bc63-874f0749f7f7 0xc000aa4ea7 0xc000aa4ea8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000aa4f18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 14 00:09:30.037: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Feb 14 00:09:30.037: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-7676 /apis/apps/v1/namespaces/deployment-7676/replicasets/test-rollover-controller daaf6e5c-2220-4c80-ac0b-a750892cd025 8267567 2 2020-02-14 00:09:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment b130ed48-754b-4a71-bc63-874f0749f7f7 0xc000aa4dd7 0xc000aa4dd8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000aa4e38  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 14 00:09:30.037: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-7676 /apis/apps/v1/namespaces/deployment-7676/replicasets/test-rollover-deployment-f6c94f66c 934c70a2-8715-47e5-9e30-8952eebfbc62 8267506 2 2020-02-14 00:09:09 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment b130ed48-754b-4a71-bc63-874f0749f7f7 0xc000aa4f80 0xc000aa4f81}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000aa4ff8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 14 00:09:30.044: INFO: Pod "test-rollover-deployment-574d6dfbff-7mkbk" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-7mkbk test-rollover-deployment-574d6dfbff- deployment-7676 /api/v1/namespaces/deployment-7676/pods/test-rollover-deployment-574d6dfbff-7mkbk 07ad584e-3f18-4b4c-8b95-76cd8a7bb580 8267532 0 2020-02-14 00:09:12 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 9e3f2933-fee1-4575-a4d3-f1bfdf3d260f 0xc0010caa37 0xc0010caa38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xsd2p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xsd2p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xsd2p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:09:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:09:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:09:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:09:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-14 00:09:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-14 00:09:18 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://7bb1a9837c8d0d0a7388505793cb07cf1535669e89f5ba80b04f7873dd4395a9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:09:30.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7676" for this suite.

• [SLOW TEST:29.565 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":280,"completed":69,"skipped":941,"failed":0}
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:09:30.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 14 00:09:40.758: INFO: Successfully updated pod "labelsupdate6d2949fb-e4b3-47f3-9220-b38feba972ae"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:09:42.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8220" for this suite.

• [SLOW TEST:12.794 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":70,"skipped":941,"failed":0}
SS
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:09:42.853: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-6/configmap-test-0a562d71-dca9-4b3a-8b57-0701b9385727
STEP: Creating a pod to test consume configMaps
Feb 14 00:09:42.967: INFO: Waiting up to 5m0s for pod "pod-configmaps-26403a37-f15c-46b9-a16f-4b0cc11c5de8" in namespace "configmap-6" to be "success or failure"
Feb 14 00:09:42.982: INFO: Pod "pod-configmaps-26403a37-f15c-46b9-a16f-4b0cc11c5de8": Phase="Pending", Reason="", readiness=false. Elapsed: 15.630338ms
Feb 14 00:09:44.991: INFO: Pod "pod-configmaps-26403a37-f15c-46b9-a16f-4b0cc11c5de8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024422245s
Feb 14 00:09:46.999: INFO: Pod "pod-configmaps-26403a37-f15c-46b9-a16f-4b0cc11c5de8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032427817s
Feb 14 00:09:49.007: INFO: Pod "pod-configmaps-26403a37-f15c-46b9-a16f-4b0cc11c5de8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040302325s
Feb 14 00:09:51.016: INFO: Pod "pod-configmaps-26403a37-f15c-46b9-a16f-4b0cc11c5de8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.048721341s
Feb 14 00:09:53.027: INFO: Pod "pod-configmaps-26403a37-f15c-46b9-a16f-4b0cc11c5de8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060086384s
STEP: Saw pod success
Feb 14 00:09:53.027: INFO: Pod "pod-configmaps-26403a37-f15c-46b9-a16f-4b0cc11c5de8" satisfied condition "success or failure"
Feb 14 00:09:53.032: INFO: Trying to get logs from node jerma-node pod pod-configmaps-26403a37-f15c-46b9-a16f-4b0cc11c5de8 container env-test: 
STEP: delete the pod
Feb 14 00:09:53.302: INFO: Waiting for pod pod-configmaps-26403a37-f15c-46b9-a16f-4b0cc11c5de8 to disappear
Feb 14 00:09:53.313: INFO: Pod pod-configmaps-26403a37-f15c-46b9-a16f-4b0cc11c5de8 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:09:53.313: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6" for this suite.

• [SLOW TEST:10.505 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":71,"skipped":943,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:09:53.360: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3495.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3495.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3495.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-3495.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3495.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3495.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3495.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3495.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3495.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3495.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 00:10:05.736: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:05.763: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:05.781: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:05.806: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:05.849: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:05.883: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:05.893: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:05.901: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:05.915: INFO: Lookups using dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3495.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3495.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local jessie_udp@dns-test-service-2.dns-3495.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3495.svc.cluster.local]

Feb 14 00:10:10.922: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:10.932: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:10.936: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:10.940: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:10.954: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:10.957: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:10.959: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:10.961: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:10.966: INFO: Lookups using dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3495.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3495.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local jessie_udp@dns-test-service-2.dns-3495.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3495.svc.cluster.local]

Feb 14 00:10:15.925: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:15.931: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:15.936: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:15.941: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:15.953: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:15.957: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:15.960: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:15.966: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:15.980: INFO: Lookups using dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3495.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3495.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local jessie_udp@dns-test-service-2.dns-3495.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3495.svc.cluster.local]

Feb 14 00:10:20.926: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:20.933: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:20.938: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:20.954: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:20.972: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:20.976: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:20.981: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:20.985: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:20.996: INFO: Lookups using dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3495.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3495.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local jessie_udp@dns-test-service-2.dns-3495.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3495.svc.cluster.local]

Feb 14 00:10:25.923: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:25.927: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:25.931: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:25.935: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:25.952: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:25.955: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:25.958: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:25.962: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:25.968: INFO: Lookups using dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3495.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3495.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local jessie_udp@dns-test-service-2.dns-3495.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3495.svc.cluster.local]

Feb 14 00:10:30.930: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:30.942: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:30.950: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:30.957: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:30.973: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:30.978: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:30.982: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:30.986: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3495.svc.cluster.local from pod dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8: the server could not find the requested resource (get pods dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8)
Feb 14 00:10:31.007: INFO: Lookups using dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3495.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3495.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3495.svc.cluster.local jessie_udp@dns-test-service-2.dns-3495.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3495.svc.cluster.local]

Feb 14 00:10:35.993: INFO: DNS probes using dns-3495/dns-test-528f9233-0c4f-486f-ba92-c0f2e8dcb1f8 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:10:36.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3495" for this suite.

• [SLOW TEST:42.791 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":280,"completed":72,"skipped":953,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:10:36.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test substitution in container's command
Feb 14 00:10:37.219: INFO: Waiting up to 5m0s for pod "var-expansion-3453b368-5c90-479e-85f7-bd449c0edfea" in namespace "var-expansion-51" to be "success or failure"
Feb 14 00:10:37.226: INFO: Pod "var-expansion-3453b368-5c90-479e-85f7-bd449c0edfea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.484619ms
Feb 14 00:10:39.249: INFO: Pod "var-expansion-3453b368-5c90-479e-85f7-bd449c0edfea": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029342999s
Feb 14 00:10:41.260: INFO: Pod "var-expansion-3453b368-5c90-479e-85f7-bd449c0edfea": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04073343s
Feb 14 00:10:43.268: INFO: Pod "var-expansion-3453b368-5c90-479e-85f7-bd449c0edfea": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049041483s
Feb 14 00:10:45.276: INFO: Pod "var-expansion-3453b368-5c90-479e-85f7-bd449c0edfea": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057273283s
Feb 14 00:10:47.286: INFO: Pod "var-expansion-3453b368-5c90-479e-85f7-bd449c0edfea": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.066930036s
STEP: Saw pod success
Feb 14 00:10:47.286: INFO: Pod "var-expansion-3453b368-5c90-479e-85f7-bd449c0edfea" satisfied condition "success or failure"
Feb 14 00:10:47.291: INFO: Trying to get logs from node jerma-node pod var-expansion-3453b368-5c90-479e-85f7-bd449c0edfea container dapi-container: 
STEP: delete the pod
Feb 14 00:10:47.408: INFO: Waiting for pod var-expansion-3453b368-5c90-479e-85f7-bd449c0edfea to disappear
Feb 14 00:10:47.417: INFO: Pod var-expansion-3453b368-5c90-479e-85f7-bd449c0edfea no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:10:47.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-51" for this suite.

• [SLOW TEST:11.285 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":280,"completed":73,"skipped":988,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:10:47.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-6aae6121-efab-4533-a8fd-44f703a91256
STEP: Creating a pod to test consume secrets
Feb 14 00:10:47.730: INFO: Waiting up to 5m0s for pod "pod-secrets-3615ad3c-9e9f-4b74-96b3-6cbf0e6434ab" in namespace "secrets-3594" to be "success or failure"
Feb 14 00:10:47.786: INFO: Pod "pod-secrets-3615ad3c-9e9f-4b74-96b3-6cbf0e6434ab": Phase="Pending", Reason="", readiness=false. Elapsed: 56.025852ms
Feb 14 00:10:49.816: INFO: Pod "pod-secrets-3615ad3c-9e9f-4b74-96b3-6cbf0e6434ab": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086131993s
Feb 14 00:10:51.883: INFO: Pod "pod-secrets-3615ad3c-9e9f-4b74-96b3-6cbf0e6434ab": Phase="Pending", Reason="", readiness=false. Elapsed: 4.152251047s
Feb 14 00:10:53.901: INFO: Pod "pod-secrets-3615ad3c-9e9f-4b74-96b3-6cbf0e6434ab": Phase="Pending", Reason="", readiness=false. Elapsed: 6.171054785s
Feb 14 00:10:55.920: INFO: Pod "pod-secrets-3615ad3c-9e9f-4b74-96b3-6cbf0e6434ab": Phase="Pending", Reason="", readiness=false. Elapsed: 8.189534768s
Feb 14 00:10:57.959: INFO: Pod "pod-secrets-3615ad3c-9e9f-4b74-96b3-6cbf0e6434ab": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.228232083s
STEP: Saw pod success
Feb 14 00:10:57.959: INFO: Pod "pod-secrets-3615ad3c-9e9f-4b74-96b3-6cbf0e6434ab" satisfied condition "success or failure"
Feb 14 00:10:57.966: INFO: Trying to get logs from node jerma-node pod pod-secrets-3615ad3c-9e9f-4b74-96b3-6cbf0e6434ab container secret-volume-test: 
STEP: delete the pod
Feb 14 00:10:57.996: INFO: Waiting for pod pod-secrets-3615ad3c-9e9f-4b74-96b3-6cbf0e6434ab to disappear
Feb 14 00:10:57.999: INFO: Pod pod-secrets-3615ad3c-9e9f-4b74-96b3-6cbf0e6434ab no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:10:57.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3594" for this suite.
STEP: Destroying namespace "secret-namespace-1689" for this suite.

• [SLOW TEST:10.600 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":280,"completed":74,"skipped":1038,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:10:58.042: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Feb 14 00:11:00.731: INFO: >>> kubeConfig: /root/.kube/config
Feb 14 00:11:04.584: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:11:19.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4314" for this suite.

• [SLOW TEST:21.033 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":280,"completed":75,"skipped":1074,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:11:19.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 00:11:19.128: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:11:27.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7907" for this suite.

• [SLOW TEST:8.404 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":280,"completed":76,"skipped":1095,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:11:27.482: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name secret-emptykey-test-27a55e9b-59b4-441a-be7d-8b71b02f021f
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:11:27.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5622" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":280,"completed":77,"skipped":1105,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:11:27.601: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-a2bfef5d-0254-4dd0-a047-2672f2ee34e2
STEP: Creating a pod to test consume configMaps
Feb 14 00:11:27.789: INFO: Waiting up to 5m0s for pod "pod-configmaps-b3984066-4603-46d2-852d-ec4117d55ed9" in namespace "configmap-140" to be "success or failure"
Feb 14 00:11:27.880: INFO: Pod "pod-configmaps-b3984066-4603-46d2-852d-ec4117d55ed9": Phase="Pending", Reason="", readiness=false. Elapsed: 89.810674ms
Feb 14 00:11:29.947: INFO: Pod "pod-configmaps-b3984066-4603-46d2-852d-ec4117d55ed9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15764913s
Feb 14 00:11:31.957: INFO: Pod "pod-configmaps-b3984066-4603-46d2-852d-ec4117d55ed9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.166836871s
Feb 14 00:11:33.967: INFO: Pod "pod-configmaps-b3984066-4603-46d2-852d-ec4117d55ed9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.177292977s
Feb 14 00:11:35.996: INFO: Pod "pod-configmaps-b3984066-4603-46d2-852d-ec4117d55ed9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.206126893s
STEP: Saw pod success
Feb 14 00:11:35.996: INFO: Pod "pod-configmaps-b3984066-4603-46d2-852d-ec4117d55ed9" satisfied condition "success or failure"
Feb 14 00:11:36.000: INFO: Trying to get logs from node jerma-node pod pod-configmaps-b3984066-4603-46d2-852d-ec4117d55ed9 container configmap-volume-test: 
STEP: delete the pod
Feb 14 00:11:36.246: INFO: Waiting for pod pod-configmaps-b3984066-4603-46d2-852d-ec4117d55ed9 to disappear
Feb 14 00:11:36.297: INFO: Pod pod-configmaps-b3984066-4603-46d2-852d-ec4117d55ed9 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:11:36.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-140" for this suite.

• [SLOW TEST:8.718 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":78,"skipped":1150,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:11:36.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8525 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8525;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8525 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8525;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8525.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-8525.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8525.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-8525.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8525.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-8525.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8525.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-8525.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8525.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-8525.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8525.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-8525.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8525.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 143.149.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.149.143_udp@PTR;check="$$(dig +tcp +noall +answer +search 143.149.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.149.143_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8525 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8525;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8525 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8525;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-8525.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-8525.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-8525.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-8525.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-8525.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-8525.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-8525.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-8525.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-8525.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-8525.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-8525.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-8525.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8525.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 143.149.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.149.143_udp@PTR;check="$$(dig +tcp +noall +answer +search 143.149.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.149.143_tcp@PTR;sleep 1; done
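
Each prober above is a shell loop: for every name it runs dig once over UDP and once with +tcp, and writes an OK marker under /results only when the answer section is non-empty; the test later reads those markers back. The same UDP-versus-TCP split can be reproduced in Go with a custom resolver dialer, as sketched below; the probed name is taken from the log, the rest is illustrative:

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    // lookup resolves name against the configured DNS server, optionally forcing
    // TCP, mirroring the dig +notcp / +tcp pairs in the prober loops above.
    func lookup(name string, useTCP bool) ([]string, error) {
    	r := &net.Resolver{
    		PreferGo: true,
    		Dial: func(ctx context.Context, network, addr string) (net.Conn, error) {
    			if useTCP {
    				network = "tcp"
    			}
    			d := net.Dialer{Timeout: 2 * time.Second}
    			return d.DialContext(ctx, network, addr)
    		},
    	}
    	return r.LookupHost(context.TODO(), name)
    }

    func main() {
    	for _, useTCP := range []bool{false, true} {
    		addrs, err := lookup("dns-test-service.dns-8525.svc.cluster.local", useTCP)
    		fmt.Printf("tcp=%v addrs=%v err=%v\n", useTCP, addrs, err)
    	}
    }
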

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 00:11:48.652: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:48.655: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:48.659: INFO: Unable to read wheezy_udp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:48.667: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:48.672: INFO: Unable to read wheezy_udp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:48.676: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:48.679: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:48.686: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:48.711: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:48.715: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:48.719: INFO: Unable to read jessie_udp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:48.722: INFO: Unable to read jessie_tcp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:48.724: INFO: Unable to read jessie_udp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:48.727: INFO: Unable to read jessie_tcp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:48.730: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:48.732: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:48.748: INFO: Lookups using dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8525 wheezy_tcp@dns-test-service.dns-8525 wheezy_udp@dns-test-service.dns-8525.svc wheezy_tcp@dns-test-service.dns-8525.svc wheezy_udp@_http._tcp.dns-test-service.dns-8525.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8525.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8525 jessie_tcp@dns-test-service.dns-8525 jessie_udp@dns-test-service.dns-8525.svc jessie_tcp@dns-test-service.dns-8525.svc jessie_udp@_http._tcp.dns-test-service.dns-8525.svc jessie_tcp@_http._tcp.dns-test-service.dns-8525.svc]

Feb 14 00:11:53.771: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:53.781: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:53.788: INFO: Unable to read wheezy_udp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:53.798: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:53.808: INFO: Unable to read wheezy_udp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:53.814: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:53.826: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:53.839: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:53.936: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:53.943: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:53.948: INFO: Unable to read jessie_udp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:53.954: INFO: Unable to read jessie_tcp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:53.959: INFO: Unable to read jessie_udp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:53.966: INFO: Unable to read jessie_tcp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:53.971: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:53.977: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:54.013: INFO: Lookups using dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8525 wheezy_tcp@dns-test-service.dns-8525 wheezy_udp@dns-test-service.dns-8525.svc wheezy_tcp@dns-test-service.dns-8525.svc wheezy_udp@_http._tcp.dns-test-service.dns-8525.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8525.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8525 jessie_tcp@dns-test-service.dns-8525 jessie_udp@dns-test-service.dns-8525.svc jessie_tcp@dns-test-service.dns-8525.svc jessie_udp@_http._tcp.dns-test-service.dns-8525.svc jessie_tcp@_http._tcp.dns-test-service.dns-8525.svc]

Feb 14 00:11:58.759: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:58.763: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:58.769: INFO: Unable to read wheezy_udp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:58.775: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:58.779: INFO: Unable to read wheezy_udp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:58.784: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:58.789: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:58.794: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:58.832: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:58.837: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:58.845: INFO: Unable to read jessie_udp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:58.849: INFO: Unable to read jessie_tcp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:58.854: INFO: Unable to read jessie_udp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:58.858: INFO: Unable to read jessie_tcp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:58.861: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:58.866: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:11:58.894: INFO: Lookups using dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8525 wheezy_tcp@dns-test-service.dns-8525 wheezy_udp@dns-test-service.dns-8525.svc wheezy_tcp@dns-test-service.dns-8525.svc wheezy_udp@_http._tcp.dns-test-service.dns-8525.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8525.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8525 jessie_tcp@dns-test-service.dns-8525 jessie_udp@dns-test-service.dns-8525.svc jessie_tcp@dns-test-service.dns-8525.svc jessie_udp@_http._tcp.dns-test-service.dns-8525.svc jessie_tcp@_http._tcp.dns-test-service.dns-8525.svc]

Feb 14 00:12:03.765: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:03.777: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:03.787: INFO: Unable to read wheezy_udp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:03.797: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:03.805: INFO: Unable to read wheezy_udp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:03.811: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:03.816: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:03.822: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:03.892: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:03.898: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:03.901: INFO: Unable to read jessie_udp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:03.906: INFO: Unable to read jessie_tcp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:03.911: INFO: Unable to read jessie_udp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:03.917: INFO: Unable to read jessie_tcp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:03.920: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:03.924: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:04.003: INFO: Lookups using dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8525 wheezy_tcp@dns-test-service.dns-8525 wheezy_udp@dns-test-service.dns-8525.svc wheezy_tcp@dns-test-service.dns-8525.svc wheezy_udp@_http._tcp.dns-test-service.dns-8525.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8525.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8525 jessie_tcp@dns-test-service.dns-8525 jessie_udp@dns-test-service.dns-8525.svc jessie_tcp@dns-test-service.dns-8525.svc jessie_udp@_http._tcp.dns-test-service.dns-8525.svc jessie_tcp@_http._tcp.dns-test-service.dns-8525.svc]

Feb 14 00:12:08.757: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:08.762: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:08.766: INFO: Unable to read wheezy_udp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:08.770: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:08.774: INFO: Unable to read wheezy_udp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:08.777: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:08.780: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:08.785: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:08.826: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:08.832: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:08.836: INFO: Unable to read jessie_udp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:08.854: INFO: Unable to read jessie_tcp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:08.876: INFO: Unable to read jessie_udp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:08.885: INFO: Unable to read jessie_tcp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:08.891: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:08.898: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:08.939: INFO: Lookups using dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8525 wheezy_tcp@dns-test-service.dns-8525 wheezy_udp@dns-test-service.dns-8525.svc wheezy_tcp@dns-test-service.dns-8525.svc wheezy_udp@_http._tcp.dns-test-service.dns-8525.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8525.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8525 jessie_tcp@dns-test-service.dns-8525 jessie_udp@dns-test-service.dns-8525.svc jessie_tcp@dns-test-service.dns-8525.svc jessie_udp@_http._tcp.dns-test-service.dns-8525.svc jessie_tcp@_http._tcp.dns-test-service.dns-8525.svc]

Feb 14 00:12:13.762: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:13.768: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:13.775: INFO: Unable to read wheezy_udp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:13.782: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:13.788: INFO: Unable to read wheezy_udp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:13.792: INFO: Unable to read wheezy_tcp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:13.798: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:13.802: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:13.843: INFO: Unable to read jessie_udp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:13.848: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:13.852: INFO: Unable to read jessie_udp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:13.857: INFO: Unable to read jessie_tcp@dns-test-service.dns-8525 from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:13.872: INFO: Unable to read jessie_udp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:13.879: INFO: Unable to read jessie_tcp@dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:13.888: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:13.894: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-8525.svc from pod dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820: the server could not find the requested resource (get pods dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820)
Feb 14 00:12:13.928: INFO: Lookups using dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-8525 wheezy_tcp@dns-test-service.dns-8525 wheezy_udp@dns-test-service.dns-8525.svc wheezy_tcp@dns-test-service.dns-8525.svc wheezy_udp@_http._tcp.dns-test-service.dns-8525.svc wheezy_tcp@_http._tcp.dns-test-service.dns-8525.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-8525 jessie_tcp@dns-test-service.dns-8525 jessie_udp@dns-test-service.dns-8525.svc jessie_tcp@dns-test-service.dns-8525.svc jessie_udp@_http._tcp.dns-test-service.dns-8525.svc jessie_tcp@_http._tcp.dns-test-service.dns-8525.svc]

Feb 14 00:12:18.998: INFO: DNS probes using dns-8525/dns-test-42d78c6c-fbe6-44d6-9d02-472dcdf7f820 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:12:19.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8525" for this suite.

• [SLOW TEST:43.244 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":280,"completed":79,"skipped":1171,"failed":0}
SSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:12:19.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating replication controller my-hostname-basic-cfd576de-b17b-4b9a-87df-0f2fc61fca4f
Feb 14 00:12:21.194: INFO: Pod name my-hostname-basic-cfd576de-b17b-4b9a-87df-0f2fc61fca4f: Found 0 pods out of 1
Feb 14 00:12:26.202: INFO: Pod name my-hostname-basic-cfd576de-b17b-4b9a-87df-0f2fc61fca4f: Found 1 pods out of 1
Feb 14 00:12:26.203: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-cfd576de-b17b-4b9a-87df-0f2fc61fca4f" are running
Feb 14 00:12:30.226: INFO: Pod "my-hostname-basic-cfd576de-b17b-4b9a-87df-0f2fc61fca4f-kg5xw" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 00:12:21 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 00:12:21 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-cfd576de-b17b-4b9a-87df-0f2fc61fca4f]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 00:12:21 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-cfd576de-b17b-4b9a-87df-0f2fc61fca4f]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 00:12:21 +0000 UTC Reason: Message:}])
Feb 14 00:12:30.226: INFO: Trying to dial the pod
Feb 14 00:12:35.250: INFO: Controller my-hostname-basic-cfd576de-b17b-4b9a-87df-0f2fc61fca4f: Got expected result from replica 1 [my-hostname-basic-cfd576de-b17b-4b9a-87df-0f2fc61fca4f-kg5xw]: "my-hostname-basic-cfd576de-b17b-4b9a-87df-0f2fc61fca4f-kg5xw", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:12:35.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4835" for this suite.

• [SLOW TEST:15.702 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":280,"completed":80,"skipped":1174,"failed":0}
S
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:12:35.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 14 00:12:35.366: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:12:52.100: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-505" for this suite.

• [SLOW TEST:16.891 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":280,"completed":81,"skipped":1175,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:12:52.167: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service nodeport-test with type=NodePort in namespace services-3537
STEP: creating replication controller nodeport-test in namespace services-3537
I0214 00:12:52.346473       9 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-3537, replica count: 2
I0214 00:12:55.397720       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 00:12:58.398964       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 00:13:01.400075       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 00:13:04.400978       9 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 14 00:13:04.401: INFO: Creating new exec pod
Feb 14 00:13:15.733: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3537 execpod745g4 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Feb 14 00:13:16.258: INFO: stderr: "I0214 00:13:15.981603    1132 log.go:172] (0xc000b60000) (0xc000aee000) Create stream\nI0214 00:13:15.981991    1132 log.go:172] (0xc000b60000) (0xc000aee000) Stream added, broadcasting: 1\nI0214 00:13:16.006571    1132 log.go:172] (0xc000b60000) Reply frame received for 1\nI0214 00:13:16.006808    1132 log.go:172] (0xc000b60000) (0xc00095e000) Create stream\nI0214 00:13:16.006835    1132 log.go:172] (0xc000b60000) (0xc00095e000) Stream added, broadcasting: 3\nI0214 00:13:16.009074    1132 log.go:172] (0xc000b60000) Reply frame received for 3\nI0214 00:13:16.009148    1132 log.go:172] (0xc000b60000) (0xc000aee0a0) Create stream\nI0214 00:13:16.009166    1132 log.go:172] (0xc000b60000) (0xc000aee0a0) Stream added, broadcasting: 5\nI0214 00:13:16.012676    1132 log.go:172] (0xc000b60000) Reply frame received for 5\nI0214 00:13:16.152785    1132 log.go:172] (0xc000b60000) Data frame received for 5\nI0214 00:13:16.152834    1132 log.go:172] (0xc000aee0a0) (5) Data frame handling\nI0214 00:13:16.152863    1132 log.go:172] (0xc000aee0a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0214 00:13:16.160524    1132 log.go:172] (0xc000b60000) Data frame received for 5\nI0214 00:13:16.160553    1132 log.go:172] (0xc000aee0a0) (5) Data frame handling\nI0214 00:13:16.160573    1132 log.go:172] (0xc000aee0a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0214 00:13:16.247032    1132 log.go:172] (0xc000b60000) Data frame received for 1\nI0214 00:13:16.247376    1132 log.go:172] (0xc000b60000) (0xc000aee0a0) Stream removed, broadcasting: 5\nI0214 00:13:16.247445    1132 log.go:172] (0xc000aee000) (1) Data frame handling\nI0214 00:13:16.247566    1132 log.go:172] (0xc000aee000) (1) Data frame sent\nI0214 00:13:16.247604    1132 log.go:172] (0xc000b60000) (0xc00095e000) Stream removed, broadcasting: 3\nI0214 00:13:16.247643    1132 log.go:172] (0xc000b60000) (0xc000aee000) Stream removed, broadcasting: 1\nI0214 00:13:16.247678    1132 log.go:172] (0xc000b60000) Go away received\nI0214 00:13:16.248758    1132 log.go:172] (0xc000b60000) (0xc000aee000) Stream removed, broadcasting: 1\nI0214 00:13:16.248772    1132 log.go:172] (0xc000b60000) (0xc00095e000) Stream removed, broadcasting: 3\nI0214 00:13:16.248782    1132 log.go:172] (0xc000b60000) (0xc000aee0a0) Stream removed, broadcasting: 5\n"
Feb 14 00:13:16.258: INFO: stdout: ""
Feb 14 00:13:16.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3537 execpod745g4 -- /bin/sh -x -c nc -zv -t -w 2 10.96.118.102 80'
Feb 14 00:13:16.619: INFO: stderr: "I0214 00:13:16.400169    1151 log.go:172] (0xc000a736b0) (0xc0009b46e0) Create stream\nI0214 00:13:16.400308    1151 log.go:172] (0xc000a736b0) (0xc0009b46e0) Stream added, broadcasting: 1\nI0214 00:13:16.409905    1151 log.go:172] (0xc000a736b0) Reply frame received for 1\nI0214 00:13:16.409938    1151 log.go:172] (0xc000a736b0) (0xc0006346e0) Create stream\nI0214 00:13:16.409945    1151 log.go:172] (0xc000a736b0) (0xc0006346e0) Stream added, broadcasting: 3\nI0214 00:13:16.411201    1151 log.go:172] (0xc000a736b0) Reply frame received for 3\nI0214 00:13:16.411237    1151 log.go:172] (0xc000a736b0) (0xc000543180) Create stream\nI0214 00:13:16.411250    1151 log.go:172] (0xc000a736b0) (0xc000543180) Stream added, broadcasting: 5\nI0214 00:13:16.412882    1151 log.go:172] (0xc000a736b0) Reply frame received for 5\nI0214 00:13:16.487470    1151 log.go:172] (0xc000a736b0) Data frame received for 5\nI0214 00:13:16.487879    1151 log.go:172] (0xc000543180) (5) Data frame handling\nI0214 00:13:16.487942    1151 log.go:172] (0xc000543180) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.118.102 80\nI0214 00:13:16.494387    1151 log.go:172] (0xc000a736b0) Data frame received for 5\nI0214 00:13:16.494525    1151 log.go:172] (0xc000543180) (5) Data frame handling\nI0214 00:13:16.494615    1151 log.go:172] (0xc000543180) (5) Data frame sent\nConnection to 10.96.118.102 80 port [tcp/http] succeeded!\nI0214 00:13:16.608287    1151 log.go:172] (0xc000a736b0) (0xc0006346e0) Stream removed, broadcasting: 3\nI0214 00:13:16.608367    1151 log.go:172] (0xc000a736b0) Data frame received for 1\nI0214 00:13:16.608375    1151 log.go:172] (0xc0009b46e0) (1) Data frame handling\nI0214 00:13:16.608386    1151 log.go:172] (0xc0009b46e0) (1) Data frame sent\nI0214 00:13:16.608392    1151 log.go:172] (0xc000a736b0) (0xc0009b46e0) Stream removed, broadcasting: 1\nI0214 00:13:16.608754    1151 log.go:172] (0xc000a736b0) (0xc000543180) Stream removed, broadcasting: 5\nI0214 00:13:16.608839    1151 log.go:172] (0xc000a736b0) Go away received\nI0214 00:13:16.609028    1151 log.go:172] (0xc000a736b0) (0xc0009b46e0) Stream removed, broadcasting: 1\nI0214 00:13:16.609043    1151 log.go:172] (0xc000a736b0) (0xc0006346e0) Stream removed, broadcasting: 3\nI0214 00:13:16.609054    1151 log.go:172] (0xc000a736b0) (0xc000543180) Stream removed, broadcasting: 5\n"
Feb 14 00:13:16.619: INFO: stdout: ""
Feb 14 00:13:16.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3537 execpod745g4 -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32355'
Feb 14 00:13:17.056: INFO: stderr: "I0214 00:13:16.817737    1171 log.go:172] (0xc000b0e0b0) (0xc00078a000) Create stream\nI0214 00:13:16.817895    1171 log.go:172] (0xc000b0e0b0) (0xc00078a000) Stream added, broadcasting: 1\nI0214 00:13:16.837650    1171 log.go:172] (0xc000b0e0b0) Reply frame received for 1\nI0214 00:13:16.837751    1171 log.go:172] (0xc000b0e0b0) (0xc00071c0a0) Create stream\nI0214 00:13:16.837767    1171 log.go:172] (0xc000b0e0b0) (0xc00071c0a0) Stream added, broadcasting: 3\nI0214 00:13:16.839599    1171 log.go:172] (0xc000b0e0b0) Reply frame received for 3\nI0214 00:13:16.839646    1171 log.go:172] (0xc000b0e0b0) (0xc00078a0a0) Create stream\nI0214 00:13:16.839660    1171 log.go:172] (0xc000b0e0b0) (0xc00078a0a0) Stream added, broadcasting: 5\nI0214 00:13:16.840935    1171 log.go:172] (0xc000b0e0b0) Reply frame received for 5\nI0214 00:13:16.912791    1171 log.go:172] (0xc000b0e0b0) Data frame received for 5\nI0214 00:13:16.912918    1171 log.go:172] (0xc00078a0a0) (5) Data frame handling\nI0214 00:13:16.912953    1171 log.go:172] (0xc00078a0a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32355\nI0214 00:13:16.914864    1171 log.go:172] (0xc000b0e0b0) Data frame received for 5\nI0214 00:13:16.914950    1171 log.go:172] (0xc00078a0a0) (5) Data frame handling\nI0214 00:13:16.914971    1171 log.go:172] (0xc00078a0a0) (5) Data frame sent\nConnection to 10.96.2.250 32355 port [tcp/32355] succeeded!\nI0214 00:13:17.032457    1171 log.go:172] (0xc000b0e0b0) (0xc00071c0a0) Stream removed, broadcasting: 3\nI0214 00:13:17.032633    1171 log.go:172] (0xc000b0e0b0) Data frame received for 1\nI0214 00:13:17.032693    1171 log.go:172] (0xc000b0e0b0) (0xc00078a0a0) Stream removed, broadcasting: 5\nI0214 00:13:17.032789    1171 log.go:172] (0xc00078a000) (1) Data frame handling\nI0214 00:13:17.032823    1171 log.go:172] (0xc00078a000) (1) Data frame sent\nI0214 00:13:17.032839    1171 log.go:172] (0xc000b0e0b0) (0xc00078a000) Stream removed, broadcasting: 1\nI0214 00:13:17.032867    1171 log.go:172] (0xc000b0e0b0) Go away received\nI0214 00:13:17.033924    1171 log.go:172] (0xc000b0e0b0) (0xc00078a000) Stream removed, broadcasting: 1\nI0214 00:13:17.033947    1171 log.go:172] (0xc000b0e0b0) (0xc00071c0a0) Stream removed, broadcasting: 3\nI0214 00:13:17.033957    1171 log.go:172] (0xc000b0e0b0) (0xc00078a0a0) Stream removed, broadcasting: 5\n"
Feb 14 00:13:17.056: INFO: stdout: ""
Feb 14 00:13:17.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3537 execpod745g4 -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32355'
Feb 14 00:13:17.511: INFO: stderr: "I0214 00:13:17.307149    1187 log.go:172] (0xc0009c1080) (0xc0009c66e0) Create stream\nI0214 00:13:17.307514    1187 log.go:172] (0xc0009c1080) (0xc0009c66e0) Stream added, broadcasting: 1\nI0214 00:13:17.314951    1187 log.go:172] (0xc0009c1080) Reply frame received for 1\nI0214 00:13:17.315071    1187 log.go:172] (0xc0009c1080) (0xc000954320) Create stream\nI0214 00:13:17.315113    1187 log.go:172] (0xc0009c1080) (0xc000954320) Stream added, broadcasting: 3\nI0214 00:13:17.320509    1187 log.go:172] (0xc0009c1080) Reply frame received for 3\nI0214 00:13:17.320595    1187 log.go:172] (0xc0009c1080) (0xc0009543c0) Create stream\nI0214 00:13:17.320611    1187 log.go:172] (0xc0009c1080) (0xc0009543c0) Stream added, broadcasting: 5\nI0214 00:13:17.322705    1187 log.go:172] (0xc0009c1080) Reply frame received for 5\nI0214 00:13:17.408343    1187 log.go:172] (0xc0009c1080) Data frame received for 5\nI0214 00:13:17.408502    1187 log.go:172] (0xc0009543c0) (5) Data frame handling\nI0214 00:13:17.408559    1187 log.go:172] (0xc0009543c0) (5) Data frame sent\nI0214 00:13:17.408575    1187 log.go:172] (0xc0009c1080) Data frame received for 5\nI0214 00:13:17.408591    1187 log.go:172] (0xc0009543c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.1.234 32355\nI0214 00:13:17.408682    1187 log.go:172] (0xc0009543c0) (5) Data frame sent\nI0214 00:13:17.411780    1187 log.go:172] (0xc0009c1080) Data frame received for 5\nI0214 00:13:17.411994    1187 log.go:172] (0xc0009543c0) (5) Data frame handling\nI0214 00:13:17.412057    1187 log.go:172] (0xc0009543c0) (5) Data frame sent\nConnection to 10.96.1.234 32355 port [tcp/32355] succeeded!\nI0214 00:13:17.492010    1187 log.go:172] (0xc0009c1080) Data frame received for 1\nI0214 00:13:17.492641    1187 log.go:172] (0xc0009c1080) (0xc0009543c0) Stream removed, broadcasting: 5\nI0214 00:13:17.492725    1187 log.go:172] (0xc0009c66e0) (1) Data frame handling\nI0214 00:13:17.492759    1187 log.go:172] (0xc0009c66e0) (1) Data frame sent\nI0214 00:13:17.492814    1187 log.go:172] (0xc0009c1080) (0xc000954320) Stream removed, broadcasting: 3\nI0214 00:13:17.492882    1187 log.go:172] (0xc0009c1080) (0xc0009c66e0) Stream removed, broadcasting: 1\nI0214 00:13:17.492971    1187 log.go:172] (0xc0009c1080) Go away received\nI0214 00:13:17.495695    1187 log.go:172] (0xc0009c1080) (0xc0009c66e0) Stream removed, broadcasting: 1\nI0214 00:13:17.495766    1187 log.go:172] (0xc0009c1080) (0xc000954320) Stream removed, broadcasting: 3\nI0214 00:13:17.495791    1187 log.go:172] (0xc0009c1080) (0xc0009543c0) Stream removed, broadcasting: 5\n"
Feb 14 00:13:17.512: INFO: stdout: ""
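For reference, a NodePort Service equivalent to the one exercised above can be sketched from the values in the log (service name nodeport-test, service port 80, allocated nodePort 32355); the selector label is an assumption, since the replication controller's pod template is not logged:

apiVersion: v1
kind: Service
metadata:
  name: nodeport-test
spec:
  type: NodePort
  selector:
    name: nodeport-test      # assumed label carried by the RC's pods
  ports:
  - port: 80                 # cluster-IP port, checked via 'nc ... nodeport-test 80'
    targetPort: 80
    nodePort: 32355          # node port, checked against 10.96.2.250 and 10.96.1.234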
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:13:17.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3537" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:25.365 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":280,"completed":82,"skipped":1207,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:13:17.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Feb 14 00:13:17.636: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
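The behavior under test: a ReplicationController only manages pods whose labels match its selector, so relabeling one of its pods releases (orphans) the pod, and the controller creates a replacement to restore the replica count. A minimal sketch; only the pod-release name appears in the log, everything else is assumed:

apiVersion: v1
kind: ReplicationController
metadata:
  name: pod-release
spec:
  replicas: 1
  selector:
    name: pod-release        # pods matching this label are owned by the RC
  template:
    metadata:
      labels:
        name: pod-release    # changing this label on a live pod releases it
    spec:
      containers:
      - name: pod-release
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image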
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:13:17.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4027" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":280,"completed":83,"skipped":1232,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:13:17.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Feb 14 00:13:37.985: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-9708 PodName:pod-sharedvolume-9e649061-4047-41b1-ae6d-1097c8533d34 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 00:13:37.985: INFO: >>> kubeConfig: /root/.kube/config
I0214 00:13:38.035246       9 log.go:172] (0xc002b1a2c0) (0xc0022e2fa0) Create stream
I0214 00:13:38.035772       9 log.go:172] (0xc002b1a2c0) (0xc0022e2fa0) Stream added, broadcasting: 1
I0214 00:13:38.044998       9 log.go:172] (0xc002b1a2c0) Reply frame received for 1
I0214 00:13:38.045073       9 log.go:172] (0xc002b1a2c0) (0xc000d39400) Create stream
I0214 00:13:38.045090       9 log.go:172] (0xc002b1a2c0) (0xc000d39400) Stream added, broadcasting: 3
I0214 00:13:38.047223       9 log.go:172] (0xc002b1a2c0) Reply frame received for 3
I0214 00:13:38.047287       9 log.go:172] (0xc002b1a2c0) (0xc001f0a000) Create stream
I0214 00:13:38.047310       9 log.go:172] (0xc002b1a2c0) (0xc001f0a000) Stream added, broadcasting: 5
I0214 00:13:38.049775       9 log.go:172] (0xc002b1a2c0) Reply frame received for 5
I0214 00:13:38.145906       9 log.go:172] (0xc002b1a2c0) Data frame received for 3
I0214 00:13:38.146028       9 log.go:172] (0xc000d39400) (3) Data frame handling
I0214 00:13:38.146061       9 log.go:172] (0xc000d39400) (3) Data frame sent
I0214 00:13:38.215419       9 log.go:172] (0xc002b1a2c0) (0xc000d39400) Stream removed, broadcasting: 3
I0214 00:13:38.215673       9 log.go:172] (0xc002b1a2c0) Data frame received for 1
I0214 00:13:38.215705       9 log.go:172] (0xc0022e2fa0) (1) Data frame handling
I0214 00:13:38.215737       9 log.go:172] (0xc0022e2fa0) (1) Data frame sent
I0214 00:13:38.215756       9 log.go:172] (0xc002b1a2c0) (0xc0022e2fa0) Stream removed, broadcasting: 1
I0214 00:13:38.216356       9 log.go:172] (0xc002b1a2c0) (0xc001f0a000) Stream removed, broadcasting: 5
I0214 00:13:38.216502       9 log.go:172] (0xc002b1a2c0) (0xc0022e2fa0) Stream removed, broadcasting: 1
I0214 00:13:38.216517       9 log.go:172] (0xc002b1a2c0) (0xc000d39400) Stream removed, broadcasting: 3
I0214 00:13:38.216667       9 log.go:172] (0xc002b1a2c0) (0xc001f0a000) Stream removed, broadcasting: 5
I0214 00:13:38.216831       9 log.go:172] (0xc002b1a2c0) Go away received
Feb 14 00:13:38.216: INFO: Exec stderr: ""
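The pod exercised here mounts one emptyDir volume into two containers: the sub-container writes /usr/share/volumeshare/shareddata.txt and exits (it is reported not ready below), while the main container stays up so the file can be read back through the exec above. A minimal sketch reconstructed from the logged container names and path; the images and commands are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod-sharedvolume
spec:
  volumes:
  - name: shared-data
    emptyDir: {}                               # storage shared by both containers
  containers:
  - name: busybox-main-container
    image: busybox                             # assumed image
    command: ["/bin/sh", "-c", "sleep 3600"]   # stays Running for the exec
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare
  - name: busybox-sub-container
    image: busybox                             # assumed image
    command: ["/bin/sh", "-c", "echo Hello > /usr/share/volumeshare/shareddata.txt"]
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/volumeshare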
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:13:38.217: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9708" for this suite.

• [SLOW TEST:20.393 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":280,"completed":84,"skipped":1233,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:13:38.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 14 00:13:38.294: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 14 00:13:38.310: INFO: Waiting for terminating namespaces to be deleted...
Feb 14 00:13:38.313: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 14 00:13:38.328: INFO: pod-sharedvolume-9e649061-4047-41b1-ae6d-1097c8533d34 from emptydir-9708 started at 2020-02-14 00:13:18 +0000 UTC (2 container statuses recorded)
Feb 14 00:13:38.328: INFO: 	Container busybox-main-container ready: true, restart count 0
Feb 14 00:13:38.328: INFO: 	Container busybox-sub-container ready: false, restart count 0
Feb 14 00:13:38.328: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb 14 00:13:38.328: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 00:13:38.328: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 14 00:13:38.328: INFO: 	Container weave ready: true, restart count 1
Feb 14 00:13:38.328: INFO: 	Container weave-npc ready: true, restart count 0
Feb 14 00:13:38.328: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 14 00:13:38.345: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 14 00:13:38.345: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 14 00:13:38.345: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 14 00:13:38.345: INFO: 	Container etcd ready: true, restart count 1
Feb 14 00:13:38.345: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 14 00:13:38.345: INFO: 	Container coredns ready: true, restart count 0
Feb 14 00:13:38.345: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 14 00:13:38.345: INFO: 	Container coredns ready: true, restart count 0
Feb 14 00:13:38.345: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 14 00:13:38.345: INFO: 	Container kube-controller-manager ready: true, restart count 7
Feb 14 00:13:38.345: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb 14 00:13:38.345: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 00:13:38.345: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 14 00:13:38.345: INFO: 	Container weave ready: true, restart count 0
Feb 14 00:13:38.345: INFO: 	Container weave-npc ready: true, restart count 0
Feb 14 00:13:38.345: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 14 00:13:38.345: INFO: 	Container kube-scheduler ready: true, restart count 11
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-284a7fff-3778-4b72-957d-d566659c6c7d 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-284a7fff-3778-4b72-957d-d566659c6c7d off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-284a7fff-3778-4b72-957d-d566659c6c7d
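The scheduling conflict being validated: two pods requesting the same hostPort and protocol cannot land on the same node when one of them binds 0.0.0.0, even if the other names a specific hostIP. A minimal sketch of the two pod specs; the hostPort and node label come from the log, while the image and container details are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    kubernetes.io/e2e-284a7fff-3778-4b72-957d-d566659c6c7d: "95"   # label applied above
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8           # assumed image
    ports:
    - containerPort: 80
      hostPort: 54322        # hostIP omitted => 0.0.0.0, binds all node addresses
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    kubernetes.io/e2e-284a7fff-3778-4b72-957d-d566659c6c7d: "95"
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8           # assumed image
    ports:
    - containerPort: 80
      hostPort: 54322
      hostIP: 127.0.0.1      # still conflicts with pod4's 0.0.0.0 binding,
                             # so pod5 stays Pending on that node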
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:18:58.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5517" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:320.788 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":280,"completed":85,"skipped":1249,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:18:59.025: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-0a1fbf92-38a1-44a8-a52c-7268bc250c7a
STEP: Creating a pod to test consume configMaps
Feb 14 00:18:59.130: INFO: Waiting up to 5m0s for pod "pod-configmaps-f7f52ed4-05d5-4a87-a226-ca094de8985d" in namespace "configmap-6081" to be "success or failure"
Feb 14 00:18:59.144: INFO: Pod "pod-configmaps-f7f52ed4-05d5-4a87-a226-ca094de8985d": Phase="Pending", Reason="", readiness=false. Elapsed: 13.922067ms
Feb 14 00:19:01.158: INFO: Pod "pod-configmaps-f7f52ed4-05d5-4a87-a226-ca094de8985d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027620985s
Feb 14 00:19:03.171: INFO: Pod "pod-configmaps-f7f52ed4-05d5-4a87-a226-ca094de8985d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041254918s
Feb 14 00:19:05.235: INFO: Pod "pod-configmaps-f7f52ed4-05d5-4a87-a226-ca094de8985d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104851432s
Feb 14 00:19:07.258: INFO: Pod "pod-configmaps-f7f52ed4-05d5-4a87-a226-ca094de8985d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.128056859s
Feb 14 00:19:09.266: INFO: Pod "pod-configmaps-f7f52ed4-05d5-4a87-a226-ca094de8985d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.136368079s
STEP: Saw pod success
Feb 14 00:19:09.267: INFO: Pod "pod-configmaps-f7f52ed4-05d5-4a87-a226-ca094de8985d" satisfied condition "success or failure"
Feb 14 00:19:09.279: INFO: Trying to get logs from node jerma-node pod pod-configmaps-f7f52ed4-05d5-4a87-a226-ca094de8985d container configmap-volume-test: 
STEP: delete the pod
Feb 14 00:19:09.452: INFO: Waiting for pod pod-configmaps-f7f52ed4-05d5-4a87-a226-ca094de8985d to disappear
Feb 14 00:19:09.595: INFO: Pod pod-configmaps-f7f52ed4-05d5-4a87-a226-ca094de8985d no longer exists
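The pattern being exercised: a ConfigMap mounted as a volume, whose keys appear as files that the test container reads, prints, and then exits on. A minimal sketch; the ConfigMap name matches the log, while the key, value, mount path, and container args are assumptions:

apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap-test-volume-0a1fbf92-38a1-44a8-a52c-7268bc250c7a
data:
  data-1: value-1            # assumed key/value; the log does not show the data
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps
spec:
  restartPolicy: Never       # the test waits for the pod to reach "success or failure"
  containers:
  - name: configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8     # assumed image
    args: ["mounttest", "--file_content=/etc/configmap-volume/data-1"]   # assumed; prints the mounted file
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-0a1fbf92-38a1-44a8-a52c-7268bc250c7a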
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:19:09.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6081" for this suite.

• [SLOW TEST:10.589 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":86,"skipped":1266,"failed":0}
SS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:19:09.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
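What this test relies on: when a pod spec leaves command and args empty, the container runs the image's built-in ENTRYPOINT and CMD. A minimal sketch, with the pod and image names assumed:

apiVersion: v1
kind: Pod
metadata:
  name: client-containers          # assumed name
spec:
  containers:
  - name: test-container
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # assumed image
    # no command/args: the image's ENTRYPOINT and CMD are used as-is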
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:19:19.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4457" for this suite.

• [SLOW TEST:10.230 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":280,"completed":87,"skipped":1268,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:19:19.847: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc1
STEP: create the rc2
STEP: set half of the pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as an owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
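The shape of the dependency being tested: the shared pods carry two ownerReferences, so deleting simpletest-rc-to-be-deleted while it waits for its dependents must not remove pods that are still owned by simpletest-rc-to-stay; the garbage collector only drops the deleted owner's reference. A sketch of such a pod's metadata, with the pod name suffix and UIDs as placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: simpletest-rc-to-be-deleted-abcde       # placeholder pod name
  ownerReferences:
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-be-deleted
    uid: 11111111-1111-1111-1111-111111111111   # placeholder UID
  - apiVersion: v1
    kind: ReplicationController
    name: simpletest-rc-to-stay                 # second valid owner keeps the pod alive
    uid: 22222222-2222-2222-2222-222222222222   # placeholder UID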
STEP: Gathering metrics
W0214 00:19:31.735784       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 00:19:31.735: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:19:31.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9868" for this suite.

• [SLOW TEST:11.898 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":280,"completed":88,"skipped":1277,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:19:31.746: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Feb 14 00:19:59.425: INFO: &Pod{ObjectMeta:{send-events-80466402-c602-4017-a3db-987b11090f42  events-5699 /api/v1/namespaces/events-5699/pods/send-events-80466402-c602-4017-a3db-987b11090f42 9d631046-23d7-4a10-9206-3bf6f5a2e15c 8269749 0 2020-02-14 00:19:32 +0000 UTC   map[name:foo time:413189556] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-dw2tn,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-dw2tn,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-dw2tn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:19:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:19:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:19:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:19:32 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-14 00:19:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-14 00:19:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://c9800db48e34309682e1efad53229388dc84ce2b070d5635c973374813ce196b,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Feb 14 00:20:01.440: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Feb 14 00:20:03.956: INFO: Saw kubelet event for our pod.
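For reference, the two assertions above look for a scheduler event and kubelet events attached to the pod. In API form such an event looks roughly like the sketch below; the event name and abridged fields are placeholders, while the object, namespace, node, and component values come from this run:

apiVersion: v1
kind: Event
metadata:
  name: send-events-80466402-c602-4017-a3db-987b11090f42.placeholder   # placeholder name
  namespace: events-5699
involvedObject:
  kind: Pod
  name: send-events-80466402-c602-4017-a3db-987b11090f42
  namespace: events-5699
reason: Scheduled
type: Normal
source:
  component: default-scheduler     # the kubelet events carry component: kubelet
message: Successfully assigned events-5699/send-events-80466402-c602-4017-a3db-987b11090f42 to jerma-node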
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:20:03.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-5699" for this suite.

• [SLOW TEST:32.291 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":280,"completed":89,"skipped":1292,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:20:04.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating all guestbook components
Feb 14 00:20:04.206: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Feb 14 00:20:04.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7190'
Feb 14 00:20:06.778: INFO: stderr: ""
Feb 14 00:20:06.778: INFO: stdout: "service/agnhost-slave created\n"
Feb 14 00:20:06.779: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Feb 14 00:20:06.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7190'
Feb 14 00:20:07.355: INFO: stderr: ""
Feb 14 00:20:07.355: INFO: stdout: "service/agnhost-master created\n"
Feb 14 00:20:07.356: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Feb 14 00:20:07.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7190'
Feb 14 00:20:07.840: INFO: stderr: ""
Feb 14 00:20:07.840: INFO: stdout: "service/frontend created\n"
Feb 14 00:20:07.841: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Feb 14 00:20:07.841: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7190'
Feb 14 00:20:08.347: INFO: stderr: ""
Feb 14 00:20:08.348: INFO: stdout: "deployment.apps/frontend created\n"
Feb 14 00:20:08.349: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 14 00:20:08.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7190'
Feb 14 00:20:09.045: INFO: stderr: ""
Feb 14 00:20:09.045: INFO: stdout: "deployment.apps/agnhost-master created\n"
Feb 14 00:20:09.046: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Feb 14 00:20:09.046: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7190'
Feb 14 00:20:10.965: INFO: stderr: ""
Feb 14 00:20:10.965: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Feb 14 00:20:10.965: INFO: Waiting for all frontend pods to be Running.
Feb 14 00:20:36.024: INFO: Waiting for frontend to serve content.
Feb 14 00:20:36.046: INFO: Trying to add a new entry to the guestbook.
Feb 14 00:20:36.079: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

[... the same "Failed to get response from guestbook ... connection refused" retry repeated roughly every 5 seconds from 00:20:41 through 00:23:28 ...]

Feb 14 00:23:33.557: INFO: Failed to get response from guestbook. err: , response: encountered error while propagating to slave '10.32.0.1': Get http://10.32.0.1:6379/set?key=messages&value=TestEntry: dial tcp 10.32.0.1:6379: connect: connection refused

Feb 14 00:23:38.561: FAIL: Cannot add a new entry within 180 seconds.

Full Stack Trace
k8s.io/kubernetes/test/e2e/kubectl.validateGuestbookApp(0x551f740, 0xc0019ce6e0, 0xc000727540, 0xc)
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339 +0x551
k8s.io/kubernetes/test/e2e/kubectl.glob..func2.7.2()
	/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:420 +0x165
k8s.io/kubernetes/test/e2e.RunE2ETests(0xc002eeda00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e.go:110 +0x30a
k8s.io/kubernetes/test/e2e.TestE2E(0xc002eeda00)
	_output/local/go/src/k8s.io/kubernetes/test/e2e/e2e_test.go:111 +0x2b
testing.tRunner(0xc002eeda00, 0x4c9f938)
	/usr/local/go/src/testing/testing.go:909 +0xc9
created by testing.(*T).Run
	/usr/local/go/src/testing/testing.go:960 +0x350
STEP: using delete to clean up resources
Feb 14 00:23:38.565: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7190'
Feb 14 00:23:38.852: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 00:23:38.853: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 00:23:38.855: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7190'
Feb 14 00:23:39.114: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 00:23:39.114: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 00:23:39.117: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7190'
Feb 14 00:23:39.337: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 00:23:39.337: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 00:23:39.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7190'
Feb 14 00:23:39.486: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 00:23:39.486: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 00:23:39.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7190'
Feb 14 00:23:39.607: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 00:23:39.607: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Feb 14 00:23:39.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7190'
Feb 14 00:23:39.697: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 00:23:39.697: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
STEP: Collecting events from namespace "kubectl-7190".
STEP: Found 33 events.
Feb 14 00:23:39.704: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-master-74c46fb7d4-p8vsz: {default-scheduler } Scheduled: Successfully assigned kubectl-7190/agnhost-master-74c46fb7d4-p8vsz to jerma-node
Feb 14 00:23:39.704: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-7286g: {default-scheduler } Scheduled: Successfully assigned kubectl-7190/agnhost-slave-774cfc759f-7286g to jerma-server-mvvl6gufaqub
Feb 14 00:23:39.704: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for agnhost-slave-774cfc759f-wbznt: {default-scheduler } Scheduled: Successfully assigned kubectl-7190/agnhost-slave-774cfc759f-wbznt to jerma-node
Feb 14 00:23:39.704: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-c4csj: {default-scheduler } Scheduled: Successfully assigned kubectl-7190/frontend-6c5f89d5d4-c4csj to jerma-node
Feb 14 00:23:39.704: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-mgxkf: {default-scheduler } Scheduled: Successfully assigned kubectl-7190/frontend-6c5f89d5d4-mgxkf to jerma-server-mvvl6gufaqub
Feb 14 00:23:39.704: INFO: At 0001-01-01 00:00:00 +0000 UTC - event for frontend-6c5f89d5d4-xglrk: {default-scheduler } Scheduled: Successfully assigned kubectl-7190/frontend-6c5f89d5d4-xglrk to jerma-node
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:08 +0000 UTC - event for frontend: {deployment-controller } ScalingReplicaSet: Scaled up replica set frontend-6c5f89d5d4 to 3
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:08 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-xglrk
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:08 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-mgxkf
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:08 +0000 UTC - event for frontend-6c5f89d5d4: {replicaset-controller } SuccessfulCreate: Created pod: frontend-6c5f89d5d4-c4csj
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:09 +0000 UTC - event for agnhost-master: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-master-74c46fb7d4 to 1
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:09 +0000 UTC - event for agnhost-master-74c46fb7d4: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-master-74c46fb7d4-p8vsz
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:11 +0000 UTC - event for agnhost-slave: {deployment-controller } ScalingReplicaSet: Scaled up replica set agnhost-slave-774cfc759f to 2
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:11 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-wbznt
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:11 +0000 UTC - event for agnhost-slave-774cfc759f: {replicaset-controller } SuccessfulCreate: Created pod: agnhost-slave-774cfc759f-7286g
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:17 +0000 UTC - event for frontend-6c5f89d5d4-mgxkf: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:18 +0000 UTC - event for frontend-6c5f89d5d4-xglrk: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:19 +0000 UTC - event for agnhost-slave-774cfc759f-7286g: {kubelet jerma-server-mvvl6gufaqub} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:24 +0000 UTC - event for frontend-6c5f89d5d4-c4csj: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:25 +0000 UTC - event for frontend-6c5f89d5d4-mgxkf: {kubelet jerma-server-mvvl6gufaqub} Created: Created container guestbook-frontend
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:26 +0000 UTC - event for agnhost-slave-774cfc759f-7286g: {kubelet jerma-server-mvvl6gufaqub} Created: Created container slave
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:26 +0000 UTC - event for agnhost-slave-774cfc759f-7286g: {kubelet jerma-server-mvvl6gufaqub} Started: Started container slave
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:26 +0000 UTC - event for frontend-6c5f89d5d4-mgxkf: {kubelet jerma-server-mvvl6gufaqub} Started: Started container guestbook-frontend
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:27 +0000 UTC - event for agnhost-master-74c46fb7d4-p8vsz: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:27 +0000 UTC - event for agnhost-slave-774cfc759f-wbznt: {kubelet jerma-node} Pulled: Container image "gcr.io/kubernetes-e2e-test-images/agnhost:2.8" already present on machine
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:30 +0000 UTC - event for frontend-6c5f89d5d4-c4csj: {kubelet jerma-node} Created: Created container guestbook-frontend
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:30 +0000 UTC - event for frontend-6c5f89d5d4-xglrk: {kubelet jerma-node} Created: Created container guestbook-frontend
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:31 +0000 UTC - event for agnhost-master-74c46fb7d4-p8vsz: {kubelet jerma-node} Started: Started container master
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:31 +0000 UTC - event for agnhost-master-74c46fb7d4-p8vsz: {kubelet jerma-node} Created: Created container master
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:31 +0000 UTC - event for agnhost-slave-774cfc759f-wbznt: {kubelet jerma-node} Created: Created container slave
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:31 +0000 UTC - event for agnhost-slave-774cfc759f-wbznt: {kubelet jerma-node} Started: Started container slave
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:31 +0000 UTC - event for frontend-6c5f89d5d4-c4csj: {kubelet jerma-node} Started: Started container guestbook-frontend
Feb 14 00:23:39.704: INFO: At 2020-02-14 00:20:31 +0000 UTC - event for frontend-6c5f89d5d4-xglrk: {kubelet jerma-node} Started: Started container guestbook-frontend
Feb 14 00:23:39.708: INFO: POD                              NODE                       PHASE    GRACE  CONDITIONS
Feb 14 00:23:39.708: INFO: agnhost-master-74c46fb7d4-p8vsz  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:10 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:09 +0000 UTC  }]
Feb 14 00:23:39.708: INFO: agnhost-slave-774cfc759f-7286g   jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:11 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:11 +0000 UTC  }]
Feb 14 00:23:39.708: INFO: agnhost-slave-774cfc759f-wbznt   jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:12 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:33 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:33 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:11 +0000 UTC  }]
Feb 14 00:23:39.708: INFO: frontend-6c5f89d5d4-c4csj        jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:31 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:31 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:08 +0000 UTC  }]
Feb 14 00:23:39.708: INFO: frontend-6c5f89d5d4-mgxkf        jerma-server-mvvl6gufaqub  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:27 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:27 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:08 +0000 UTC  }]
Feb 14 00:23:39.708: INFO: frontend-6c5f89d5d4-xglrk        jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:08 +0000 UTC  } {Ready True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:32 +0000 UTC  } {ContainersReady True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:32 +0000 UTC  } {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-02-14 00:20:08 +0000 UTC  }]
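Note: the event and pod-condition dumps above are what the e2e framework emits after a failure; the same data is a plain List call away. A sketch, reusing the clientset wiring from the earlier snippet (imports: context, fmt, metav1, k8s.io/client-go/kubernetes):

// Sketch: reproduce the "Collecting events from namespace" dump with client-go.
// An Event whose FirstTimestamp was never set prints Go's zero time, which is why
// the scheduler events above show "0001-01-01 00:00:00 +0000 UTC".
func dumpEvents(clientset kubernetes.Interface, ns string) error {
	events, err := clientset.CoreV1().Events(ns).List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("Found %d events.\n", len(events.Items))
	for _, e := range events.Items {
		fmt.Printf("At %v - event for %s: {%s} %s: %s\n",
			e.FirstTimestamp, e.InvolvedObject.Name, e.Source.Component, e.Reason, e.Message)
	}
	return nil
}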
Feb 14 00:23:39.708: INFO: 
Feb 14 00:23:39.711: INFO: 
Logging node info for node jerma-node
Feb 14 00:23:39.714: INFO: Node Info: &Node{ObjectMeta:{jerma-node   /api/v1/nodes/jerma-node 6236bfb4-6b64-4c0a-82c6-f768ceeab07c 8270281 0 2020-01-04 11:59:52 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 12:00:49 +0000 UTC,LastTransitionTime:2020-01-04 12:00:49 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-14 00:23:05 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-14 00:23:05 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-14 00:23:05 +0000 UTC,LastTransitionTime:2020-01-04 11:59:52 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-14 00:23:05 +0000 UTC,LastTransitionTime:2020-01-04 12:00:52 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.250,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:bdc16344252549dd902c3a5d68b22f41,SystemUUID:BDC16344-2525-49DD-902C-3A5D68B22F41,BootID:eec61fc4-8bf6-487f-8f93-ea9731fe757a,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:ff02aacd9766d597883fabafc7ad604c719a57611db1bcc1564c69a45b000a55 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.17],SizeBytes:60684726,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:6915be4043561d64e0ab0f8f098dc2ac48e077fe23f488ac24b665166898115a busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Feb 14 00:23:39.715: INFO: 
Logging kubelet events for node jerma-node
Feb 14 00:23:39.719: INFO: 
Logging pods the kubelet thinks are on node jerma-node
Feb 14 00:23:39.747: INFO: frontend-6c5f89d5d4-c4csj started at 2020-02-14 00:20:08 +0000 UTC (0+1 container statuses recorded)
Feb 14 00:23:39.747: INFO: 	Container guestbook-frontend ready: true, restart count 0
Feb 14 00:23:39.747: INFO: frontend-6c5f89d5d4-xglrk started at 2020-02-14 00:20:08 +0000 UTC (0+1 container statuses recorded)
Feb 14 00:23:39.747: INFO: 	Container guestbook-frontend ready: true, restart count 0
Feb 14 00:23:39.747: INFO: agnhost-master-74c46fb7d4-p8vsz started at 2020-02-14 00:20:10 +0000 UTC (0+1 container statuses recorded)
Feb 14 00:23:39.747: INFO: 	Container master ready: true, restart count 0
Feb 14 00:23:39.747: INFO: kube-proxy-dsf66 started at 2020-01-04 11:59:52 +0000 UTC (0+1 container statuses recorded)
Feb 14 00:23:39.747: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 00:23:39.747: INFO: agnhost-slave-774cfc759f-wbznt started at 2020-02-14 00:20:12 +0000 UTC (0+1 container statuses recorded)
Feb 14 00:23:39.747: INFO: 	Container slave ready: true, restart count 0
Feb 14 00:23:39.747: INFO: weave-net-kz8lv started at 2020-01-04 11:59:52 +0000 UTC (0+2 container statuses recorded)
Feb 14 00:23:39.747: INFO: 	Container weave ready: true, restart count 1
Feb 14 00:23:39.747: INFO: 	Container weave-npc ready: true, restart count 0
W0214 00:23:39.838530       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 00:23:40.097: INFO: 
Latency metrics for node jerma-node
Feb 14 00:23:40.097: INFO: 
Logging node info for node jerma-server-mvvl6gufaqub
Feb 14 00:23:40.126: INFO: Node Info: &Node{ObjectMeta:{jerma-server-mvvl6gufaqub   /api/v1/nodes/jerma-server-mvvl6gufaqub a2a7fe9b-7d59-43f1-bbe3-2a69f99cabd2 8269969 0 2020-01-04 11:47:40 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-mvvl6gufaqub kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUseExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2020-01-04 11:48:36 +0000 UTC,LastTransitionTime:2020-01-04 11:48:36 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2020-02-14 00:20:46 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2020-02-14 00:20:46 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2020-02-14 00:20:46 +0000 UTC,LastTransitionTime:2020-01-04 11:47:36 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2020-02-14 00:20:46 +0000 UTC,LastTransitionTime:2020-01-04 11:48:44 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.1.234,},NodeAddress{Type:Hostname,Address:jerma-server-mvvl6gufaqub,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:3f0346566ad342efb0c9f55677d0a8ea,SystemUUID:3F034656-6AD3-42EF-B0C9-F55677D0A8EA,BootID:87dae5d0-e99d-4d31-a4e7-fbd07d84e951,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.17.0,KubeProxyVersion:v1.17.0,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[ollivier/functest-kubernetes-security@sha256:e07875af6d375759fd233dc464382bb51d2464f6ae50a60625e41226eb1f87be ollivier/functest-kubernetes-security:latest],SizeBytes:1118568659,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3-0],SizeBytes:288426917,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:e3ec33d533257902ad9ebe3d399c17710e62009201a7202aec941e351545d662 k8s.gcr.io/kube-apiserver:v1.17.0],SizeBytes:170957331,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:0438efb5098a2ca634ea8c6b0d804742b733d0d13fd53cf62c73e32c659a3c39 k8s.gcr.io/kube-controller-manager:v1.17.0],SizeBytes:160877075,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:b2ba9441af30261465e5c41be63e462d0050b09ad280001ae731f399b2b00b75 k8s.gcr.io/kube-proxy:v1.17.0],SizeBytes:115960823,},ContainerImage{Names:[weaveworks/weave-kube@sha256:e4a3a5b9bf605a7ff5ad5473c7493d7e30cbd1ed14c9c2630a4e409b4dbfab1c weaveworks/weave-kube:2.6.0],SizeBytes:114348932,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:5215c4216a65f7e76c1895ba951a12dc1c947904a91810fc66a544ff1d7e87db k8s.gcr.io/kube-scheduler:v1.17.0],SizeBytes:94431763,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:7ec975f167d815311a7136c32e70735f0d00b73781365df1befd46ed35bd4fe7 k8s.gcr.io/coredns:1.6.5],SizeBytes:41578211,},ContainerImage{Names:[weaveworks/weave-npc@sha256:985de9ff201677a85ce78703c515466fe45c9c73da6ee21821e89d902c21daf8 weaveworks/weave-npc:2.6.0],SizeBytes:34949961,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc 
gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Feb 14 00:23:40.130: INFO: 
Logging kubelet events for node jerma-server-mvvl6gufaqub
Feb 14 00:23:40.141: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub
Feb 14 00:23:40.179: INFO: coredns-6955765f44-bhnn4 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Feb 14 00:23:40.179: INFO: 	Container coredns ready: true, restart count 0
Feb 14 00:23:40.179: INFO: coredns-6955765f44-bwd85 started at 2020-01-04 11:48:47 +0000 UTC (0+1 container statuses recorded)
Feb 14 00:23:40.179: INFO: 	Container coredns ready: true, restart count 0
Feb 14 00:23:40.179: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Feb 14 00:23:40.179: INFO: 	Container kube-controller-manager ready: true, restart count 7
Feb 14 00:23:40.179: INFO: kube-proxy-chkps started at 2020-01-04 11:48:11 +0000 UTC (0+1 container statuses recorded)
Feb 14 00:23:40.179: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 00:23:40.179: INFO: weave-net-z6tjf started at 2020-01-04 11:48:11 +0000 UTC (0+2 container statuses recorded)
Feb 14 00:23:40.179: INFO: 	Container weave ready: true, restart count 0
Feb 14 00:23:40.179: INFO: 	Container weave-npc ready: true, restart count 0
Feb 14 00:23:40.179: INFO: kube-scheduler-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Feb 14 00:23:40.179: INFO: 	Container kube-scheduler ready: true, restart count 11
Feb 14 00:23:40.179: INFO: agnhost-slave-774cfc759f-7286g started at 2020-02-14 00:20:11 +0000 UTC (0+1 container statuses recorded)
Feb 14 00:23:40.179: INFO: 	Container slave ready: true, restart count 0
Feb 14 00:23:40.179: INFO: kube-apiserver-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:53 +0000 UTC (0+1 container statuses recorded)
Feb 14 00:23:40.179: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 14 00:23:40.179: INFO: etcd-jerma-server-mvvl6gufaqub started at 2020-01-04 11:47:54 +0000 UTC (0+1 container statuses recorded)
Feb 14 00:23:40.179: INFO: 	Container etcd ready: true, restart count 1
Feb 14 00:23:40.179: INFO: frontend-6c5f89d5d4-mgxkf started at 2020-02-14 00:20:08 +0000 UTC (0+1 container statuses recorded)
Feb 14 00:23:40.179: INFO: 	Container guestbook-frontend ready: true, restart count 0
W0214 00:23:40.200080       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 00:23:40.264: INFO: 
Latency metrics for node jerma-server-mvvl6gufaqub
Feb 14 00:23:40.264: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7190" for this suite.

• Failure [216.245 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:388
    should create and stop a working application  [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685

    Feb 14 00:23:38.562: Cannot add new entry in 180 seconds.

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339
------------------------------
{"msg":"FAILED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":280,"completed":89,"skipped":1302,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:23:40.284: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0214 00:23:54.201635       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 00:23:54.201: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:23:54.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4424" for this suite.

• [SLOW TEST:14.061 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":280,"completed":90,"skipped":1305,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:23:54.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:24:31.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-49" for this suite.

• [SLOW TEST:37.546 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":280,"completed":91,"skipped":1323,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:24:31.896: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-aea75d9d-46e8-40f3-a858-f908da01c0d9
STEP: Creating a pod to test consume configMaps
Feb 14 00:24:32.073: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a472edb2-26c6-4a0a-bca7-929862f08f5e" in namespace "projected-3474" to be "success or failure"
Feb 14 00:24:32.082: INFO: Pod "pod-projected-configmaps-a472edb2-26c6-4a0a-bca7-929862f08f5e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.41276ms
Feb 14 00:24:34.087: INFO: Pod "pod-projected-configmaps-a472edb2-26c6-4a0a-bca7-929862f08f5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014353884s
Feb 14 00:24:36.095: INFO: Pod "pod-projected-configmaps-a472edb2-26c6-4a0a-bca7-929862f08f5e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022513011s
Feb 14 00:24:38.100: INFO: Pod "pod-projected-configmaps-a472edb2-26c6-4a0a-bca7-929862f08f5e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026852606s
Feb 14 00:24:40.112: INFO: Pod "pod-projected-configmaps-a472edb2-26c6-4a0a-bca7-929862f08f5e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.039021376s
Feb 14 00:24:42.127: INFO: Pod "pod-projected-configmaps-a472edb2-26c6-4a0a-bca7-929862f08f5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.054279843s
STEP: Saw pod success
Feb 14 00:24:42.128: INFO: Pod "pod-projected-configmaps-a472edb2-26c6-4a0a-bca7-929862f08f5e" satisfied condition "success or failure"
Feb 14 00:24:42.135: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-a472edb2-26c6-4a0a-bca7-929862f08f5e container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 00:24:42.224: INFO: Waiting for pod pod-projected-configmaps-a472edb2-26c6-4a0a-bca7-929862f08f5e to disappear
Feb 14 00:24:42.232: INFO: Pod pod-projected-configmaps-a472edb2-26c6-4a0a-bca7-929862f08f5e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:24:42.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3474" for this suite.

• [SLOW TEST:10.348 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":92,"skipped":1355,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:24:42.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 00:24:42.345: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Feb 14 00:24:42.366: INFO: Pod name sample-pod: Found 0 pods out of 1
Feb 14 00:24:47.389: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 14 00:24:51.405: INFO: Creating deployment "test-rolling-update-deployment"
Feb 14 00:24:51.415: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Feb 14 00:24:51.425: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Feb 14 00:24:53.444: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected one
Feb 14 00:24:53.448: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717236691, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717236691, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717236691, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717236691, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:24:55.458: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717236691, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717236691, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717236691, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717236691, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:24:57.454: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717236691, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717236691, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717236691, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717236691, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:24:59.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717236691, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717236691, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717236691, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717236691, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:25:01.458: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 14 00:25:01.470: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-6151 /apis/apps/v1/namespaces/deployment-6151/deployments/test-rolling-update-deployment 80f3bdf5-9b9a-4b2c-8582-b4c13ab3c040 8270824 1 2020-02-14 00:24:51 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000b63498  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-14 00:24:51 +0000 UTC,LastTransitionTime:2020-02-14 00:24:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-02-14 00:25:00 +0000 UTC,LastTransitionTime:2020-02-14 00:24:51 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 14 00:25:01.477: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-6151 /apis/apps/v1/namespaces/deployment-6151/replicasets/test-rolling-update-deployment-67cf4f6444 9050c492-ab88-453e-b215-55d05cc967fe 8270812 1 2020-02-14 00:24:51 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 80f3bdf5-9b9a-4b2c-8582-b4c13ab3c040 0xc000b63977 0xc000b63978}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000b639f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 14 00:25:01.477: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Feb 14 00:25:01.477: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-6151 /apis/apps/v1/namespaces/deployment-6151/replicasets/test-rolling-update-controller cf6f6ab5-7e97-4a0d-a76a-1b0e62e48175 8270822 2 2020-02-14 00:24:42 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 80f3bdf5-9b9a-4b2c-8582-b4c13ab3c040 0xc000b63887 0xc000b63888}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000b638e8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 14 00:25:01.484: INFO: Pod "test-rolling-update-deployment-67cf4f6444-6ngrd" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-6ngrd test-rolling-update-deployment-67cf4f6444- deployment-6151 /api/v1/namespaces/deployment-6151/pods/test-rolling-update-deployment-67cf4f6444-6ngrd 3aa1f10a-49d3-4f3b-9ff6-7c12c8ec81dd 8270811 0 2020-02-14 00:24:51 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 9050c492-ab88-453e-b215-55d05cc967fe 0xc000b63e57 0xc000b63e58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pjm9k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pjm9k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pjm9k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:24:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:25:00 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:25:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 00:24:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-02-14 00:24:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-14 00:24:59 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://bfe469ca72eac43de57fef2be1fa5091fa19480c6a7516e181a33b86ca4ec81e,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:25:01.484: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6151" for this suite.

• [SLOW TEST:19.251 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":93,"skipped":1378,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:25:01.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2197.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-2197.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2197.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-2197.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2197.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2197.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2197.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-2197.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2197.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-2197.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2197.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 211.238.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.238.211_udp@PTR;check="$$(dig +tcp +noall +answer +search 211.238.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.238.211_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-2197.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-2197.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-2197.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-2197.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-2197.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-2197.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-2197.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-2197.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-2197.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-2197.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2197.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 211.238.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.238.211_udp@PTR;check="$$(dig +tcp +noall +answer +search 211.238.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.238.211_tcp@PTR;sleep 1; done

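Note: each dig probe above checks one record type over UDP or TCP and writes an OK marker that the test later reads back from the pod. The Go-side equivalent of the A and SRV probes, with the service name taken from the log (imports: context, fmt, net):

// Sketch: resolve the headless service's A records and its _http._tcp SRV records,
// the same lookups the wheezy/jessie probes perform with dig.
ctx := context.TODO()
addrs, err := net.DefaultResolver.LookupHost(ctx, "dns-test-service.dns-2197.svc.cluster.local")
if err == nil {
	fmt.Println("A records:", addrs)
}
_, srvs, err := net.DefaultResolver.LookupSRV(ctx, "http", "tcp", "dns-test-service.dns-2197.svc.cluster.local")
if err == nil {
	for _, srv := range srvs {
		fmt.Printf("SRV: %s:%d\n", srv.Target, srv.Port)
	}
}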
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 00:25:15.924: INFO: Unable to read wheezy_udp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:15.967: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:15.992: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:16.019: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:16.113: INFO: Unable to read jessie_udp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:16.125: INFO: Unable to read jessie_tcp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:16.137: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:16.143: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:16.190: INFO: Lookups using dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d failed for: [wheezy_udp@dns-test-service.dns-2197.svc.cluster.local wheezy_tcp@dns-test-service.dns-2197.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local jessie_udp@dns-test-service.dns-2197.svc.cluster.local jessie_tcp@dns-test-service.dns-2197.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local]
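
The failures above are the prober's reads of those marker files, not the DNS lookups themselves: each file is fetched through the pod's proxy subresource until it appears. By hand, one such read would look roughly like this (hypothetical invocation; namespace, pod, and file name taken from this run):

kubectl get --raw \
  "/api/v1/namespaces/dns-2197/pods/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d/proxy/results/wheezy_udp@dns-test-service.dns-2197.svc.cluster.local"

A not-found error here only means the probe container has not written that marker yet, which is consistent with the retries below and the eventual success at 00:25:46.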

Feb 14 00:25:21.223: INFO: Unable to read wheezy_udp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:21.229: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:21.234: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:21.239: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:21.255: INFO: Unable to read jessie_udp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:21.258: INFO: Unable to read jessie_tcp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:21.261: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:21.263: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:21.279: INFO: Lookups using dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d failed for: [wheezy_udp@dns-test-service.dns-2197.svc.cluster.local wheezy_tcp@dns-test-service.dns-2197.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local jessie_udp@dns-test-service.dns-2197.svc.cluster.local jessie_tcp@dns-test-service.dns-2197.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local]

Feb 14 00:25:26.199: INFO: Unable to read wheezy_udp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:26.203: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:26.208: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:26.213: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:26.243: INFO: Unable to read jessie_udp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:26.248: INFO: Unable to read jessie_tcp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:26.251: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:26.255: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:26.278: INFO: Lookups using dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d failed for: [wheezy_udp@dns-test-service.dns-2197.svc.cluster.local wheezy_tcp@dns-test-service.dns-2197.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local jessie_udp@dns-test-service.dns-2197.svc.cluster.local jessie_tcp@dns-test-service.dns-2197.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local]

Feb 14 00:25:31.202: INFO: Unable to read wheezy_udp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:31.209: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:31.213: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:31.216: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:31.238: INFO: Unable to read jessie_udp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:31.242: INFO: Unable to read jessie_tcp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:31.245: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:31.247: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:31.269: INFO: Lookups using dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d failed for: [wheezy_udp@dns-test-service.dns-2197.svc.cluster.local wheezy_tcp@dns-test-service.dns-2197.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local jessie_udp@dns-test-service.dns-2197.svc.cluster.local jessie_tcp@dns-test-service.dns-2197.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local]

Feb 14 00:25:36.200: INFO: Unable to read wheezy_udp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:36.207: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:36.215: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:36.221: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:36.303: INFO: Unable to read jessie_udp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:36.309: INFO: Unable to read jessie_tcp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:36.313: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:36.317: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:36.349: INFO: Lookups using dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d failed for: [wheezy_udp@dns-test-service.dns-2197.svc.cluster.local wheezy_tcp@dns-test-service.dns-2197.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local jessie_udp@dns-test-service.dns-2197.svc.cluster.local jessie_tcp@dns-test-service.dns-2197.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local]

Feb 14 00:25:41.199: INFO: Unable to read wheezy_udp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:41.208: INFO: Unable to read wheezy_tcp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:41.212: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:41.220: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:41.262: INFO: Unable to read jessie_udp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:41.267: INFO: Unable to read jessie_tcp@dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:41.271: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:41.275: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local from pod dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d: the server could not find the requested resource (get pods dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d)
Feb 14 00:25:41.297: INFO: Lookups using dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d failed for: [wheezy_udp@dns-test-service.dns-2197.svc.cluster.local wheezy_tcp@dns-test-service.dns-2197.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local jessie_udp@dns-test-service.dns-2197.svc.cluster.local jessie_tcp@dns-test-service.dns-2197.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-2197.svc.cluster.local]

Feb 14 00:25:46.346: INFO: DNS probes using dns-2197/dns-test-3424565a-739a-4743-9387-b71eeb2f3e1d succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:25:46.821: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2197" for this suite.

• [SLOW TEST:45.369 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":280,"completed":94,"skipped":1404,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:25:46.870: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-5379
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 14 00:25:47.060: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 14 00:25:47.277: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:25:49.377: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:25:51.283: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:25:54.519: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:25:55.303: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:25:57.476: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:25:59.284: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 00:26:01.286: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 00:26:03.286: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 00:26:05.284: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 00:26:07.285: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 00:26:09.285: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 00:26:11.288: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 00:26:13.285: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 00:26:15.286: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 14 00:26:15.294: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 14 00:26:25.350: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5379 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 00:26:25.351: INFO: >>> kubeConfig: /root/.kube/config
I0214 00:26:25.420158       9 log.go:172] (0xc002b85550) (0xc002972460) Create stream
I0214 00:26:25.420405       9 log.go:172] (0xc002b85550) (0xc002972460) Stream added, broadcasting: 1
I0214 00:26:25.429365       9 log.go:172] (0xc002b85550) Reply frame received for 1
I0214 00:26:25.429491       9 log.go:172] (0xc002b85550) (0xc002e3c000) Create stream
I0214 00:26:25.429526       9 log.go:172] (0xc002b85550) (0xc002e3c000) Stream added, broadcasting: 3
I0214 00:26:25.432917       9 log.go:172] (0xc002b85550) Reply frame received for 3
I0214 00:26:25.432962       9 log.go:172] (0xc002b85550) (0xc002972500) Create stream
I0214 00:26:25.432986       9 log.go:172] (0xc002b85550) (0xc002972500) Stream added, broadcasting: 5
I0214 00:26:25.435583       9 log.go:172] (0xc002b85550) Reply frame received for 5
I0214 00:26:26.569980       9 log.go:172] (0xc002b85550) Data frame received for 3
I0214 00:26:26.570275       9 log.go:172] (0xc002e3c000) (3) Data frame handling
I0214 00:26:26.570346       9 log.go:172] (0xc002e3c000) (3) Data frame sent
I0214 00:26:26.724074       9 log.go:172] (0xc002b85550) (0xc002e3c000) Stream removed, broadcasting: 3
I0214 00:26:26.724583       9 log.go:172] (0xc002b85550) Data frame received for 1
I0214 00:26:26.724729       9 log.go:172] (0xc002b85550) (0xc002972500) Stream removed, broadcasting: 5
I0214 00:26:26.724885       9 log.go:172] (0xc002972460) (1) Data frame handling
I0214 00:26:26.724937       9 log.go:172] (0xc002972460) (1) Data frame sent
I0214 00:26:26.724954       9 log.go:172] (0xc002b85550) (0xc002972460) Stream removed, broadcasting: 1
I0214 00:26:26.724984       9 log.go:172] (0xc002b85550) Go away received
I0214 00:26:26.725692       9 log.go:172] (0xc002b85550) (0xc002972460) Stream removed, broadcasting: 1
I0214 00:26:26.725710       9 log.go:172] (0xc002b85550) (0xc002e3c000) Stream removed, broadcasting: 3
I0214 00:26:26.725722       9 log.go:172] (0xc002b85550) (0xc002972500) Stream removed, broadcasting: 5
Feb 14 00:26:26.725: INFO: Found all expected endpoints: [netserver-0]
Feb 14 00:26:26.736: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5379 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 00:26:26.736: INFO: >>> kubeConfig: /root/.kube/config
I0214 00:26:27.181535       9 log.go:172] (0xc002e84000) (0xc0024a4280) Create stream
I0214 00:26:27.181619       9 log.go:172] (0xc002e84000) (0xc0024a4280) Stream added, broadcasting: 1
I0214 00:26:27.186441       9 log.go:172] (0xc002e84000) Reply frame received for 1
I0214 00:26:27.186467       9 log.go:172] (0xc002e84000) (0xc000933f40) Create stream
I0214 00:26:27.186475       9 log.go:172] (0xc002e84000) (0xc000933f40) Stream added, broadcasting: 3
I0214 00:26:27.187453       9 log.go:172] (0xc002e84000) Reply frame received for 3
I0214 00:26:27.187489       9 log.go:172] (0xc002e84000) (0xc0024a4320) Create stream
I0214 00:26:27.187507       9 log.go:172] (0xc002e84000) (0xc0024a4320) Stream added, broadcasting: 5
I0214 00:26:27.192558       9 log.go:172] (0xc002e84000) Reply frame received for 5
I0214 00:26:28.269013       9 log.go:172] (0xc002e84000) Data frame received for 3
I0214 00:26:28.269095       9 log.go:172] (0xc000933f40) (3) Data frame handling
I0214 00:26:28.269114       9 log.go:172] (0xc000933f40) (3) Data frame sent
I0214 00:26:28.351457       9 log.go:172] (0xc002e84000) (0xc000933f40) Stream removed, broadcasting: 3
I0214 00:26:28.351879       9 log.go:172] (0xc002e84000) (0xc0024a4320) Stream removed, broadcasting: 5
I0214 00:26:28.352113       9 log.go:172] (0xc002e84000) Data frame received for 1
I0214 00:26:28.352403       9 log.go:172] (0xc0024a4280) (1) Data frame handling
I0214 00:26:28.352536       9 log.go:172] (0xc0024a4280) (1) Data frame sent
I0214 00:26:28.352634       9 log.go:172] (0xc002e84000) (0xc0024a4280) Stream removed, broadcasting: 1
I0214 00:26:28.352768       9 log.go:172] (0xc002e84000) Go away received
I0214 00:26:28.353204       9 log.go:172] (0xc002e84000) (0xc0024a4280) Stream removed, broadcasting: 1
I0214 00:26:28.353231       9 log.go:172] (0xc002e84000) (0xc000933f40) Stream removed, broadcasting: 3
I0214 00:26:28.353256       9 log.go:172] (0xc002e84000) (0xc0024a4320) Stream removed, broadcasting: 5
Feb 14 00:26:28.353: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:26:28.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5379" for this suite.

• [SLOW TEST:41.501 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":95,"skipped":1406,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:26:28.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Feb 14 00:26:28.482: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:26:52.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-2084" for this suite.

• [SLOW TEST:23.996 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":280,"completed":96,"skipped":1413,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:26:52.369: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 00:26:52.431: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb 14 00:26:56.013: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3762 create -f -'
Feb 14 00:26:58.844: INFO: stderr: ""
Feb 14 00:26:58.844: INFO: stdout: "e2e-test-crd-publish-openapi-5206-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb 14 00:26:58.845: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3762 delete e2e-test-crd-publish-openapi-5206-crds test-cr'
Feb 14 00:26:58.976: INFO: stderr: ""
Feb 14 00:26:58.976: INFO: stdout: "e2e-test-crd-publish-openapi-5206-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Feb 14 00:26:58.977: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3762 apply -f -'
Feb 14 00:26:59.523: INFO: stderr: ""
Feb 14 00:26:59.523: INFO: stdout: "e2e-test-crd-publish-openapi-5206-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Feb 14 00:26:59.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3762 delete e2e-test-crd-publish-openapi-5206-crds test-cr'
Feb 14 00:26:59.625: INFO: stderr: ""
Feb 14 00:26:59.625: INFO: stdout: "e2e-test-crd-publish-openapi-5206-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Feb 14 00:26:59.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5206-crds'
Feb 14 00:26:59.895: INFO: stderr: ""
Feb 14 00:26:59.895: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-5206-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:27:02.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3762" for this suite.

• [SLOW TEST:9.736 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":280,"completed":97,"skipped":1423,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-network] DNS 
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:27:02.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7601.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7601.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 00:27:14.281: INFO: DNS probes using dns-7601/dns-test-e6af340d-bfa5-4265-b7ab-c30db8091648 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:27:14.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7601" for this suite.

• [SLOW TEST:12.384 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for the cluster  [Conformance]","total":280,"completed":98,"skipped":1423,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:27:14.491: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service multi-endpoint-test in namespace services-7319
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7319 to expose endpoints map[]
Feb 14 00:27:14.839: INFO: Get endpoints failed (172.627707ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Feb 14 00:27:15.853: INFO: successfully validated that service multi-endpoint-test in namespace services-7319 exposes endpoints map[] (1.185721001s elapsed)
STEP: Creating pod pod1 in namespace services-7319
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7319 to expose endpoints map[pod1:[100]]
Feb 14 00:27:19.977: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.103151005s elapsed, will retry)
Feb 14 00:27:25.044: INFO: successfully validated that service multi-endpoint-test in namespace services-7319 exposes endpoints map[pod1:[100]] (9.169696285s elapsed)
STEP: Creating pod pod2 in namespace services-7319
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7319 to expose endpoints map[pod1:[100] pod2:[101]]
Feb 14 00:27:30.563: INFO: Unexpected endpoints: found map[cdcac0e8-5b01-4017-b875-80731300ffd9:[100]], expected map[pod1:[100] pod2:[101]] (5.501992418s elapsed, will retry)
Feb 14 00:27:31.622: INFO: successfully validated that service multi-endpoint-test in namespace services-7319 exposes endpoints map[pod1:[100] pod2:[101]] (6.560528859s elapsed)
STEP: Deleting pod pod1 in namespace services-7319
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7319 to expose endpoints map[pod2:[101]]
Feb 14 00:27:31.700: INFO: successfully validated that service multi-endpoint-test in namespace services-7319 exposes endpoints map[pod2:[101]] (71.186631ms elapsed)
STEP: Deleting pod pod2 in namespace services-7319
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-7319 to expose endpoints map[]
Feb 14 00:27:32.835: INFO: successfully validated that service multi-endpoint-test in namespace services-7319 exposes endpoints map[] (1.035341876s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:27:32.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7319" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:18.530 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":280,"completed":99,"skipped":1431,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:27:33.023: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Feb 14 00:27:33.370: INFO: Number of nodes with available pods: 0
Feb 14 00:27:33.370: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:35.458: INFO: Number of nodes with available pods: 0
Feb 14 00:27:35.458: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:36.392: INFO: Number of nodes with available pods: 0
Feb 14 00:27:36.392: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:37.391: INFO: Number of nodes with available pods: 0
Feb 14 00:27:37.392: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:39.074: INFO: Number of nodes with available pods: 0
Feb 14 00:27:39.074: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:39.566: INFO: Number of nodes with available pods: 0
Feb 14 00:27:39.566: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:40.383: INFO: Number of nodes with available pods: 0
Feb 14 00:27:40.383: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:42.806: INFO: Number of nodes with available pods: 0
Feb 14 00:27:42.806: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:43.556: INFO: Number of nodes with available pods: 0
Feb 14 00:27:43.556: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:44.397: INFO: Number of nodes with available pods: 0
Feb 14 00:27:44.397: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:45.452: INFO: Number of nodes with available pods: 2
Feb 14 00:27:45.452: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Feb 14 00:27:45.653: INFO: Number of nodes with available pods: 1
Feb 14 00:27:45.654: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:46.666: INFO: Number of nodes with available pods: 1
Feb 14 00:27:46.667: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:47.667: INFO: Number of nodes with available pods: 1
Feb 14 00:27:47.667: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:48.670: INFO: Number of nodes with available pods: 1
Feb 14 00:27:48.671: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:49.667: INFO: Number of nodes with available pods: 1
Feb 14 00:27:49.667: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:50.666: INFO: Number of nodes with available pods: 1
Feb 14 00:27:50.666: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:51.664: INFO: Number of nodes with available pods: 1
Feb 14 00:27:51.664: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:52.675: INFO: Number of nodes with available pods: 1
Feb 14 00:27:52.675: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:53.671: INFO: Number of nodes with available pods: 1
Feb 14 00:27:53.671: INFO: Node jerma-node is running more than one daemon pod
Feb 14 00:27:54.665: INFO: Number of nodes with available pods: 2
Feb 14 00:27:54.665: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6540, will wait for the garbage collector to delete the pods
Feb 14 00:27:54.771: INFO: Deleting DaemonSet.extensions daemon-set took: 11.785438ms
Feb 14 00:27:55.071: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.505794ms
Feb 14 00:28:13.077: INFO: Number of nodes with available pods: 0
Feb 14 00:28:13.078: INFO: Number of running nodes: 0, number of available pods: 0
Feb 14 00:28:13.085: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6540/daemonsets","resourceVersion":"8271593"},"items":null}

Feb 14 00:28:13.088: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6540/pods","resourceVersion":"8271593"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:28:13.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-6540" for this suite.

• [SLOW TEST:40.083 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":280,"completed":100,"skipped":1456,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:28:13.107: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Feb 14 00:28:13.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8900'
Feb 14 00:28:14.058: INFO: stderr: ""
Feb 14 00:28:14.059: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 00:28:14.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8900'
Feb 14 00:28:14.283: INFO: stderr: ""
Feb 14 00:28:14.284: INFO: stdout: "update-demo-nautilus-7nmfd update-demo-nautilus-cpjnw "
Feb 14 00:28:14.284: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7nmfd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:14.399: INFO: stderr: ""
Feb 14 00:28:14.399: INFO: stdout: ""
Feb 14 00:28:14.399: INFO: update-demo-nautilus-7nmfd is created but not running
Feb 14 00:28:19.400: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8900'
Feb 14 00:28:19.580: INFO: stderr: ""
Feb 14 00:28:19.580: INFO: stdout: "update-demo-nautilus-7nmfd update-demo-nautilus-cpjnw "
Feb 14 00:28:19.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7nmfd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:19.861: INFO: stderr: ""
Feb 14 00:28:19.861: INFO: stdout: ""
Feb 14 00:28:19.861: INFO: update-demo-nautilus-7nmfd is created but not running
Feb 14 00:28:24.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8900'
Feb 14 00:28:25.085: INFO: stderr: ""
Feb 14 00:28:25.085: INFO: stdout: "update-demo-nautilus-7nmfd update-demo-nautilus-cpjnw "
Feb 14 00:28:25.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7nmfd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:25.189: INFO: stderr: ""
Feb 14 00:28:25.189: INFO: stdout: ""
Feb 14 00:28:25.189: INFO: update-demo-nautilus-7nmfd is created but not running
Feb 14 00:28:30.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8900'
Feb 14 00:28:30.381: INFO: stderr: ""
Feb 14 00:28:30.381: INFO: stdout: "update-demo-nautilus-7nmfd update-demo-nautilus-cpjnw "
Feb 14 00:28:30.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7nmfd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:30.471: INFO: stderr: ""
Feb 14 00:28:30.471: INFO: stdout: "true"
Feb 14 00:28:30.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-7nmfd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:30.580: INFO: stderr: ""
Feb 14 00:28:30.581: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 00:28:30.581: INFO: validating pod update-demo-nautilus-7nmfd
Feb 14 00:28:30.604: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 00:28:30.605: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 00:28:30.605: INFO: update-demo-nautilus-7nmfd is verified up and running
Feb 14 00:28:30.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpjnw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:30.755: INFO: stderr: ""
Feb 14 00:28:30.755: INFO: stdout: "true"
Feb 14 00:28:30.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpjnw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:30.893: INFO: stderr: ""
Feb 14 00:28:30.893: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 00:28:30.893: INFO: validating pod update-demo-nautilus-cpjnw
Feb 14 00:28:30.909: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 00:28:30.910: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 00:28:30.910: INFO: update-demo-nautilus-cpjnw is verified up and running
STEP: scaling down the replication controller
Feb 14 00:28:30.913: INFO: scanned /root for discovery docs: 
Feb 14 00:28:30.914: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8900'
Feb 14 00:28:32.110: INFO: stderr: ""
Feb 14 00:28:32.110: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 00:28:32.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8900'
Feb 14 00:28:32.362: INFO: stderr: ""
Feb 14 00:28:32.363: INFO: stdout: "update-demo-nautilus-7nmfd update-demo-nautilus-cpjnw "
STEP: Replicas for name=update-demo: expected=1 actual=2
Feb 14 00:28:37.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8900'
Feb 14 00:28:37.702: INFO: stderr: ""
Feb 14 00:28:37.702: INFO: stdout: "update-demo-nautilus-cpjnw "
Feb 14 00:28:37.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpjnw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:38.314: INFO: stderr: ""
Feb 14 00:28:38.315: INFO: stdout: "true"
Feb 14 00:28:38.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpjnw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:38.503: INFO: stderr: ""
Feb 14 00:28:38.503: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 00:28:38.503: INFO: validating pod update-demo-nautilus-cpjnw
Feb 14 00:28:38.516: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 00:28:38.516: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 00:28:38.516: INFO: update-demo-nautilus-cpjnw is verified up and running
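
The scale-down just verified (and the scale-up that follows) is ordinary kubectl against the replication controller; by hand, with the same flags the log shows:

kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-8900
kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8900   # converges to one nautilus pod

The polling loop around it exists because scaling is asynchronous: .spec.replicas changes immediately while the pod list catches up, hence the "expected=1 actual=2" retry above.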
STEP: scaling up the replication controller
Feb 14 00:28:38.524: INFO: scanned /root for discovery docs: 
Feb 14 00:28:38.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-8900'
Feb 14 00:28:40.358: INFO: stderr: ""
Feb 14 00:28:40.359: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 00:28:40.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8900'
Feb 14 00:28:41.169: INFO: stderr: ""
Feb 14 00:28:41.169: INFO: stdout: "update-demo-nautilus-cpjnw update-demo-nautilus-mmsqj "
Feb 14 00:28:41.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpjnw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:41.356: INFO: stderr: ""
Feb 14 00:28:41.356: INFO: stdout: "true"
Feb 14 00:28:41.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpjnw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:41.432: INFO: stderr: ""
Feb 14 00:28:41.432: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 00:28:41.433: INFO: validating pod update-demo-nautilus-cpjnw
Feb 14 00:28:41.439: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 00:28:41.439: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Feb 14 00:28:41.439: INFO: update-demo-nautilus-cpjnw is verified up and running
Feb 14 00:28:41.439: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mmsqj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:41.519: INFO: stderr: ""
Feb 14 00:28:41.519: INFO: stdout: ""
Feb 14 00:28:41.519: INFO: update-demo-nautilus-mmsqj is created but not running
Feb 14 00:28:46.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8900'
Feb 14 00:28:46.657: INFO: stderr: ""
Feb 14 00:28:46.658: INFO: stdout: "update-demo-nautilus-cpjnw update-demo-nautilus-mmsqj "
Feb 14 00:28:46.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpjnw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:47.228: INFO: stderr: ""
Feb 14 00:28:47.229: INFO: stdout: "true"
Feb 14 00:28:47.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpjnw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:47.412: INFO: stderr: ""
Feb 14 00:28:47.412: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 00:28:47.412: INFO: validating pod update-demo-nautilus-cpjnw
Feb 14 00:28:47.424: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 00:28:47.424: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 14 00:28:47.424: INFO: update-demo-nautilus-cpjnw is verified up and running
Feb 14 00:28:47.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mmsqj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:47.542: INFO: stderr: ""
Feb 14 00:28:47.542: INFO: stdout: ""
Feb 14 00:28:47.542: INFO: update-demo-nautilus-mmsqj is created but not running
Feb 14 00:28:52.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8900'
Feb 14 00:28:52.651: INFO: stderr: ""
Feb 14 00:28:52.651: INFO: stdout: "update-demo-nautilus-cpjnw update-demo-nautilus-mmsqj "
Feb 14 00:28:52.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpjnw -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:52.788: INFO: stderr: ""
Feb 14 00:28:52.788: INFO: stdout: "true"
Feb 14 00:28:52.788: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cpjnw -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:52.930: INFO: stderr: ""
Feb 14 00:28:52.930: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 00:28:52.930: INFO: validating pod update-demo-nautilus-cpjnw
Feb 14 00:28:52.936: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 00:28:52.936: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 14 00:28:52.936: INFO: update-demo-nautilus-cpjnw is verified up and running
Feb 14 00:28:52.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mmsqj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:53.062: INFO: stderr: ""
Feb 14 00:28:53.062: INFO: stdout: "true"
Feb 14 00:28:53.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-mmsqj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8900'
Feb 14 00:28:53.158: INFO: stderr: ""
Feb 14 00:28:53.158: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 00:28:53.158: INFO: validating pod update-demo-nautilus-mmsqj
Feb 14 00:28:53.162: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 00:28:53.162: INFO: Unmarshalled JSON jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 14 00:28:53.162: INFO: update-demo-nautilus-mmsqj is verified up and running
STEP: using delete to clean up resources
Feb 14 00:28:53.162: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8900'
Feb 14 00:28:53.280: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 00:28:53.280: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 14 00:28:53.280: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8900'
Feb 14 00:28:53.415: INFO: stderr: "No resources found in kubectl-8900 namespace.\n"
Feb 14 00:28:53.415: INFO: stdout: ""
Feb 14 00:28:53.415: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8900 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 14 00:28:53.498: INFO: stderr: ""
Feb 14 00:28:53.498: INFO: stdout: "update-demo-nautilus-cpjnw\nupdate-demo-nautilus-mmsqj\n"
Feb 14 00:28:53.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8900'
Feb 14 00:28:54.250: INFO: stderr: "No resources found in kubectl-8900 namespace.\n"
Feb 14 00:28:54.250: INFO: stdout: ""
Feb 14 00:28:54.251: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8900 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 14 00:28:54.381: INFO: stderr: ""
Feb 14 00:28:54.381: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:28:54.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8900" for this suite.

• [SLOW TEST:41.286 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":280,"completed":101,"skipped":1472,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:28:54.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-0adc3c76-d301-4a22-96cd-25c41891fff2
STEP: Creating configMap with name cm-test-opt-upd-29f856b6-b66d-4de3-8246-7d6ab1f84287
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-0adc3c76-d301-4a22-96cd-25c41891fff2
STEP: Updating configmap cm-test-opt-upd-29f856b6-b66d-4de3-8246-7d6ab1f84287
STEP: Creating configMap with name cm-test-opt-create-f722ad10-5187-4db8-9fd9-2123580b4af9
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:30:41.682: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5890" for this suite.

• [SLOW TEST:107.307 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":102,"skipped":1487,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:30:41.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 14 00:30:41.855: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16819b16-f5ea-451b-870d-facda495d009" in namespace "projected-7292" to be "success or failure"
Feb 14 00:30:41.874: INFO: Pod "downwardapi-volume-16819b16-f5ea-451b-870d-facda495d009": Phase="Pending", Reason="", readiness=false. Elapsed: 18.65998ms
Feb 14 00:30:43.887: INFO: Pod "downwardapi-volume-16819b16-f5ea-451b-870d-facda495d009": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03168269s
Feb 14 00:30:45.897: INFO: Pod "downwardapi-volume-16819b16-f5ea-451b-870d-facda495d009": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041725209s
Feb 14 00:30:47.909: INFO: Pod "downwardapi-volume-16819b16-f5ea-451b-870d-facda495d009": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053529319s
Feb 14 00:30:49.918: INFO: Pod "downwardapi-volume-16819b16-f5ea-451b-870d-facda495d009": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063094944s
Feb 14 00:30:51.929: INFO: Pod "downwardapi-volume-16819b16-f5ea-451b-870d-facda495d009": Phase="Pending", Reason="", readiness=false. Elapsed: 10.073586533s
Feb 14 00:30:53.943: INFO: Pod "downwardapi-volume-16819b16-f5ea-451b-870d-facda495d009": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.087667591s
STEP: Saw pod success
Feb 14 00:30:53.943: INFO: Pod "downwardapi-volume-16819b16-f5ea-451b-870d-facda495d009" satisfied condition "success or failure"
Feb 14 00:30:53.951: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-16819b16-f5ea-451b-870d-facda495d009 container client-container: 
STEP: delete the pod
Feb 14 00:30:54.080: INFO: Waiting for pod downwardapi-volume-16819b16-f5ea-451b-870d-facda495d009 to disappear
Feb 14 00:30:54.087: INFO: Pod downwardapi-volume-16819b16-f5ea-451b-870d-facda495d009 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:30:54.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7292" for this suite.

• [SLOW TEST:12.409 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":103,"skipped":1487,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:30:54.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Feb 14 00:30:54.231: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1420 /api/v1/namespaces/watch-1420/configmaps/e2e-watch-test-configmap-a a4df8a2b-801e-4465-8511-c2d65634f6aa 8272116 0 2020-02-14 00:30:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 00:30:54.231: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1420 /api/v1/namespaces/watch-1420/configmaps/e2e-watch-test-configmap-a a4df8a2b-801e-4465-8511-c2d65634f6aa 8272116 0 2020-02-14 00:30:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Feb 14 00:31:04.253: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1420 /api/v1/namespaces/watch-1420/configmaps/e2e-watch-test-configmap-a a4df8a2b-801e-4465-8511-c2d65634f6aa 8272151 0 2020-02-14 00:30:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 00:31:04.254: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1420 /api/v1/namespaces/watch-1420/configmaps/e2e-watch-test-configmap-a a4df8a2b-801e-4465-8511-c2d65634f6aa 8272151 0 2020-02-14 00:30:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Feb 14 00:31:14.351: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1420 /api/v1/namespaces/watch-1420/configmaps/e2e-watch-test-configmap-a a4df8a2b-801e-4465-8511-c2d65634f6aa 8272177 0 2020-02-14 00:30:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 00:31:14.352: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1420 /api/v1/namespaces/watch-1420/configmaps/e2e-watch-test-configmap-a a4df8a2b-801e-4465-8511-c2d65634f6aa 8272177 0 2020-02-14 00:30:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Feb 14 00:31:24.363: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1420 /api/v1/namespaces/watch-1420/configmaps/e2e-watch-test-configmap-a a4df8a2b-801e-4465-8511-c2d65634f6aa 8272199 0 2020-02-14 00:30:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 00:31:24.363: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-1420 /api/v1/namespaces/watch-1420/configmaps/e2e-watch-test-configmap-a a4df8a2b-801e-4465-8511-c2d65634f6aa 8272199 0 2020-02-14 00:30:54 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Feb 14 00:31:34.378: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1420 /api/v1/namespaces/watch-1420/configmaps/e2e-watch-test-configmap-b e59df2d1-b998-4181-89df-a10cd5c4ccf3 8272223 0 2020-02-14 00:31:34 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 00:31:34.378: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1420 /api/v1/namespaces/watch-1420/configmaps/e2e-watch-test-configmap-b e59df2d1-b998-4181-89df-a10cd5c4ccf3 8272223 0 2020-02-14 00:31:34 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Feb 14 00:31:44.388: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1420 /api/v1/namespaces/watch-1420/configmaps/e2e-watch-test-configmap-b e59df2d1-b998-4181-89df-a10cd5c4ccf3 8272247 0 2020-02-14 00:31:34 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 00:31:44.389: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-1420 /api/v1/namespaces/watch-1420/configmaps/e2e-watch-test-configmap-b e59df2d1-b998-4181-89df-a10cd5c4ccf3 8272247 0 2020-02-14 00:31:34 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:31:54.390: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-1420" for this suite.

• [SLOW TEST:60.298 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":280,"completed":104,"skipped":1490,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:31:54.412: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-caf25af3-0531-4cdc-9ac8-91380e2b9224
STEP: Creating a pod to test consume secrets
Feb 14 00:31:54.543: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-20bfe25b-f626-412c-93e7-8eb21569bc93" in namespace "projected-4077" to be "success or failure"
Feb 14 00:31:54.562: INFO: Pod "pod-projected-secrets-20bfe25b-f626-412c-93e7-8eb21569bc93": Phase="Pending", Reason="", readiness=false. Elapsed: 18.516337ms
Feb 14 00:31:56.577: INFO: Pod "pod-projected-secrets-20bfe25b-f626-412c-93e7-8eb21569bc93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033707395s
Feb 14 00:31:58.594: INFO: Pod "pod-projected-secrets-20bfe25b-f626-412c-93e7-8eb21569bc93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050267314s
Feb 14 00:32:00.605: INFO: Pod "pod-projected-secrets-20bfe25b-f626-412c-93e7-8eb21569bc93": Phase="Pending", Reason="", readiness=false. Elapsed: 6.061992184s
Feb 14 00:32:02.621: INFO: Pod "pod-projected-secrets-20bfe25b-f626-412c-93e7-8eb21569bc93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.077711971s
STEP: Saw pod success
Feb 14 00:32:02.621: INFO: Pod "pod-projected-secrets-20bfe25b-f626-412c-93e7-8eb21569bc93" satisfied condition "success or failure"
Feb 14 00:32:02.626: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-20bfe25b-f626-412c-93e7-8eb21569bc93 container projected-secret-volume-test: 
STEP: delete the pod
Feb 14 00:32:02.691: INFO: Waiting for pod pod-projected-secrets-20bfe25b-f626-412c-93e7-8eb21569bc93 to disappear
Feb 14 00:32:02.712: INFO: Pod pod-projected-secrets-20bfe25b-f626-412c-93e7-8eb21569bc93 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:32:02.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4077" for this suite.

• [SLOW TEST:8.315 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":105,"skipped":1509,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:32:02.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 00:32:02.923: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:32:04.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7839" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":280,"completed":106,"skipped":1536,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:32:04.061: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on node default medium
Feb 14 00:32:04.244: INFO: Waiting up to 5m0s for pod "pod-6174a004-5497-4166-bf7f-cffcac444b3e" in namespace "emptydir-3272" to be "success or failure"
Feb 14 00:32:04.254: INFO: Pod "pod-6174a004-5497-4166-bf7f-cffcac444b3e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.103027ms
Feb 14 00:32:06.263: INFO: Pod "pod-6174a004-5497-4166-bf7f-cffcac444b3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018991819s
Feb 14 00:32:08.287: INFO: Pod "pod-6174a004-5497-4166-bf7f-cffcac444b3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042525328s
Feb 14 00:32:10.297: INFO: Pod "pod-6174a004-5497-4166-bf7f-cffcac444b3e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052650045s
Feb 14 00:32:12.319: INFO: Pod "pod-6174a004-5497-4166-bf7f-cffcac444b3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074361047s
STEP: Saw pod success
Feb 14 00:32:12.319: INFO: Pod "pod-6174a004-5497-4166-bf7f-cffcac444b3e" satisfied condition "success or failure"
Feb 14 00:32:12.324: INFO: Trying to get logs from node jerma-node pod pod-6174a004-5497-4166-bf7f-cffcac444b3e container test-container: 
STEP: delete the pod
Feb 14 00:32:12.377: INFO: Waiting for pod pod-6174a004-5497-4166-bf7f-cffcac444b3e to disappear
Feb 14 00:32:12.384: INFO: Pod pod-6174a004-5497-4166-bf7f-cffcac444b3e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:32:12.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3272" for this suite.

• [SLOW TEST:8.339 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":107,"skipped":1555,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when creating a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:32:12.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 14 00:32:28.665: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 00:32:28.695: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 00:32:30.695: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 00:32:30.702: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 00:32:32.695: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 00:32:32.702: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 00:32:34.696: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 00:32:34.708: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 00:32:36.695: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 00:32:36.702: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 00:32:38.695: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 00:32:38.701: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 00:32:40.695: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 00:32:40.703: INFO: Pod pod-with-poststart-exec-hook still exists
Feb 14 00:32:42.695: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Feb 14 00:32:42.704: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:32:42.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3114" for this suite.

• [SLOW TEST:30.315 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":280,"completed":108,"skipped":1588,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:32:42.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Feb 14 00:32:42.843: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-850 /api/v1/namespaces/watch-850/configmaps/e2e-watch-test-label-changed 6ef12cc1-0734-4a24-a17b-2d83ebf20872 8272482 0 2020-02-14 00:32:42 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 00:32:42.844: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-850 /api/v1/namespaces/watch-850/configmaps/e2e-watch-test-label-changed 6ef12cc1-0734-4a24-a17b-2d83ebf20872 8272483 0 2020-02-14 00:32:42 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 00:32:42.844: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-850 /api/v1/namespaces/watch-850/configmaps/e2e-watch-test-label-changed 6ef12cc1-0734-4a24-a17b-2d83ebf20872 8272484 0 2020-02-14 00:32:42 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Feb 14 00:32:52.890: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-850 /api/v1/namespaces/watch-850/configmaps/e2e-watch-test-label-changed 6ef12cc1-0734-4a24-a17b-2d83ebf20872 8272525 0 2020-02-14 00:32:42 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 00:32:52.891: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-850 /api/v1/namespaces/watch-850/configmaps/e2e-watch-test-label-changed 6ef12cc1-0734-4a24-a17b-2d83ebf20872 8272526 0 2020-02-14 00:32:42 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 00:32:52.891: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-850 /api/v1/namespaces/watch-850/configmaps/e2e-watch-test-label-changed 6ef12cc1-0734-4a24-a17b-2d83ebf20872 8272527 0 2020-02-14 00:32:42 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:32:52.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-850" for this suite.

• [SLOW TEST:10.185 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":280,"completed":109,"skipped":1607,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:32:52.905: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating Agnhost RC
Feb 14 00:32:53.074: INFO: namespace kubectl-9028
Feb 14 00:32:53.074: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9028'
Feb 14 00:32:53.685: INFO: stderr: ""
Feb 14 00:32:53.686: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 14 00:32:54.704: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 00:32:54.704: INFO: Found 0 / 1
Feb 14 00:32:55.703: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 00:32:55.704: INFO: Found 0 / 1
Feb 14 00:32:56.792: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 00:32:56.793: INFO: Found 0 / 1
Feb 14 00:32:57.729: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 00:32:57.730: INFO: Found 0 / 1
Feb 14 00:32:58.736: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 00:32:58.737: INFO: Found 0 / 1
Feb 14 00:32:59.699: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 00:32:59.700: INFO: Found 0 / 1
Feb 14 00:33:00.696: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 00:33:00.696: INFO: Found 0 / 1
Feb 14 00:33:01.695: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 00:33:01.696: INFO: Found 1 / 1
Feb 14 00:33:01.696: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 14 00:33:01.700: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 00:33:01.700: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 14 00:33:01.700: INFO: wait on agnhost-master startup in kubectl-9028 
Feb 14 00:33:01.701: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-mt2cc agnhost-master --namespace=kubectl-9028'
Feb 14 00:33:01.978: INFO: stderr: ""
Feb 14 00:33:01.978: INFO: stdout: "Paused\n"
STEP: exposing RC
Feb 14 00:33:01.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-9028'
Feb 14 00:33:02.200: INFO: stderr: ""
Feb 14 00:33:02.200: INFO: stdout: "service/rm2 exposed\n"
Feb 14 00:33:02.207: INFO: Service rm2 in namespace kubectl-9028 found.
STEP: exposing service
Feb 14 00:33:04.217: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-9028'
Feb 14 00:33:04.372: INFO: stderr: ""
Feb 14 00:33:04.373: INFO: stdout: "service/rm3 exposed\n"
Feb 14 00:33:04.379: INFO: Service rm3 in namespace kubectl-9028 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:33:06.393: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9028" for this suite.

• [SLOW TEST:13.501 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1297
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":280,"completed":110,"skipped":1616,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:33:06.408: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 00:33:06.531: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Feb 14 00:33:06.561: INFO: Number of nodes with available pods: 0
Feb 14 00:33:06.561: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 14 00:33:08.571: INFO: Number of nodes with available pods: 0
Feb 14 00:33:08.571: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 14 00:33:10.192: INFO: Number of nodes with available pods: 0
Feb 14 00:33:10.192: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 14 00:33:10.590: INFO: Number of nodes with available pods: 0
Feb 14 00:33:10.590: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 14 00:33:11.580: INFO: Number of nodes with available pods: 0
Feb 14 00:33:11.580: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 14 00:33:13.932: INFO: Number of nodes with available pods: 0
Feb 14 00:33:13.932: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 14 00:33:14.766: INFO: Number of nodes with available pods: 0
Feb 14 00:33:14.767: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 14 00:33:15.665: INFO: Number of nodes with available pods: 0
Feb 14 00:33:15.665: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 14 00:33:16.621: INFO: Number of nodes with available pods: 1
Feb 14 00:33:16.622: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 14 00:33:17.584: INFO: Number of nodes with available pods: 2
Feb 14 00:33:17.585: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Feb 14 00:33:17.667: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:17.667: INFO: Wrong image for pod: daemon-set-svdpz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:18.688: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:18.688: INFO: Wrong image for pod: daemon-set-svdpz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:19.689: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:19.690: INFO: Wrong image for pod: daemon-set-svdpz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:20.687: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:20.688: INFO: Wrong image for pod: daemon-set-svdpz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:24.778: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:24.778: INFO: Wrong image for pod: daemon-set-svdpz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:25.689: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:25.690: INFO: Wrong image for pod: daemon-set-svdpz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:26.688: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:26.688: INFO: Wrong image for pod: daemon-set-svdpz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:26.688: INFO: Pod daemon-set-svdpz is not available
Feb 14 00:33:27.690: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:27.691: INFO: Wrong image for pod: daemon-set-svdpz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:27.691: INFO: Pod daemon-set-svdpz is not available
Feb 14 00:33:28.687: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:28.687: INFO: Wrong image for pod: daemon-set-svdpz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:28.687: INFO: Pod daemon-set-svdpz is not available
Feb 14 00:33:29.719: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:29.719: INFO: Wrong image for pod: daemon-set-svdpz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:29.719: INFO: Pod daemon-set-svdpz is not available
Feb 14 00:33:30.704: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:30.705: INFO: Wrong image for pod: daemon-set-svdpz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:30.705: INFO: Pod daemon-set-svdpz is not available
Feb 14 00:33:31.691: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:31.692: INFO: Wrong image for pod: daemon-set-svdpz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:31.692: INFO: Pod daemon-set-svdpz is not available
Feb 14 00:33:32.690: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:32.690: INFO: Wrong image for pod: daemon-set-svdpz. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:32.690: INFO: Pod daemon-set-svdpz is not available
Feb 14 00:33:33.695: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:33.695: INFO: Pod daemon-set-fnrh7 is not available
Feb 14 00:33:34.692: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:34.692: INFO: Pod daemon-set-fnrh7 is not available
Feb 14 00:33:36.050: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:36.050: INFO: Pod daemon-set-fnrh7 is not available
Feb 14 00:33:36.690: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:36.690: INFO: Pod daemon-set-fnrh7 is not available
Feb 14 00:33:37.688: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:37.689: INFO: Pod daemon-set-fnrh7 is not available
Feb 14 00:33:38.971: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:38.971: INFO: Pod daemon-set-fnrh7 is not available
Feb 14 00:33:39.802: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:39.802: INFO: Pod daemon-set-fnrh7 is not available
Feb 14 00:33:40.695: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:40.695: INFO: Pod daemon-set-fnrh7 is not available
Feb 14 00:33:41.691: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:42.687: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:43.737: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:44.691: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:45.696: INFO: Wrong image for pod: daemon-set-48df8. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Feb 14 00:33:45.697: INFO: Pod daemon-set-48df8 is not available
Feb 14 00:33:47.690: INFO: Pod daemon-set-lq6pz is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Feb 14 00:33:47.714: INFO: Number of nodes with available pods: 1
Feb 14 00:33:47.714: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 14 00:33:48.735: INFO: Number of nodes with available pods: 1
Feb 14 00:33:48.735: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 14 00:33:49.738: INFO: Number of nodes with available pods: 1
Feb 14 00:33:49.738: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 14 00:33:50.735: INFO: Number of nodes with available pods: 1
Feb 14 00:33:50.735: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 14 00:33:51.732: INFO: Number of nodes with available pods: 1
Feb 14 00:33:51.732: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 14 00:33:52.733: INFO: Number of nodes with available pods: 1
Feb 14 00:33:52.733: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 14 00:33:53.730: INFO: Number of nodes with available pods: 1
Feb 14 00:33:53.730: INFO: Node jerma-node is not yet running exactly one daemon pod
Feb 14 00:33:54.733: INFO: Number of nodes with available pods: 2
Feb 14 00:33:54.733: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3788, will wait for the garbage collector to delete the pods
Feb 14 00:33:54.819: INFO: Deleting DaemonSet.extensions daemon-set took: 8.631584ms
Feb 14 00:33:55.220: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.298905ms
Feb 14 00:34:04.130: INFO: Number of nodes with available pods: 0
Feb 14 00:34:04.130: INFO: Number of running nodes: 0, number of available pods: 0
Feb 14 00:34:04.173: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3788/daemonsets","resourceVersion":"8272810"},"items":null}

Feb 14 00:34:04.176: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3788/pods","resourceVersion":"8272810"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:34:04.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3788" for this suite.

• [SLOW TEST:57.811 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":280,"completed":111,"skipped":1621,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:34:04.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:34:12.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-930" for this suite.

• [SLOW TEST:8.183 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":280,"completed":112,"skipped":1649,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:34:12.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 14 00:34:12.560: INFO: Waiting up to 5m0s for pod "downwardapi-volume-394a58ae-d80f-4288-bd99-b2e0052edf2c" in namespace "downward-api-3070" to be "success or failure"
Feb 14 00:34:12.577: INFO: Pod "downwardapi-volume-394a58ae-d80f-4288-bd99-b2e0052edf2c": Phase="Pending", Reason="", readiness=false. Elapsed: 16.73167ms
Feb 14 00:34:14.589: INFO: Pod "downwardapi-volume-394a58ae-d80f-4288-bd99-b2e0052edf2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02841083s
Feb 14 00:34:16.603: INFO: Pod "downwardapi-volume-394a58ae-d80f-4288-bd99-b2e0052edf2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041986352s
Feb 14 00:34:18.625: INFO: Pod "downwardapi-volume-394a58ae-d80f-4288-bd99-b2e0052edf2c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064660029s
Feb 14 00:34:21.254: INFO: Pod "downwardapi-volume-394a58ae-d80f-4288-bd99-b2e0052edf2c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.693341537s
Feb 14 00:34:23.264: INFO: Pod "downwardapi-volume-394a58ae-d80f-4288-bd99-b2e0052edf2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.703181924s
STEP: Saw pod success
Feb 14 00:34:23.264: INFO: Pod "downwardapi-volume-394a58ae-d80f-4288-bd99-b2e0052edf2c" satisfied condition "success or failure"
Feb 14 00:34:23.273: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-394a58ae-d80f-4288-bd99-b2e0052edf2c container client-container: 
STEP: delete the pod
Feb 14 00:34:23.358: INFO: Waiting for pod downwardapi-volume-394a58ae-d80f-4288-bd99-b2e0052edf2c to disappear
Feb 14 00:34:23.373: INFO: Pod downwardapi-volume-394a58ae-d80f-4288-bd99-b2e0052edf2c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:34:23.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3070" for this suite.

• [SLOW TEST:11.078 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":280,"completed":113,"skipped":1710,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:34:23.486: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-6e032704-2053-4017-bcd5-2daa7aa90f59
STEP: Creating a pod to test consume secrets
Feb 14 00:34:23.738: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-57798ff0-cc73-442d-8c70-ff56da2c4c1f" in namespace "projected-289" to be "success or failure"
Feb 14 00:34:23.828: INFO: Pod "pod-projected-secrets-57798ff0-cc73-442d-8c70-ff56da2c4c1f": Phase="Pending", Reason="", readiness=false. Elapsed: 89.872142ms
Feb 14 00:34:25.841: INFO: Pod "pod-projected-secrets-57798ff0-cc73-442d-8c70-ff56da2c4c1f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.102824515s
Feb 14 00:34:29.118: INFO: Pod "pod-projected-secrets-57798ff0-cc73-442d-8c70-ff56da2c4c1f": Phase="Pending", Reason="", readiness=false. Elapsed: 5.37969311s
Feb 14 00:34:31.123: INFO: Pod "pod-projected-secrets-57798ff0-cc73-442d-8c70-ff56da2c4c1f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.385058216s
Feb 14 00:34:33.135: INFO: Pod "pod-projected-secrets-57798ff0-cc73-442d-8c70-ff56da2c4c1f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.397269511s
STEP: Saw pod success
Feb 14 00:34:33.135: INFO: Pod "pod-projected-secrets-57798ff0-cc73-442d-8c70-ff56da2c4c1f" satisfied condition "success or failure"
Feb 14 00:34:33.142: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-57798ff0-cc73-442d-8c70-ff56da2c4c1f container projected-secret-volume-test: 
STEP: delete the pod
Feb 14 00:34:33.211: INFO: Waiting for pod pod-projected-secrets-57798ff0-cc73-442d-8c70-ff56da2c4c1f to disappear
Feb 14 00:34:33.228: INFO: Pod pod-projected-secrets-57798ff0-cc73-442d-8c70-ff56da2c4c1f no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:34:33.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-289" for this suite.

• [SLOW TEST:9.790 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":114,"skipped":1736,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:34:33.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-3818
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-3818
I0214 00:34:33.527732       9 runners.go:189] Created replication controller with name: externalname-service, namespace: services-3818, replica count: 2
I0214 00:34:36.579019       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 00:34:39.579727       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 00:34:42.581225       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 00:34:45.581991       9 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 14 00:34:45.582: INFO: Creating new exec pod
Feb 14 00:34:56.666: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3818 execpodlrftb -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Feb 14 00:34:57.073: INFO: stderr: "I0214 00:34:56.886680    2366 log.go:172] (0xc000a64d10) (0xc000a2c500) Create stream\nI0214 00:34:56.886900    2366 log.go:172] (0xc000a64d10) (0xc000a2c500) Stream added, broadcasting: 1\nI0214 00:34:56.893438    2366 log.go:172] (0xc000a64d10) Reply frame received for 1\nI0214 00:34:56.893519    2366 log.go:172] (0xc000a64d10) (0xc000a3a000) Create stream\nI0214 00:34:56.893543    2366 log.go:172] (0xc000a64d10) (0xc000a3a000) Stream added, broadcasting: 3\nI0214 00:34:56.895724    2366 log.go:172] (0xc000a64d10) Reply frame received for 3\nI0214 00:34:56.895753    2366 log.go:172] (0xc000a64d10) (0xc000a3a3c0) Create stream\nI0214 00:34:56.895766    2366 log.go:172] (0xc000a64d10) (0xc000a3a3c0) Stream added, broadcasting: 5\nI0214 00:34:56.897415    2366 log.go:172] (0xc000a64d10) Reply frame received for 5\nI0214 00:34:56.974074    2366 log.go:172] (0xc000a64d10) Data frame received for 5\nI0214 00:34:56.974205    2366 log.go:172] (0xc000a3a3c0) (5) Data frame handling\nI0214 00:34:56.974224    2366 log.go:172] (0xc000a3a3c0) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0214 00:34:56.979997    2366 log.go:172] (0xc000a64d10) Data frame received for 5\nI0214 00:34:56.980030    2366 log.go:172] (0xc000a3a3c0) (5) Data frame handling\nI0214 00:34:56.980047    2366 log.go:172] (0xc000a3a3c0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0214 00:34:57.062043    2366 log.go:172] (0xc000a64d10) Data frame received for 1\nI0214 00:34:57.062240    2366 log.go:172] (0xc000a64d10) (0xc000a3a3c0) Stream removed, broadcasting: 5\nI0214 00:34:57.062304    2366 log.go:172] (0xc000a2c500) (1) Data frame handling\nI0214 00:34:57.062320    2366 log.go:172] (0xc000a2c500) (1) Data frame sent\nI0214 00:34:57.062376    2366 log.go:172] (0xc000a64d10) (0xc000a3a000) Stream removed, broadcasting: 3\nI0214 00:34:57.062439    2366 log.go:172] (0xc000a64d10) (0xc000a2c500) Stream removed, broadcasting: 1\nI0214 00:34:57.062475    2366 log.go:172] (0xc000a64d10) Go away received\nI0214 00:34:57.063577    2366 log.go:172] (0xc000a64d10) (0xc000a2c500) Stream removed, broadcasting: 1\nI0214 00:34:57.063616    2366 log.go:172] (0xc000a64d10) (0xc000a3a000) Stream removed, broadcasting: 3\nI0214 00:34:57.063627    2366 log.go:172] (0xc000a64d10) (0xc000a3a3c0) Stream removed, broadcasting: 5\n"
Feb 14 00:34:57.074: INFO: stdout: ""
Feb 14 00:34:57.076: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3818 execpodlrftb -- /bin/sh -x -c nc -zv -t -w 2 10.96.57.45 80'
Feb 14 00:34:57.406: INFO: stderr: "I0214 00:34:57.234976    2384 log.go:172] (0xc0009e4000) (0xc0006b7d60) Create stream\nI0214 00:34:57.235114    2384 log.go:172] (0xc0009e4000) (0xc0006b7d60) Stream added, broadcasting: 1\nI0214 00:34:57.243136    2384 log.go:172] (0xc0009e4000) Reply frame received for 1\nI0214 00:34:57.243255    2384 log.go:172] (0xc0009e4000) (0xc000658820) Create stream\nI0214 00:34:57.243279    2384 log.go:172] (0xc0009e4000) (0xc000658820) Stream added, broadcasting: 3\nI0214 00:34:57.245044    2384 log.go:172] (0xc0009e4000) Reply frame received for 3\nI0214 00:34:57.245162    2384 log.go:172] (0xc0009e4000) (0xc00037d4a0) Create stream\nI0214 00:34:57.245180    2384 log.go:172] (0xc0009e4000) (0xc00037d4a0) Stream added, broadcasting: 5\nI0214 00:34:57.246807    2384 log.go:172] (0xc0009e4000) Reply frame received for 5\nI0214 00:34:57.307772    2384 log.go:172] (0xc0009e4000) Data frame received for 5\nI0214 00:34:57.307798    2384 log.go:172] (0xc00037d4a0) (5) Data frame handling\nI0214 00:34:57.307826    2384 log.go:172] (0xc00037d4a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.57.45 80\nConnection to 10.96.57.45 80 port [tcp/http] succeeded!\nI0214 00:34:57.394107    2384 log.go:172] (0xc0009e4000) Data frame received for 1\nI0214 00:34:57.394345    2384 log.go:172] (0xc0009e4000) (0xc00037d4a0) Stream removed, broadcasting: 5\nI0214 00:34:57.394473    2384 log.go:172] (0xc0006b7d60) (1) Data frame handling\nI0214 00:34:57.394532    2384 log.go:172] (0xc0006b7d60) (1) Data frame sent\nI0214 00:34:57.394682    2384 log.go:172] (0xc0009e4000) (0xc000658820) Stream removed, broadcasting: 3\nI0214 00:34:57.394750    2384 log.go:172] (0xc0009e4000) (0xc0006b7d60) Stream removed, broadcasting: 1\nI0214 00:34:57.394896    2384 log.go:172] (0xc0009e4000) Go away received\nI0214 00:34:57.396459    2384 log.go:172] (0xc0009e4000) (0xc0006b7d60) Stream removed, broadcasting: 1\nI0214 00:34:57.396484    2384 log.go:172] (0xc0009e4000) (0xc000658820) Stream removed, broadcasting: 3\nI0214 00:34:57.396501    2384 log.go:172] (0xc0009e4000) (0xc00037d4a0) Stream removed, broadcasting: 5\n"
Feb 14 00:34:57.406: INFO: stdout: ""
Feb 14 00:34:57.406: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3818 execpodlrftb -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 31620'
Feb 14 00:34:57.784: INFO: stderr: "I0214 00:34:57.544181    2406 log.go:172] (0xc000a9a4d0) (0xc000a82b40) Create stream\nI0214 00:34:57.544297    2406 log.go:172] (0xc000a9a4d0) (0xc000a82b40) Stream added, broadcasting: 1\nI0214 00:34:57.553125    2406 log.go:172] (0xc000a9a4d0) Reply frame received for 1\nI0214 00:34:57.553175    2406 log.go:172] (0xc000a9a4d0) (0xc000642820) Create stream\nI0214 00:34:57.553183    2406 log.go:172] (0xc000a9a4d0) (0xc000642820) Stream added, broadcasting: 3\nI0214 00:34:57.554887    2406 log.go:172] (0xc000a9a4d0) Reply frame received for 3\nI0214 00:34:57.554963    2406 log.go:172] (0xc000a9a4d0) (0xc0005514a0) Create stream\nI0214 00:34:57.554974    2406 log.go:172] (0xc000a9a4d0) (0xc0005514a0) Stream added, broadcasting: 5\nI0214 00:34:57.557583    2406 log.go:172] (0xc000a9a4d0) Reply frame received for 5\nI0214 00:34:57.639973    2406 log.go:172] (0xc000a9a4d0) Data frame received for 5\nI0214 00:34:57.640535    2406 log.go:172] (0xc0005514a0) (5) Data frame handling\nI0214 00:34:57.640637    2406 log.go:172] (0xc0005514a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 31620\nI0214 00:34:57.641557    2406 log.go:172] (0xc000a9a4d0) Data frame received for 5\nI0214 00:34:57.641575    2406 log.go:172] (0xc0005514a0) (5) Data frame handling\nI0214 00:34:57.641589    2406 log.go:172] (0xc0005514a0) (5) Data frame sent\nConnection to 10.96.2.250 31620 port [tcp/31620] succeeded!\nI0214 00:34:57.764470    2406 log.go:172] (0xc000a9a4d0) Data frame received for 1\nI0214 00:34:57.764567    2406 log.go:172] (0xc000a9a4d0) (0xc0005514a0) Stream removed, broadcasting: 5\nI0214 00:34:57.764618    2406 log.go:172] (0xc000a82b40) (1) Data frame handling\nI0214 00:34:57.764635    2406 log.go:172] (0xc000a82b40) (1) Data frame sent\nI0214 00:34:57.764678    2406 log.go:172] (0xc000a9a4d0) (0xc000642820) Stream removed, broadcasting: 3\nI0214 00:34:57.764750    2406 log.go:172] (0xc000a9a4d0) (0xc000a82b40) Stream removed, broadcasting: 1\nI0214 00:34:57.764792    2406 log.go:172] (0xc000a9a4d0) Go away received\nI0214 00:34:57.766138    2406 log.go:172] (0xc000a9a4d0) (0xc000a82b40) Stream removed, broadcasting: 1\nI0214 00:34:57.766160    2406 log.go:172] (0xc000a9a4d0) (0xc000642820) Stream removed, broadcasting: 3\nI0214 00:34:57.766171    2406 log.go:172] (0xc000a9a4d0) (0xc0005514a0) Stream removed, broadcasting: 5\n"
Feb 14 00:34:57.785: INFO: stdout: ""
Feb 14 00:34:57.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3818 execpodlrftb -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 31620'
Feb 14 00:34:58.194: INFO: stderr: "I0214 00:34:57.983113    2427 log.go:172] (0xc0009460b0) (0xc0003add60) Create stream\nI0214 00:34:57.983217    2427 log.go:172] (0xc0009460b0) (0xc0003add60) Stream added, broadcasting: 1\nI0214 00:34:57.991626    2427 log.go:172] (0xc0009460b0) Reply frame received for 1\nI0214 00:34:57.991728    2427 log.go:172] (0xc0009460b0) (0xc000714460) Create stream\nI0214 00:34:57.991743    2427 log.go:172] (0xc0009460b0) (0xc000714460) Stream added, broadcasting: 3\nI0214 00:34:57.994827    2427 log.go:172] (0xc0009460b0) Reply frame received for 3\nI0214 00:34:57.994946    2427 log.go:172] (0xc0009460b0) (0xc0007b8000) Create stream\nI0214 00:34:57.994997    2427 log.go:172] (0xc0009460b0) (0xc0007b8000) Stream added, broadcasting: 5\nI0214 00:34:57.997041    2427 log.go:172] (0xc0009460b0) Reply frame received for 5\nI0214 00:34:58.120698    2427 log.go:172] (0xc0009460b0) Data frame received for 5\nI0214 00:34:58.120792    2427 log.go:172] (0xc0007b8000) (5) Data frame handling\nI0214 00:34:58.120820    2427 log.go:172] (0xc0007b8000) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 31620\nConnection to 10.96.1.234 31620 port [tcp/31620] succeeded!\nI0214 00:34:58.181183    2427 log.go:172] (0xc0009460b0) Data frame received for 1\nI0214 00:34:58.181599    2427 log.go:172] (0xc0003add60) (1) Data frame handling\nI0214 00:34:58.181661    2427 log.go:172] (0xc0003add60) (1) Data frame sent\nI0214 00:34:58.181756    2427 log.go:172] (0xc0009460b0) (0xc0003add60) Stream removed, broadcasting: 1\nI0214 00:34:58.183231    2427 log.go:172] (0xc0009460b0) (0xc000714460) Stream removed, broadcasting: 3\nI0214 00:34:58.183339    2427 log.go:172] (0xc0009460b0) (0xc0007b8000) Stream removed, broadcasting: 5\nI0214 00:34:58.183390    2427 log.go:172] (0xc0009460b0) Go away received\nI0214 00:34:58.183512    2427 log.go:172] (0xc0009460b0) (0xc0003add60) Stream removed, broadcasting: 1\nI0214 00:34:58.183535    2427 log.go:172] (0xc0009460b0) (0xc000714460) Stream removed, broadcasting: 3\nI0214 00:34:58.183543    2427 log.go:172] (0xc0009460b0) (0xc0007b8000) Stream removed, broadcasting: 5\n"
Feb 14 00:34:58.194: INFO: stdout: ""
Feb 14 00:34:58.194: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:34:58.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3818" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:25.002 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":280,"completed":115,"skipped":1746,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:34:58.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 14 00:34:58.389: INFO: Waiting up to 5m0s for pod "pod-00316b1e-42bc-495d-9f23-7c7946f9cb96" in namespace "emptydir-4894" to be "success or failure"
Feb 14 00:34:58.402: INFO: Pod "pod-00316b1e-42bc-495d-9f23-7c7946f9cb96": Phase="Pending", Reason="", readiness=false. Elapsed: 13.007698ms
Feb 14 00:35:00.412: INFO: Pod "pod-00316b1e-42bc-495d-9f23-7c7946f9cb96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023287924s
Feb 14 00:35:02.421: INFO: Pod "pod-00316b1e-42bc-495d-9f23-7c7946f9cb96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032411358s
Feb 14 00:35:04.428: INFO: Pod "pod-00316b1e-42bc-495d-9f23-7c7946f9cb96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039377888s
Feb 14 00:35:07.251: INFO: Pod "pod-00316b1e-42bc-495d-9f23-7c7946f9cb96": Phase="Pending", Reason="", readiness=false. Elapsed: 8.862046851s
Feb 14 00:35:10.112: INFO: Pod "pod-00316b1e-42bc-495d-9f23-7c7946f9cb96": Phase="Pending", Reason="", readiness=false. Elapsed: 11.722899923s
Feb 14 00:35:12.131: INFO: Pod "pod-00316b1e-42bc-495d-9f23-7c7946f9cb96": Phase="Pending", Reason="", readiness=false. Elapsed: 13.742275872s
Feb 14 00:35:14.138: INFO: Pod "pod-00316b1e-42bc-495d-9f23-7c7946f9cb96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.749711951s
STEP: Saw pod success
Feb 14 00:35:14.139: INFO: Pod "pod-00316b1e-42bc-495d-9f23-7c7946f9cb96" satisfied condition "success or failure"
Feb 14 00:35:14.143: INFO: Trying to get logs from node jerma-node pod pod-00316b1e-42bc-495d-9f23-7c7946f9cb96 container test-container: 
STEP: delete the pod
Feb 14 00:35:14.224: INFO: Waiting for pod pod-00316b1e-42bc-495d-9f23-7c7946f9cb96 to disappear
Feb 14 00:35:14.234: INFO: Pod pod-00316b1e-42bc-495d-9f23-7c7946f9cb96 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:35:14.234: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-4894" for this suite.

• [SLOW TEST:16.045 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":116,"skipped":1749,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:35:14.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 00:35:14.516: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-9195
I0214 00:35:14.571991       9 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-9195, replica count: 1
I0214 00:35:15.624501       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 00:35:16.625344       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 00:35:17.626132       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 00:35:18.626740       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 00:35:19.627971       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 00:35:20.629106       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 00:35:21.629741       9 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 14 00:35:21.800: INFO: Created: latency-svc-qnslm
Feb 14 00:35:21.839: INFO: Got endpoints: latency-svc-qnslm [108.985157ms]
Feb 14 00:35:21.939: INFO: Created: latency-svc-x9nx9
Feb 14 00:35:21.943: INFO: Got endpoints: latency-svc-x9nx9 [103.600252ms]
Feb 14 00:35:21.970: INFO: Created: latency-svc-k6z4g
Feb 14 00:35:21.974: INFO: Got endpoints: latency-svc-k6z4g [132.130669ms]
Feb 14 00:35:22.047: INFO: Created: latency-svc-jnns2
Feb 14 00:35:22.067: INFO: Got endpoints: latency-svc-jnns2 [225.18357ms]
Feb 14 00:35:22.070: INFO: Created: latency-svc-w4p6f
Feb 14 00:35:22.089: INFO: Got endpoints: latency-svc-w4p6f [247.409601ms]
Feb 14 00:35:22.093: INFO: Created: latency-svc-cg7rf
Feb 14 00:35:22.107: INFO: Got endpoints: latency-svc-cg7rf [267.583532ms]
Feb 14 00:35:22.119: INFO: Created: latency-svc-vfr5k
Feb 14 00:35:22.230: INFO: Got endpoints: latency-svc-vfr5k [390.118355ms]
Feb 14 00:35:22.234: INFO: Created: latency-svc-kkhql
Feb 14 00:35:22.238: INFO: Got endpoints: latency-svc-kkhql [398.673316ms]
Feb 14 00:35:22.267: INFO: Created: latency-svc-ct2tg
Feb 14 00:35:22.274: INFO: Got endpoints: latency-svc-ct2tg [434.628451ms]
Feb 14 00:35:22.300: INFO: Created: latency-svc-8gghx
Feb 14 00:35:22.317: INFO: Got endpoints: latency-svc-8gghx [475.885396ms]
Feb 14 00:35:22.387: INFO: Created: latency-svc-wxn65
Feb 14 00:35:22.392: INFO: Got endpoints: latency-svc-wxn65 [552.61083ms]
Feb 14 00:35:22.418: INFO: Created: latency-svc-gx7fn
Feb 14 00:35:22.427: INFO: Got endpoints: latency-svc-gx7fn [586.215003ms]
Feb 14 00:35:22.456: INFO: Created: latency-svc-hf6bf
Feb 14 00:35:22.457: INFO: Got endpoints: latency-svc-hf6bf [615.15704ms]
Feb 14 00:35:22.520: INFO: Created: latency-svc-l2f7g
Feb 14 00:35:22.532: INFO: Got endpoints: latency-svc-l2f7g [691.01835ms]
Feb 14 00:35:22.582: INFO: Created: latency-svc-nhmj6
Feb 14 00:35:22.583: INFO: Got endpoints: latency-svc-nhmj6 [742.683711ms]
Feb 14 00:35:22.673: INFO: Created: latency-svc-bn8w2
Feb 14 00:35:22.682: INFO: Got endpoints: latency-svc-bn8w2 [840.876021ms]
Feb 14 00:35:22.745: INFO: Created: latency-svc-5mnfb
Feb 14 00:35:22.764: INFO: Got endpoints: latency-svc-5mnfb [820.000213ms]
Feb 14 00:35:22.828: INFO: Created: latency-svc-sxw6q
Feb 14 00:35:22.832: INFO: Got endpoints: latency-svc-sxw6q [858.135451ms]
Feb 14 00:35:22.857: INFO: Created: latency-svc-ztgt5
Feb 14 00:35:22.880: INFO: Created: latency-svc-fq6sg
Feb 14 00:35:22.880: INFO: Got endpoints: latency-svc-ztgt5 [812.679422ms]
Feb 14 00:35:22.886: INFO: Got endpoints: latency-svc-fq6sg [796.521908ms]
Feb 14 00:35:22.906: INFO: Created: latency-svc-4m5hn
Feb 14 00:35:22.922: INFO: Got endpoints: latency-svc-4m5hn [814.259296ms]
Feb 14 00:35:22.986: INFO: Created: latency-svc-q4hj6
Feb 14 00:35:22.990: INFO: Got endpoints: latency-svc-q4hj6 [759.533189ms]
Feb 14 00:35:23.067: INFO: Created: latency-svc-cjs6p
Feb 14 00:35:23.089: INFO: Created: latency-svc-przh5
Feb 14 00:35:23.217: INFO: Got endpoints: latency-svc-cjs6p [979.005916ms]
Feb 14 00:35:23.218: INFO: Got endpoints: latency-svc-przh5 [943.594008ms]
Feb 14 00:35:23.252: INFO: Created: latency-svc-nldvj
Feb 14 00:35:23.257: INFO: Got endpoints: latency-svc-nldvj [939.733212ms]
Feb 14 00:35:23.285: INFO: Created: latency-svc-wpkpx
Feb 14 00:35:23.291: INFO: Got endpoints: latency-svc-wpkpx [898.739524ms]
Feb 14 00:35:23.358: INFO: Created: latency-svc-f6phv
Feb 14 00:35:23.366: INFO: Got endpoints: latency-svc-f6phv [939.301201ms]
Feb 14 00:35:23.394: INFO: Created: latency-svc-pgrb7
Feb 14 00:35:23.402: INFO: Got endpoints: latency-svc-pgrb7 [944.781769ms]
Feb 14 00:35:23.425: INFO: Created: latency-svc-cgvw7
Feb 14 00:35:23.510: INFO: Got endpoints: latency-svc-cgvw7 [977.766775ms]
Feb 14 00:35:23.518: INFO: Created: latency-svc-d4z2c
Feb 14 00:35:23.519: INFO: Got endpoints: latency-svc-d4z2c [936.336457ms]
Feb 14 00:35:23.554: INFO: Created: latency-svc-hc9jk
Feb 14 00:35:23.566: INFO: Got endpoints: latency-svc-hc9jk [884.207832ms]
Feb 14 00:35:23.603: INFO: Created: latency-svc-tgsw5
Feb 14 00:35:23.609: INFO: Got endpoints: latency-svc-tgsw5 [844.629442ms]
Feb 14 00:35:23.727: INFO: Created: latency-svc-k4jrn
Feb 14 00:35:23.923: INFO: Got endpoints: latency-svc-k4jrn [1.090899938s]
Feb 14 00:35:23.960: INFO: Created: latency-svc-ptfdm
Feb 14 00:35:24.013: INFO: Got endpoints: latency-svc-ptfdm [1.132968742s]
Feb 14 00:35:24.015: INFO: Created: latency-svc-z8qrw
Feb 14 00:35:25.024: INFO: Got endpoints: latency-svc-z8qrw [2.137471247s]
Feb 14 00:35:25.070: INFO: Created: latency-svc-6fzb4
Feb 14 00:35:25.087: INFO: Got endpoints: latency-svc-6fzb4 [2.165481314s]
Feb 14 00:35:25.965: INFO: Created: latency-svc-phcvb
Feb 14 00:35:25.973: INFO: Got endpoints: latency-svc-phcvb [2.982719716s]
Feb 14 00:35:26.160: INFO: Created: latency-svc-6fbk6
Feb 14 00:35:26.166: INFO: Got endpoints: latency-svc-6fbk6 [2.948712916s]
Feb 14 00:35:26.344: INFO: Created: latency-svc-s2jsl
Feb 14 00:35:26.379: INFO: Got endpoints: latency-svc-s2jsl [3.160892067s]
Feb 14 00:35:26.415: INFO: Created: latency-svc-dv7pn
Feb 14 00:35:26.423: INFO: Got endpoints: latency-svc-dv7pn [3.166344618s]
Feb 14 00:35:26.509: INFO: Created: latency-svc-g7tqd
Feb 14 00:35:26.546: INFO: Created: latency-svc-v27b7
Feb 14 00:35:26.547: INFO: Got endpoints: latency-svc-g7tqd [3.25536061s]
Feb 14 00:35:26.559: INFO: Got endpoints: latency-svc-v27b7 [3.192666717s]
Feb 14 00:35:26.666: INFO: Created: latency-svc-w97dv
Feb 14 00:35:26.710: INFO: Got endpoints: latency-svc-w97dv [3.307610522s]
Feb 14 00:35:26.713: INFO: Created: latency-svc-2d98t
Feb 14 00:35:26.714: INFO: Got endpoints: latency-svc-2d98t [3.204134786s]
Feb 14 00:35:26.751: INFO: Created: latency-svc-xqxqr
Feb 14 00:35:26.817: INFO: Got endpoints: latency-svc-xqxqr [3.297193605s]
Feb 14 00:35:26.819: INFO: Created: latency-svc-b2hmk
Feb 14 00:35:26.856: INFO: Got endpoints: latency-svc-b2hmk [3.289873689s]
Feb 14 00:35:26.883: INFO: Created: latency-svc-d47kt
Feb 14 00:35:26.892: INFO: Got endpoints: latency-svc-d47kt [3.283227809s]
Feb 14 00:35:27.003: INFO: Created: latency-svc-hfbfn
Feb 14 00:35:27.034: INFO: Created: latency-svc-bc24x
Feb 14 00:35:27.038: INFO: Got endpoints: latency-svc-hfbfn [3.113924717s]
Feb 14 00:35:27.044: INFO: Got endpoints: latency-svc-bc24x [3.030558418s]
Feb 14 00:35:27.086: INFO: Created: latency-svc-qjt4z
Feb 14 00:35:27.095: INFO: Got endpoints: latency-svc-qjt4z [2.069896951s]
Feb 14 00:35:27.152: INFO: Created: latency-svc-rhd2b
Feb 14 00:35:27.175: INFO: Got endpoints: latency-svc-rhd2b [2.087637062s]
Feb 14 00:35:27.177: INFO: Created: latency-svc-zwqtx
Feb 14 00:35:27.212: INFO: Created: latency-svc-vvj6g
Feb 14 00:35:27.212: INFO: Got endpoints: latency-svc-zwqtx [1.238557253s]
Feb 14 00:35:27.232: INFO: Got endpoints: latency-svc-vvj6g [1.065557137s]
Feb 14 00:35:27.259: INFO: Created: latency-svc-vkjlw
Feb 14 00:35:27.312: INFO: Got endpoints: latency-svc-vkjlw [932.842703ms]
Feb 14 00:35:27.329: INFO: Created: latency-svc-kq5z7
Feb 14 00:35:27.330: INFO: Got endpoints: latency-svc-kq5z7 [905.928838ms]
Feb 14 00:35:27.364: INFO: Created: latency-svc-97zsz
Feb 14 00:35:27.368: INFO: Got endpoints: latency-svc-97zsz [820.966104ms]
Feb 14 00:35:27.402: INFO: Created: latency-svc-z5bwk
Feb 14 00:35:27.409: INFO: Got endpoints: latency-svc-z5bwk [849.154181ms]
Feb 14 00:35:27.411: INFO: Created: latency-svc-tz6kw
Feb 14 00:35:27.465: INFO: Got endpoints: latency-svc-tz6kw [754.503154ms]
Feb 14 00:35:27.476: INFO: Created: latency-svc-nk7dd
Feb 14 00:35:27.496: INFO: Got endpoints: latency-svc-nk7dd [781.812059ms]
Feb 14 00:35:27.498: INFO: Created: latency-svc-d7slt
Feb 14 00:35:27.510: INFO: Got endpoints: latency-svc-d7slt [692.847415ms]
Feb 14 00:35:27.529: INFO: Created: latency-svc-vxdg4
Feb 14 00:35:27.535: INFO: Got endpoints: latency-svc-vxdg4 [678.506731ms]
Feb 14 00:35:27.554: INFO: Created: latency-svc-6zrsg
Feb 14 00:35:27.608: INFO: Got endpoints: latency-svc-6zrsg [715.605293ms]
Feb 14 00:35:27.617: INFO: Created: latency-svc-68dxb
Feb 14 00:35:27.623: INFO: Got endpoints: latency-svc-68dxb [585.148474ms]
Feb 14 00:35:27.657: INFO: Created: latency-svc-f4cgs
Feb 14 00:35:27.669: INFO: Got endpoints: latency-svc-f4cgs [625.160389ms]
Feb 14 00:35:27.695: INFO: Created: latency-svc-zlgmn
Feb 14 00:35:27.761: INFO: Got endpoints: latency-svc-zlgmn [665.985306ms]
Feb 14 00:35:27.800: INFO: Created: latency-svc-kfwg7
Feb 14 00:35:27.800: INFO: Created: latency-svc-qzk7h
Feb 14 00:35:27.801: INFO: Got endpoints: latency-svc-qzk7h [625.321621ms]
Feb 14 00:35:27.820: INFO: Got endpoints: latency-svc-kfwg7 [608.255699ms]
Feb 14 00:35:27.842: INFO: Created: latency-svc-nqb5b
Feb 14 00:35:27.907: INFO: Created: latency-svc-x8ksj
Feb 14 00:35:27.910: INFO: Got endpoints: latency-svc-nqb5b [678.536519ms]
Feb 14 00:35:27.969: INFO: Got endpoints: latency-svc-x8ksj [657.384112ms]
Feb 14 00:35:27.975: INFO: Created: latency-svc-6qw2v
Feb 14 00:35:28.045: INFO: Got endpoints: latency-svc-6qw2v [715.52116ms]
Feb 14 00:35:28.049: INFO: Created: latency-svc-tdsbv
Feb 14 00:35:28.069: INFO: Got endpoints: latency-svc-tdsbv [701.197889ms]
Feb 14 00:35:28.100: INFO: Created: latency-svc-7zcs4
Feb 14 00:35:28.127: INFO: Got endpoints: latency-svc-7zcs4 [717.565452ms]
Feb 14 00:35:28.127: INFO: Created: latency-svc-597p5
Feb 14 00:35:28.131: INFO: Got endpoints: latency-svc-597p5 [666.191454ms]
Feb 14 00:35:28.223: INFO: Created: latency-svc-lzvqt
Feb 14 00:35:28.230: INFO: Got endpoints: latency-svc-lzvqt [733.590656ms]
Feb 14 00:35:28.258: INFO: Created: latency-svc-cn9bs
Feb 14 00:35:28.269: INFO: Got endpoints: latency-svc-cn9bs [759.461936ms]
Feb 14 00:35:28.289: INFO: Created: latency-svc-zg5rv
Feb 14 00:35:28.301: INFO: Got endpoints: latency-svc-zg5rv [765.453994ms]
Feb 14 00:35:28.324: INFO: Created: latency-svc-2psgv
Feb 14 00:35:28.388: INFO: Got endpoints: latency-svc-2psgv [780.40703ms]
Feb 14 00:35:28.395: INFO: Created: latency-svc-h78zz
Feb 14 00:35:28.418: INFO: Got endpoints: latency-svc-h78zz [794.565165ms]
Feb 14 00:35:28.445: INFO: Created: latency-svc-n8khl
Feb 14 00:35:28.460: INFO: Got endpoints: latency-svc-n8khl [790.975264ms]
Feb 14 00:35:28.532: INFO: Created: latency-svc-zxwrx
Feb 14 00:35:28.554: INFO: Got endpoints: latency-svc-zxwrx [792.911126ms]
Feb 14 00:35:28.588: INFO: Created: latency-svc-jxn5p
Feb 14 00:35:28.597: INFO: Got endpoints: latency-svc-jxn5p [796.445767ms]
Feb 14 00:35:28.635: INFO: Created: latency-svc-ndrwf
Feb 14 00:35:28.698: INFO: Got endpoints: latency-svc-ndrwf [876.936783ms]
Feb 14 00:35:28.706: INFO: Created: latency-svc-h78h6
Feb 14 00:35:28.734: INFO: Got endpoints: latency-svc-h78h6 [823.501967ms]
Feb 14 00:35:28.785: INFO: Created: latency-svc-ndr5n
Feb 14 00:35:28.882: INFO: Got endpoints: latency-svc-ndr5n [911.840299ms]
Feb 14 00:35:28.898: INFO: Created: latency-svc-w2svk
Feb 14 00:35:28.914: INFO: Got endpoints: latency-svc-w2svk [868.48285ms]
Feb 14 00:35:28.956: INFO: Created: latency-svc-rs7k8
Feb 14 00:35:29.090: INFO: Got endpoints: latency-svc-rs7k8 [1.020410974s]
Feb 14 00:35:29.092: INFO: Created: latency-svc-9qmnn
Feb 14 00:35:29.098: INFO: Got endpoints: latency-svc-9qmnn [970.997591ms]
Feb 14 00:35:29.141: INFO: Created: latency-svc-mrhnt
Feb 14 00:35:29.168: INFO: Got endpoints: latency-svc-mrhnt [1.036853067s]
Feb 14 00:35:29.302: INFO: Created: latency-svc-p22vh
Feb 14 00:35:29.302: INFO: Got endpoints: latency-svc-p22vh [1.071253477s]
Feb 14 00:35:29.365: INFO: Created: latency-svc-cwqsd
Feb 14 00:35:29.377: INFO: Got endpoints: latency-svc-cwqsd [1.107309717s]
Feb 14 00:35:29.502: INFO: Created: latency-svc-cbt4z
Feb 14 00:35:29.540: INFO: Got endpoints: latency-svc-cbt4z [1.238794881s]
Feb 14 00:35:29.546: INFO: Created: latency-svc-xxdlb
Feb 14 00:35:29.549: INFO: Got endpoints: latency-svc-xxdlb [1.16075163s]
Feb 14 00:35:29.577: INFO: Created: latency-svc-pzltn
Feb 14 00:35:29.583: INFO: Got endpoints: latency-svc-pzltn [1.164505944s]
Feb 14 00:35:29.648: INFO: Created: latency-svc-8xvrq
Feb 14 00:35:29.656: INFO: Got endpoints: latency-svc-8xvrq [1.19564532s]
Feb 14 00:35:29.682: INFO: Created: latency-svc-vpj82
Feb 14 00:35:29.707: INFO: Got endpoints: latency-svc-vpj82 [1.152538631s]
Feb 14 00:35:29.717: INFO: Created: latency-svc-c552v
Feb 14 00:35:29.721: INFO: Got endpoints: latency-svc-c552v [1.123700223s]
Feb 14 00:35:29.811: INFO: Created: latency-svc-5fjzq
Feb 14 00:35:29.815: INFO: Got endpoints: latency-svc-5fjzq [1.116698796s]
Feb 14 00:35:29.869: INFO: Created: latency-svc-zltkn
Feb 14 00:35:29.876: INFO: Got endpoints: latency-svc-zltkn [1.141395016s]
Feb 14 00:35:30.010: INFO: Created: latency-svc-2dhhq
Feb 14 00:35:30.012: INFO: Got endpoints: latency-svc-2dhhq [1.129659373s]
Feb 14 00:35:30.042: INFO: Created: latency-svc-mvlxb
Feb 14 00:35:30.067: INFO: Got endpoints: latency-svc-mvlxb [1.152558914s]
Feb 14 00:35:30.098: INFO: Created: latency-svc-dfcpm
Feb 14 00:35:30.147: INFO: Got endpoints: latency-svc-dfcpm [1.056646752s]
Feb 14 00:35:30.153: INFO: Created: latency-svc-cbxdj
Feb 14 00:35:30.160: INFO: Got endpoints: latency-svc-cbxdj [1.062326946s]
Feb 14 00:35:30.188: INFO: Created: latency-svc-zkpmf
Feb 14 00:35:30.204: INFO: Got endpoints: latency-svc-zkpmf [1.035650274s]
Feb 14 00:35:30.224: INFO: Created: latency-svc-xdjcm
Feb 14 00:35:30.234: INFO: Got endpoints: latency-svc-xdjcm [932.347557ms]
Feb 14 00:35:30.368: INFO: Created: latency-svc-fjmmp
Feb 14 00:35:30.433: INFO: Got endpoints: latency-svc-fjmmp [1.055552912s]
Feb 14 00:35:30.456: INFO: Created: latency-svc-fvjkq
Feb 14 00:35:30.460: INFO: Got endpoints: latency-svc-fvjkq [919.672092ms]
Feb 14 00:35:30.515: INFO: Created: latency-svc-jn5pn
Feb 14 00:35:30.533: INFO: Got endpoints: latency-svc-jn5pn [983.177246ms]
Feb 14 00:35:30.538: INFO: Created: latency-svc-p57dq
Feb 14 00:35:30.549: INFO: Got endpoints: latency-svc-p57dq [965.934967ms]
Feb 14 00:35:30.571: INFO: Created: latency-svc-65s5z
Feb 14 00:35:30.629: INFO: Got endpoints: latency-svc-65s5z [973.051336ms]
Feb 14 00:35:30.639: INFO: Created: latency-svc-nch9b
Feb 14 00:35:30.659: INFO: Created: latency-svc-6slhm
Feb 14 00:35:30.660: INFO: Got endpoints: latency-svc-nch9b [952.629217ms]
Feb 14 00:35:30.678: INFO: Got endpoints: latency-svc-6slhm [956.858451ms]
Feb 14 00:35:30.722: INFO: Created: latency-svc-zpcjx
Feb 14 00:35:30.769: INFO: Got endpoints: latency-svc-zpcjx [954.111566ms]
Feb 14 00:35:30.776: INFO: Created: latency-svc-zvdtf
Feb 14 00:35:30.801: INFO: Got endpoints: latency-svc-zvdtf [925.23418ms]
Feb 14 00:35:30.825: INFO: Created: latency-svc-8w6tj
Feb 14 00:35:30.831: INFO: Got endpoints: latency-svc-8w6tj [818.309432ms]
Feb 14 00:35:30.861: INFO: Created: latency-svc-265vd
Feb 14 00:35:30.866: INFO: Got endpoints: latency-svc-265vd [799.057637ms]
Feb 14 00:35:30.904: INFO: Created: latency-svc-8nm94
Feb 14 00:35:30.961: INFO: Got endpoints: latency-svc-8nm94 [813.004881ms]
Feb 14 00:35:30.963: INFO: Created: latency-svc-kjf7x
Feb 14 00:35:30.972: INFO: Got endpoints: latency-svc-kjf7x [811.738981ms]
Feb 14 00:35:30.996: INFO: Created: latency-svc-h95qt
Feb 14 00:35:31.069: INFO: Got endpoints: latency-svc-h95qt [865.342371ms]
Feb 14 00:35:31.080: INFO: Created: latency-svc-vnfcm
Feb 14 00:35:31.092: INFO: Got endpoints: latency-svc-vnfcm [858.117636ms]
Feb 14 00:35:31.133: INFO: Created: latency-svc-6tgrh
Feb 14 00:35:31.135: INFO: Got endpoints: latency-svc-6tgrh [702.242622ms]
Feb 14 00:35:31.194: INFO: Created: latency-svc-mkmvf
Feb 14 00:35:31.195: INFO: Got endpoints: latency-svc-mkmvf [735.250129ms]
Feb 14 00:35:31.236: INFO: Created: latency-svc-smgwl
Feb 14 00:35:31.240: INFO: Got endpoints: latency-svc-smgwl [706.378551ms]
Feb 14 00:35:31.270: INFO: Created: latency-svc-kbfsd
Feb 14 00:35:31.288: INFO: Got endpoints: latency-svc-kbfsd [738.627325ms]
Feb 14 00:35:31.350: INFO: Created: latency-svc-q85tl
Feb 14 00:35:31.356: INFO: Got endpoints: latency-svc-q85tl [725.860881ms]
Feb 14 00:35:31.388: INFO: Created: latency-svc-4fx6d
Feb 14 00:35:31.398: INFO: Got endpoints: latency-svc-4fx6d [738.050602ms]
Feb 14 00:35:31.423: INFO: Created: latency-svc-rvvvp
Feb 14 00:35:31.491: INFO: Got endpoints: latency-svc-rvvvp [811.399817ms]
Feb 14 00:35:31.500: INFO: Created: latency-svc-645hw
Feb 14 00:35:31.503: INFO: Got endpoints: latency-svc-645hw [733.464423ms]
Feb 14 00:35:31.516: INFO: Created: latency-svc-d7lmt
Feb 14 00:35:31.538: INFO: Got endpoints: latency-svc-d7lmt [736.365139ms]
Feb 14 00:35:31.542: INFO: Created: latency-svc-tqfwd
Feb 14 00:35:31.560: INFO: Got endpoints: latency-svc-tqfwd [728.711369ms]
Feb 14 00:35:31.564: INFO: Created: latency-svc-jp9hn
Feb 14 00:35:31.590: INFO: Created: latency-svc-bhsbb
Feb 14 00:35:31.590: INFO: Got endpoints: latency-svc-jp9hn [724.147448ms]
Feb 14 00:35:31.657: INFO: Got endpoints: latency-svc-bhsbb [696.600717ms]
Feb 14 00:35:31.661: INFO: Created: latency-svc-2wqn5
Feb 14 00:35:31.663: INFO: Got endpoints: latency-svc-2wqn5 [690.109803ms]
Feb 14 00:35:31.689: INFO: Created: latency-svc-t59jr
Feb 14 00:35:31.691: INFO: Got endpoints: latency-svc-t59jr [621.656741ms]
Feb 14 00:35:31.713: INFO: Created: latency-svc-ctqx2
Feb 14 00:35:31.716: INFO: Got endpoints: latency-svc-ctqx2 [53.10381ms]
Feb 14 00:35:31.775: INFO: Created: latency-svc-btjgz
Feb 14 00:35:31.823: INFO: Created: latency-svc-j9nc8
Feb 14 00:35:31.824: INFO: Got endpoints: latency-svc-btjgz [730.952579ms]
Feb 14 00:35:31.843: INFO: Got endpoints: latency-svc-j9nc8 [707.531476ms]
Feb 14 00:35:31.944: INFO: Created: latency-svc-524q8
Feb 14 00:35:31.979: INFO: Got endpoints: latency-svc-524q8 [783.68914ms]
Feb 14 00:35:31.982: INFO: Created: latency-svc-fdfk9
Feb 14 00:35:31.985: INFO: Got endpoints: latency-svc-fdfk9 [745.390512ms]
Feb 14 00:35:32.009: INFO: Created: latency-svc-9ksgv
Feb 14 00:35:32.093: INFO: Got endpoints: latency-svc-9ksgv [804.918203ms]
Feb 14 00:35:32.096: INFO: Created: latency-svc-25kzx
Feb 14 00:35:32.107: INFO: Got endpoints: latency-svc-25kzx [750.687228ms]
Feb 14 00:35:32.128: INFO: Created: latency-svc-9rr9t
Feb 14 00:35:32.144: INFO: Got endpoints: latency-svc-9rr9t [745.607226ms]
Feb 14 00:35:32.154: INFO: Created: latency-svc-fxgkl
Feb 14 00:35:32.176: INFO: Got endpoints: latency-svc-fxgkl [685.477536ms]
Feb 14 00:35:32.228: INFO: Created: latency-svc-cn89g
Feb 14 00:35:32.240: INFO: Got endpoints: latency-svc-cn89g [737.187242ms]
Feb 14 00:35:32.265: INFO: Created: latency-svc-x4m6b
Feb 14 00:35:32.287: INFO: Got endpoints: latency-svc-x4m6b [748.42535ms]
Feb 14 00:35:32.372: INFO: Created: latency-svc-mtkpk
Feb 14 00:35:32.402: INFO: Got endpoints: latency-svc-mtkpk [842.439786ms]
Feb 14 00:35:32.406: INFO: Created: latency-svc-bnj4h
Feb 14 00:35:32.429: INFO: Got endpoints: latency-svc-bnj4h [838.924885ms]
Feb 14 00:35:32.463: INFO: Created: latency-svc-bsz8p
Feb 14 00:35:32.465: INFO: Got endpoints: latency-svc-bsz8p [806.929048ms]
Feb 14 00:35:32.536: INFO: Created: latency-svc-nhb7p
Feb 14 00:35:32.543: INFO: Got endpoints: latency-svc-nhb7p [851.886458ms]
Feb 14 00:35:32.569: INFO: Created: latency-svc-wq9vt
Feb 14 00:35:32.575: INFO: Got endpoints: latency-svc-wq9vt [858.934949ms]
Feb 14 00:35:32.596: INFO: Created: latency-svc-gqlc6
Feb 14 00:35:32.602: INFO: Got endpoints: latency-svc-gqlc6 [777.72461ms]
Feb 14 00:35:32.619: INFO: Created: latency-svc-56csd
Feb 14 00:35:32.664: INFO: Got endpoints: latency-svc-56csd [820.955257ms]
Feb 14 00:35:32.677: INFO: Created: latency-svc-q62tl
Feb 14 00:35:32.686: INFO: Got endpoints: latency-svc-q62tl [706.446489ms]
Feb 14 00:35:32.731: INFO: Created: latency-svc-h92zq
Feb 14 00:35:32.735: INFO: Got endpoints: latency-svc-h92zq [749.553967ms]
Feb 14 00:35:32.763: INFO: Created: latency-svc-7fww6
Feb 14 00:35:32.820: INFO: Got endpoints: latency-svc-7fww6 [726.245958ms]
Feb 14 00:35:32.853: INFO: Created: latency-svc-z59j4
Feb 14 00:35:32.859: INFO: Got endpoints: latency-svc-z59j4 [752.07062ms]
Feb 14 00:35:32.884: INFO: Created: latency-svc-q56g4
Feb 14 00:35:32.885: INFO: Got endpoints: latency-svc-q56g4 [740.13922ms]
Feb 14 00:35:32.903: INFO: Created: latency-svc-hwlf7
Feb 14 00:35:32.909: INFO: Got endpoints: latency-svc-hwlf7 [732.140542ms]
Feb 14 00:35:33.001: INFO: Created: latency-svc-x5vvb
Feb 14 00:35:33.027: INFO: Created: latency-svc-vhwh9
Feb 14 00:35:33.029: INFO: Got endpoints: latency-svc-x5vvb [788.142756ms]
Feb 14 00:35:33.036: INFO: Got endpoints: latency-svc-vhwh9 [749.549738ms]
Feb 14 00:35:33.061: INFO: Created: latency-svc-kh465
Feb 14 00:35:33.072: INFO: Got endpoints: latency-svc-kh465 [669.441415ms]
Feb 14 00:35:33.157: INFO: Created: latency-svc-ldrdk
Feb 14 00:35:33.165: INFO: Got endpoints: latency-svc-ldrdk [736.085275ms]
Feb 14 00:35:33.179: INFO: Created: latency-svc-vzdgv
Feb 14 00:35:33.190: INFO: Got endpoints: latency-svc-vzdgv [724.949318ms]
Feb 14 00:35:33.209: INFO: Created: latency-svc-zcpd5
Feb 14 00:35:33.223: INFO: Got endpoints: latency-svc-zcpd5 [679.422758ms]
Feb 14 00:35:33.249: INFO: Created: latency-svc-q7zw6
Feb 14 00:35:33.324: INFO: Got endpoints: latency-svc-q7zw6 [749.415902ms]
Feb 14 00:35:33.354: INFO: Created: latency-svc-xgmmt
Feb 14 00:35:33.354: INFO: Created: latency-svc-r2ttt
Feb 14 00:35:33.392: INFO: Created: latency-svc-26vdc
Feb 14 00:35:33.392: INFO: Got endpoints: latency-svc-xgmmt [727.821388ms]
Feb 14 00:35:33.396: INFO: Got endpoints: latency-svc-r2ttt [794.228485ms]
Feb 14 00:35:33.485: INFO: Got endpoints: latency-svc-26vdc [798.959092ms]
Feb 14 00:35:33.491: INFO: Created: latency-svc-7nbsh
Feb 14 00:35:33.495: INFO: Got endpoints: latency-svc-7nbsh [759.56003ms]
Feb 14 00:35:33.515: INFO: Created: latency-svc-4cjst
Feb 14 00:35:33.521: INFO: Got endpoints: latency-svc-4cjst [701.362784ms]
Feb 14 00:35:33.567: INFO: Created: latency-svc-6s8hs
Feb 14 00:35:33.575: INFO: Got endpoints: latency-svc-6s8hs [715.077479ms]
Feb 14 00:35:33.646: INFO: Created: latency-svc-ngkgk
Feb 14 00:35:33.654: INFO: Got endpoints: latency-svc-ngkgk [769.642803ms]
Feb 14 00:35:33.675: INFO: Created: latency-svc-dsl29
Feb 14 00:35:33.684: INFO: Got endpoints: latency-svc-dsl29 [775.11808ms]
Feb 14 00:35:33.712: INFO: Created: latency-svc-xr2h4
Feb 14 00:35:33.735: INFO: Got endpoints: latency-svc-xr2h4 [705.79208ms]
Feb 14 00:35:33.808: INFO: Created: latency-svc-z4lqc
Feb 14 00:35:33.844: INFO: Got endpoints: latency-svc-z4lqc [807.607709ms]
Feb 14 00:35:33.845: INFO: Created: latency-svc-cbcqx
Feb 14 00:35:33.887: INFO: Got endpoints: latency-svc-cbcqx [814.792799ms]
Feb 14 00:35:33.890: INFO: Created: latency-svc-cqjxt
Feb 14 00:35:33.958: INFO: Got endpoints: latency-svc-cqjxt [792.508907ms]
Feb 14 00:35:33.991: INFO: Created: latency-svc-v4hfw
Feb 14 00:35:33.997: INFO: Got endpoints: latency-svc-v4hfw [807.318742ms]
Feb 14 00:35:34.063: INFO: Created: latency-svc-wkrvs
Feb 14 00:35:34.126: INFO: Got endpoints: latency-svc-wkrvs [903.548777ms]
Feb 14 00:35:34.142: INFO: Created: latency-svc-bmvc2
Feb 14 00:35:34.142: INFO: Got endpoints: latency-svc-bmvc2 [817.424844ms]
Feb 14 00:35:34.179: INFO: Created: latency-svc-58rbw
Feb 14 00:35:34.209: INFO: Got endpoints: latency-svc-58rbw [816.314666ms]
Feb 14 00:35:34.210: INFO: Created: latency-svc-dqqkd
Feb 14 00:35:34.220: INFO: Got endpoints: latency-svc-dqqkd [824.022752ms]
Feb 14 00:35:34.316: INFO: Created: latency-svc-ccv2r
Feb 14 00:35:34.329: INFO: Got endpoints: latency-svc-ccv2r [844.208367ms]
Feb 14 00:35:34.354: INFO: Created: latency-svc-smnfc
Feb 14 00:35:34.358: INFO: Got endpoints: latency-svc-smnfc [863.715669ms]
Feb 14 00:35:34.377: INFO: Created: latency-svc-nl9rl
Feb 14 00:35:34.385: INFO: Got endpoints: latency-svc-nl9rl [863.996797ms]
Feb 14 00:35:34.404: INFO: Created: latency-svc-dn2p7
Feb 14 00:35:34.441: INFO: Got endpoints: latency-svc-dn2p7 [866.128616ms]
Feb 14 00:35:34.442: INFO: Created: latency-svc-xzt5j
Feb 14 00:35:34.484: INFO: Got endpoints: latency-svc-xzt5j [829.284171ms]
Feb 14 00:35:34.524: INFO: Created: latency-svc-vcpzs
Feb 14 00:35:34.530: INFO: Got endpoints: latency-svc-vcpzs [846.232039ms]
Feb 14 00:35:34.582: INFO: Created: latency-svc-59hg4
Feb 14 00:35:34.595: INFO: Got endpoints: latency-svc-59hg4 [860.113264ms]
Feb 14 00:35:34.616: INFO: Created: latency-svc-jkr8m
Feb 14 00:35:34.627: INFO: Got endpoints: latency-svc-jkr8m [782.559515ms]
Feb 14 00:35:34.651: INFO: Created: latency-svc-tkbsj
Feb 14 00:35:34.665: INFO: Got endpoints: latency-svc-tkbsj [777.119332ms]
Feb 14 00:35:34.714: INFO: Created: latency-svc-dwkmm
Feb 14 00:35:34.718: INFO: Got endpoints: latency-svc-dwkmm [759.653955ms]
Feb 14 00:35:34.760: INFO: Created: latency-svc-x92k9
Feb 14 00:35:34.780: INFO: Got endpoints: latency-svc-x92k9 [783.237723ms]
Feb 14 00:35:34.794: INFO: Created: latency-svc-nmnm5
Feb 14 00:35:34.804: INFO: Got endpoints: latency-svc-nmnm5 [676.87962ms]
Feb 14 00:35:34.842: INFO: Created: latency-svc-khrks
Feb 14 00:35:34.842: INFO: Got endpoints: latency-svc-khrks [699.988316ms]
Feb 14 00:35:34.874: INFO: Created: latency-svc-l49dd
Feb 14 00:35:34.887: INFO: Got endpoints: latency-svc-l49dd [677.815041ms]
Feb 14 00:35:34.908: INFO: Created: latency-svc-mrq8x
Feb 14 00:35:35.044: INFO: Created: latency-svc-v5qdc
Feb 14 00:35:35.051: INFO: Got endpoints: latency-svc-mrq8x [830.575987ms]
Feb 14 00:35:35.069: INFO: Got endpoints: latency-svc-v5qdc [739.118012ms]
Feb 14 00:35:35.076: INFO: Created: latency-svc-vld5j
Feb 14 00:35:35.094: INFO: Got endpoints: latency-svc-vld5j [735.763368ms]
Feb 14 00:35:35.131: INFO: Created: latency-svc-dj2qk
Feb 14 00:35:35.141: INFO: Got endpoints: latency-svc-dj2qk [755.177798ms]
Feb 14 00:35:35.182: INFO: Created: latency-svc-qdrs7
Feb 14 00:35:35.194: INFO: Got endpoints: latency-svc-qdrs7 [752.894535ms]
Feb 14 00:35:35.219: INFO: Created: latency-svc-g97wg
Feb 14 00:35:35.227: INFO: Got endpoints: latency-svc-g97wg [743.307757ms]
Feb 14 00:35:35.228: INFO: Latencies: [53.10381ms 103.600252ms 132.130669ms 225.18357ms 247.409601ms 267.583532ms 390.118355ms 398.673316ms 434.628451ms 475.885396ms 552.61083ms 585.148474ms 586.215003ms 608.255699ms 615.15704ms 621.656741ms 625.160389ms 625.321621ms 657.384112ms 665.985306ms 666.191454ms 669.441415ms 676.87962ms 677.815041ms 678.506731ms 678.536519ms 679.422758ms 685.477536ms 690.109803ms 691.01835ms 692.847415ms 696.600717ms 699.988316ms 701.197889ms 701.362784ms 702.242622ms 705.79208ms 706.378551ms 706.446489ms 707.531476ms 715.077479ms 715.52116ms 715.605293ms 717.565452ms 724.147448ms 724.949318ms 725.860881ms 726.245958ms 727.821388ms 728.711369ms 730.952579ms 732.140542ms 733.464423ms 733.590656ms 735.250129ms 735.763368ms 736.085275ms 736.365139ms 737.187242ms 738.050602ms 738.627325ms 739.118012ms 740.13922ms 742.683711ms 743.307757ms 745.390512ms 745.607226ms 748.42535ms 749.415902ms 749.549738ms 749.553967ms 750.687228ms 752.07062ms 752.894535ms 754.503154ms 755.177798ms 759.461936ms 759.533189ms 759.56003ms 759.653955ms 765.453994ms 769.642803ms 775.11808ms 777.119332ms 777.72461ms 780.40703ms 781.812059ms 782.559515ms 783.237723ms 783.68914ms 788.142756ms 790.975264ms 792.508907ms 792.911126ms 794.228485ms 794.565165ms 796.445767ms 796.521908ms 798.959092ms 799.057637ms 804.918203ms 806.929048ms 807.318742ms 807.607709ms 811.399817ms 811.738981ms 812.679422ms 813.004881ms 814.259296ms 814.792799ms 816.314666ms 817.424844ms 818.309432ms 820.000213ms 820.955257ms 820.966104ms 823.501967ms 824.022752ms 829.284171ms 830.575987ms 838.924885ms 840.876021ms 842.439786ms 844.208367ms 844.629442ms 846.232039ms 849.154181ms 851.886458ms 858.117636ms 858.135451ms 858.934949ms 860.113264ms 863.715669ms 863.996797ms 865.342371ms 866.128616ms 868.48285ms 876.936783ms 884.207832ms 898.739524ms 903.548777ms 905.928838ms 911.840299ms 919.672092ms 925.23418ms 932.347557ms 932.842703ms 936.336457ms 939.301201ms 939.733212ms 943.594008ms 944.781769ms 952.629217ms 954.111566ms 956.858451ms 965.934967ms 970.997591ms 973.051336ms 977.766775ms 979.005916ms 983.177246ms 1.020410974s 1.035650274s 1.036853067s 1.055552912s 1.056646752s 1.062326946s 1.065557137s 1.071253477s 1.090899938s 1.107309717s 1.116698796s 1.123700223s 1.129659373s 1.132968742s 1.141395016s 1.152538631s 1.152558914s 1.16075163s 1.164505944s 1.19564532s 1.238557253s 1.238794881s 2.069896951s 2.087637062s 2.137471247s 2.165481314s 2.948712916s 2.982719716s 3.030558418s 3.113924717s 3.160892067s 3.166344618s 3.192666717s 3.204134786s 3.25536061s 3.283227809s 3.289873689s 3.297193605s 3.307610522s]
Feb 14 00:35:35.228: INFO: 50 %ile: 804.918203ms
Feb 14 00:35:35.228: INFO: 90 %ile: 1.19564532s
Feb 14 00:35:35.228: INFO: 99 %ile: 3.297193605s
Feb 14 00:35:35.228: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:35:35.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-9195" for this suite.

• [SLOW TEST:20.924 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":280,"completed":117,"skipped":1812,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:35:35.253: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
STEP: reading a file in the container
Feb 14 00:35:45.896: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1839 pod-service-account-b62c3785-02a9-4694-a6af-7add891e0369 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Feb 14 00:35:46.819: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1839 pod-service-account-b62c3785-02a9-4694-a6af-7add891e0369 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Feb 14 00:35:47.259: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-1839 pod-service-account-b62c3785-02a9-4694-a6af-7add891e0369 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:35:47.713: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-1839" for this suite.

• [SLOW TEST:12.513 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":280,"completed":118,"skipped":1820,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
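
Note: the three kubectl exec calls above verify the standard auto-mounted service-account volume. From inside any pod with token automounting enabled, the same check is just reading three well-known files; a minimal in-pod sketch (not the test's own code):

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        // Default mount point of the auto-created service-account volume.
        base := "/var/run/secrets/kubernetes.io/serviceaccount"
        for _, name := range []string{"token", "ca.crt", "namespace"} {
            data, err := os.ReadFile(filepath.Join(base, name))
            if err != nil {
                fmt.Fprintln(os.Stderr, err)
                continue
            }
            fmt.Printf("%s: %d bytes\n", name, len(data))
        }
    }
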
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:35:47.768: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-acd2f5bf-1020-4f34-87c6-8d1653ca5118
STEP: Creating a pod to test consume configMaps
Feb 14 00:35:48.051: INFO: Waiting up to 5m0s for pod "pod-configmaps-1075fb51-07fa-4d04-b9fa-ac8c5e428fef" in namespace "configmap-5233" to be "success or failure"
Feb 14 00:35:48.072: INFO: Pod "pod-configmaps-1075fb51-07fa-4d04-b9fa-ac8c5e428fef": Phase="Pending", Reason="", readiness=false. Elapsed: 21.101132ms
Feb 14 00:35:50.110: INFO: Pod "pod-configmaps-1075fb51-07fa-4d04-b9fa-ac8c5e428fef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.059414503s
Feb 14 00:35:52.169: INFO: Pod "pod-configmaps-1075fb51-07fa-4d04-b9fa-ac8c5e428fef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118184037s
Feb 14 00:35:54.187: INFO: Pod "pod-configmaps-1075fb51-07fa-4d04-b9fa-ac8c5e428fef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.136765533s
Feb 14 00:35:56.230: INFO: Pod "pod-configmaps-1075fb51-07fa-4d04-b9fa-ac8c5e428fef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.179359432s
Feb 14 00:35:58.263: INFO: Pod "pod-configmaps-1075fb51-07fa-4d04-b9fa-ac8c5e428fef": Phase="Pending", Reason="", readiness=false. Elapsed: 10.212764144s
Feb 14 00:36:00.300: INFO: Pod "pod-configmaps-1075fb51-07fa-4d04-b9fa-ac8c5e428fef": Phase="Pending", Reason="", readiness=false. Elapsed: 12.249368128s
Feb 14 00:36:02.312: INFO: Pod "pod-configmaps-1075fb51-07fa-4d04-b9fa-ac8c5e428fef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.261883697s
STEP: Saw pod success
Feb 14 00:36:02.313: INFO: Pod "pod-configmaps-1075fb51-07fa-4d04-b9fa-ac8c5e428fef" satisfied condition "success or failure"
Feb 14 00:36:02.323: INFO: Trying to get logs from node jerma-node pod pod-configmaps-1075fb51-07fa-4d04-b9fa-ac8c5e428fef container configmap-volume-test: 
STEP: delete the pod
Feb 14 00:36:02.428: INFO: Waiting for pod pod-configmaps-1075fb51-07fa-4d04-b9fa-ac8c5e428fef to disappear
Feb 14 00:36:02.477: INFO: Pod pod-configmaps-1075fb51-07fa-4d04-b9fa-ac8c5e428fef no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:36:02.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5233" for this suite.

• [SLOW TEST:14.816 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":119,"skipped":1826,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
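
Note: the pod this test builds pairs a ConfigMap volume with a non-root security context, then asserts the mounted file is readable at that UID. A sketch of that shape using the k8s.io/api types (name, image, UID, and file key are illustrative assumptions, not the generated values above):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        runAsUser := int64(1000) // any non-root UID
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
            Spec: corev1.PodSpec{
                SecurityContext: &corev1.PodSecurityContext{RunAsUser: &runAsUser},
                RestartPolicy:   corev1.RestartPolicyNever,
                Containers: []corev1.Container{{
                    Name:    "configmap-volume-test",
                    Image:   "busybox",
                    Command: []string{"cat", "/etc/configmap-volume/data-1"},
                    VolumeMounts: []corev1.VolumeMount{{
                        Name:      "configmap-volume",
                        MountPath: "/etc/configmap-volume",
                    }},
                }},
                Volumes: []corev1.Volume{{
                    Name: "configmap-volume",
                    VolumeSource: corev1.VolumeSource{
                        ConfigMap: &corev1.ConfigMapVolumeSource{
                            LocalObjectReference: corev1.LocalObjectReference{
                                Name: "configmap-test-volume",
                            },
                        },
                    },
                }},
            },
        }
        out, _ := yaml.Marshal(pod)
        fmt.Println(string(out))
    }
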
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:36:02.591: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-projected-all-test-volume-4de89406-3298-4348-9346-04367be500c1
STEP: Creating secret with name secret-projected-all-test-volume-2b62f64d-cdd0-4ce7-85fa-e31363f8d734
STEP: Creating a pod to test Check all projections for projected volume plugin
Feb 14 00:36:02.897: INFO: Waiting up to 5m0s for pod "projected-volume-34001454-9881-4a74-ac4b-f8893153b0e0" in namespace "projected-9358" to be "success or failure"
Feb 14 00:36:02.935: INFO: Pod "projected-volume-34001454-9881-4a74-ac4b-f8893153b0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 37.474614ms
Feb 14 00:36:04.953: INFO: Pod "projected-volume-34001454-9881-4a74-ac4b-f8893153b0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056089834s
Feb 14 00:36:07.057: INFO: Pod "projected-volume-34001454-9881-4a74-ac4b-f8893153b0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.159577056s
Feb 14 00:36:09.062: INFO: Pod "projected-volume-34001454-9881-4a74-ac4b-f8893153b0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.164465814s
Feb 14 00:36:11.163: INFO: Pod "projected-volume-34001454-9881-4a74-ac4b-f8893153b0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.265788263s
Feb 14 00:36:13.461: INFO: Pod "projected-volume-34001454-9881-4a74-ac4b-f8893153b0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.563278188s
Feb 14 00:36:15.508: INFO: Pod "projected-volume-34001454-9881-4a74-ac4b-f8893153b0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 12.610820548s
Feb 14 00:36:17.553: INFO: Pod "projected-volume-34001454-9881-4a74-ac4b-f8893153b0e0": Phase="Pending", Reason="", readiness=false. Elapsed: 14.656200937s
Feb 14 00:36:19.562: INFO: Pod "projected-volume-34001454-9881-4a74-ac4b-f8893153b0e0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.665031032s
STEP: Saw pod success
Feb 14 00:36:19.563: INFO: Pod "projected-volume-34001454-9881-4a74-ac4b-f8893153b0e0" satisfied condition "success or failure"
Feb 14 00:36:19.568: INFO: Trying to get logs from node jerma-node pod projected-volume-34001454-9881-4a74-ac4b-f8893153b0e0 container projected-all-volume-test: 
STEP: delete the pod
Feb 14 00:36:19.707: INFO: Waiting for pod projected-volume-34001454-9881-4a74-ac4b-f8893153b0e0 to disappear
Feb 14 00:36:19.721: INFO: Pod projected-volume-34001454-9881-4a74-ac4b-f8893153b0e0 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:36:19.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9358" for this suite.

• [SLOW TEST:17.149 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":280,"completed":120,"skipped":1857,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
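
Note: "all projections" means a single projected volume whose Sources list combines a ConfigMap, a Secret, and downward-API fields under one mount point. A sketch of just that volume (object names are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{
                        {ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap"},
                        }},
                        {Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret"},
                        }},
                        {DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{{
                                Path:     "podname",
                                FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                            }},
                        }},
                    },
                },
            },
        }
        out, _ := yaml.Marshal(vol)
        fmt.Println(string(out))
    }
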
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:36:19.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 14 00:36:19.885: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e1f56868-63f0-4d6b-9e05-a71b3ca13364" in namespace "downward-api-1838" to be "success or failure"
Feb 14 00:36:19.897: INFO: Pod "downwardapi-volume-e1f56868-63f0-4d6b-9e05-a71b3ca13364": Phase="Pending", Reason="", readiness=false. Elapsed: 11.785766ms
Feb 14 00:36:21.912: INFO: Pod "downwardapi-volume-e1f56868-63f0-4d6b-9e05-a71b3ca13364": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026948579s
Feb 14 00:36:23.925: INFO: Pod "downwardapi-volume-e1f56868-63f0-4d6b-9e05-a71b3ca13364": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039768255s
Feb 14 00:36:25.933: INFO: Pod "downwardapi-volume-e1f56868-63f0-4d6b-9e05-a71b3ca13364": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047801199s
Feb 14 00:36:27.946: INFO: Pod "downwardapi-volume-e1f56868-63f0-4d6b-9e05-a71b3ca13364": Phase="Pending", Reason="", readiness=false. Elapsed: 8.060339324s
Feb 14 00:36:29.959: INFO: Pod "downwardapi-volume-e1f56868-63f0-4d6b-9e05-a71b3ca13364": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073910035s
STEP: Saw pod success
Feb 14 00:36:29.960: INFO: Pod "downwardapi-volume-e1f56868-63f0-4d6b-9e05-a71b3ca13364" satisfied condition "success or failure"
Feb 14 00:36:29.970: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e1f56868-63f0-4d6b-9e05-a71b3ca13364 container client-container: 
STEP: delete the pod
Feb 14 00:36:30.024: INFO: Waiting for pod downwardapi-volume-e1f56868-63f0-4d6b-9e05-a71b3ca13364 to disappear
Feb 14 00:36:30.044: INFO: Pod downwardapi-volume-e1f56868-63f0-4d6b-9e05-a71b3ca13364 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:36:30.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1838" for this suite.

• [SLOW TEST:10.358 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":280,"completed":121,"skipped":1865,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
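
Note: the "podname only" case is the plain (non-projected) downwardAPI volume: one file whose content is the pod's own metadata.name resolved via fieldRef, which the client container then cats back. Sketch of the volume (names illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        vol := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                DownwardAPI: &corev1.DownwardAPIVolumeSource{
                    Items: []corev1.DownwardAPIVolumeFile{{
                        Path:     "podname",
                        FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
                    }},
                },
            },
        }
        out, _ := yaml.Marshal(vol)
        fmt.Println(string(out))
    }
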
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:36:30.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 00:36:30.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Feb 14 00:36:34.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3186 create -f -'
Feb 14 00:36:37.366: INFO: stderr: ""
Feb 14 00:36:37.366: INFO: stdout: "e2e-test-crd-publish-openapi-7233-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb 14 00:36:37.366: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3186 delete e2e-test-crd-publish-openapi-7233-crds test-foo'
Feb 14 00:36:37.543: INFO: stderr: ""
Feb 14 00:36:37.543: INFO: stdout: "e2e-test-crd-publish-openapi-7233-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Feb 14 00:36:37.543: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3186 apply -f -'
Feb 14 00:36:37.935: INFO: stderr: ""
Feb 14 00:36:37.935: INFO: stdout: "e2e-test-crd-publish-openapi-7233-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Feb 14 00:36:37.936: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3186 delete e2e-test-crd-publish-openapi-7233-crds test-foo'
Feb 14 00:36:38.093: INFO: stderr: ""
Feb 14 00:36:38.094: INFO: stdout: "e2e-test-crd-publish-openapi-7233-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Feb 14 00:36:38.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3186 create -f -'
Feb 14 00:36:38.470: INFO: rc: 1
Feb 14 00:36:38.472: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3186 apply -f -'
Feb 14 00:36:38.824: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Feb 14 00:36:38.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3186 create -f -'
Feb 14 00:36:39.203: INFO: rc: 1
Feb 14 00:36:39.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-3186 apply -f -'
Feb 14 00:36:39.656: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Feb 14 00:36:39.657: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7233-crds'
Feb 14 00:36:40.105: INFO: stderr: ""
Feb 14 00:36:40.105: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7233-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Feb 14 00:36:40.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7233-crds.metadata'
Feb 14 00:36:40.555: INFO: stderr: ""
Feb 14 00:36:40.556: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7233-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. 
If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. 
More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Feb 14 00:36:40.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7233-crds.spec'
Feb 14 00:36:40.968: INFO: stderr: ""
Feb 14 00:36:40.968: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7233-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Feb 14 00:36:40.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7233-crds.spec.bars'
Feb 14 00:36:41.298: INFO: stderr: ""
Feb 14 00:36:41.298: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-7233-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Feb 14 00:36:41.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7233-crds.spec.bars2'
Feb 14 00:36:41.682: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:36:45.533: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3186" for this suite.

• [SLOW TEST:15.443 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":280,"completed":122,"skipped":1894,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSS
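
Note: what kubectl explain reads back above is the CRD's structural schema as republished into the cluster's OpenAPI document. A schema of roughly the shape this test registers, in the apiextensions v1 Go types (the scalar field types are partly lost in the tab-separated output above, so the types below are assumptions):

    package main

    import (
        "fmt"

        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        schema := apiextensionsv1.JSONSchemaProps{
            Type:        "object",
            Description: "Foo CRD for Testing",
            Properties: map[string]apiextensionsv1.JSONSchemaProps{
                "spec": {
                    Type:        "object",
                    Description: "Specification of Foo",
                    Properties: map[string]apiextensionsv1.JSONSchemaProps{
                        "bars": {
                            Type:        "array",
                            Description: "List of Bars and their specs.",
                            Items: &apiextensionsv1.JSONSchemaPropsOrArray{
                                Schema: &apiextensionsv1.JSONSchemaProps{
                                    Type:     "object",
                                    Required: []string{"name"},
                                    Properties: map[string]apiextensionsv1.JSONSchemaProps{
                                        "name": {Type: "string", Description: "Name of Bar."},
                                        "age":  {Type: "string", Description: "Age of Bar."}, // type assumed
                                        "bazs": {
                                            Type:        "array",
                                            Description: "List of Bazs.",
                                            Items: &apiextensionsv1.JSONSchemaPropsOrArray{
                                                Schema: &apiextensionsv1.JSONSchemaProps{Type: "string"},
                                            },
                                        },
                                    },
                                },
                            },
                        },
                    },
                },
            },
        }
        out, _ := yaml.Marshal(schema)
        fmt.Println(string(out))
    }
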
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:36:45.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-2e8ee1de-00f0-4efd-86a4-5fd66ddf8098 in namespace container-probe-5973
Feb 14 00:36:55.717: INFO: Started pod busybox-2e8ee1de-00f0-4efd-86a4-5fd66ddf8098 in namespace container-probe-5973
STEP: checking the pod's current state and verifying that restartCount is present
Feb 14 00:36:55.722: INFO: Initial restart count of pod busybox-2e8ee1de-00f0-4efd-86a4-5fd66ddf8098 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:40:55.925: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5973" for this suite.

• [SLOW TEST:250.419 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":123,"skipped":1908,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
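
Note: the pattern under test is a liveness probe that must keep succeeding: the container creates /tmp/health at startup and never removes it, so the exec probe's cat keeps exiting 0 and restartCount stays at its initial value for the whole four-minute observation window. A container sketch (command and thresholds are illustrative; this era of k8s.io/api names the embedded handler field Handler, later renamed ProbeHandler):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        c := corev1.Container{
            Name:    "busybox",
            Image:   "busybox",
            Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 600"},
            LivenessProbe: &corev1.Probe{
                Handler: corev1.Handler{
                    Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
                },
                InitialDelaySeconds: 15,
                FailureThreshold:    1,
            },
        }
        out, _ := yaml.Marshal(c)
        fmt.Println(string(out))
    }
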
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:40:55.965: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: set up a multi version CRD
Feb 14 00:40:56.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:41:18.715: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1689" for this suite.

• [SLOW TEST:22.829 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":280,"completed":124,"skipped":1913,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:41:18.795: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Feb 14 00:41:18.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Feb 14 00:41:32.676: INFO: >>> kubeConfig: /root/.kube/config
Feb 14 00:41:36.588: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:41:50.403: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-892" for this suite.

• [SLOW TEST:31.617 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":280,"completed":125,"skipped":1913,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
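
Note: both CRD tests above manipulate only the spec.versions list of a CustomResourceDefinition; the published OpenAPI document tracks whichever version names are currently served. A minimal sketch of such a list (version names illustrative; real apiextensions v1 objects also need a schema per version):

    package main

    import (
        "fmt"

        apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    )

    func main() {
        versions := []apiextensionsv1.CustomResourceDefinitionVersion{
            {Name: "v2", Served: true, Storage: true},
            {Name: "v4", Served: true, Storage: false}, // e.g. renamed from "v3"
        }
        for _, v := range versions {
            fmt.Printf("version=%s served=%t storage=%t\n", v.Name, v.Served, v.Storage)
        }
    }
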
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:41:50.414: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1899
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 14 00:41:50.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-7179'
Feb 14 00:41:50.789: INFO: stderr: ""
Feb 14 00:41:50.789: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Feb 14 00:42:00.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-7179 -o json'
Feb 14 00:42:01.029: INFO: stderr: ""
Feb 14 00:42:01.029: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-02-14T00:41:50Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-7179\",\n        \"resourceVersion\": \"8275711\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-7179/pods/e2e-test-httpd-pod\",\n        \"uid\": \"a3a74fa7-4954-43b4-9148-0603c92942ab\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-s49hj\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-s49hj\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-s49hj\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-14T00:41:50Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-14T00:41:59Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-14T00:41:59Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-02-14T00:41:50Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://ce7b4b858833f8c3dbca221248f61b9805334bcd42b25b7bdb5ddf5871f154f0\",\n                
\"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-02-14T00:41:58Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.250\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.1\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-02-14T00:41:50Z\"\n    }\n}\n"
STEP: replace the image in the pod
Feb 14 00:42:01.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-7179'
Feb 14 00:42:01.659: INFO: stderr: ""
Feb 14 00:42:01.660: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1904
Feb 14 00:42:01.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-7179'
Feb 14 00:42:06.863: INFO: stderr: ""
Feb 14 00:42:06.864: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:42:06.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7179" for this suite.

• [SLOW TEST:16.468 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1895
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":280,"completed":126,"skipped":1926,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
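
Note: kubectl replace performs a PUT of the edited manifest, and the only field actually changing here, the container image, is one of the few pod fields the API allows to be updated in place. A rough client-go equivalent (assumes a recent client-go with context-taking methods and a KUBECONFIG environment variable; namespace and names mirror the run above):

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()

        // Fetch the pod, swap the image, and PUT it back.
        pod, err := client.CoreV1().Pods("kubectl-7179").Get(ctx, "e2e-test-httpd-pod", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        pod.Spec.Containers[0].Image = "docker.io/library/busybox:1.29"
        if _, err := client.CoreV1().Pods("kubectl-7179").Update(ctx, pod, metav1.UpdateOptions{}); err != nil {
            panic(err)
        }
        fmt.Println("pod/e2e-test-httpd-pod replaced")
    }
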
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:42:06.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 14 00:42:08.221: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 14 00:42:10.244: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717237728, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717237728, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717237728, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717237728, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:42:12.259: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717237728, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717237728, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717237728, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717237728, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:42:14.258: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717237728, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717237728, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717237728, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717237728, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 14 00:42:17.300: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:42:19.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7832" for this suite.
STEP: Destroying namespace "webhook-7832-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.357 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":280,"completed":127,"skipped":1958,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
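
Note: "listing" and "deleting the collection" map directly onto the admissionregistration.k8s.io/v1 client verbs; once DeleteCollection has removed the configurations, the second ConfigMap is created without mutation. A sketch with recent client-go (the label selector is an assumed stand-in for whatever label the test applies):

    package main

    import (
        "context"
        "fmt"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        ctx := context.Background()
        hooks := client.AdmissionregistrationV1().MutatingWebhookConfigurations()

        // List, then delete as a collection, filtered by label.
        list, err := hooks.List(ctx, metav1.ListOptions{LabelSelector: "e2e-list-test-webhooks=true"})
        if err != nil {
            panic(err)
        }
        fmt.Printf("found %d mutating webhook configurations\n", len(list.Items))

        if err := hooks.DeleteCollection(ctx, metav1.DeleteOptions{},
            metav1.ListOptions{LabelSelector: "e2e-list-test-webhooks=true"}); err != nil {
            panic(err)
        }
    }
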
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:42:19.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8568.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8568.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8568.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8568.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 00:42:41.419: INFO: DNS probes using dns-test-2b0e3933-01ec-47e7-aedb-010f13c7d549 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8568.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-8568.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8568.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-8568.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 00:42:59.607: INFO: File wheezy_udp@dns-test-service-3.dns-8568.svc.cluster.local from pod  dns-8568/dns-test-3e37e226-a604-4c9c-9940-a7699284f4ae contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 14 00:42:59.615: INFO: File jessie_udp@dns-test-service-3.dns-8568.svc.cluster.local from pod  dns-8568/dns-test-3e37e226-a604-4c9c-9940-a7699284f4ae contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 14 00:42:59.615: INFO: Lookups using dns-8568/dns-test-3e37e226-a604-4c9c-9940-a7699284f4ae failed for: [wheezy_udp@dns-test-service-3.dns-8568.svc.cluster.local jessie_udp@dns-test-service-3.dns-8568.svc.cluster.local]

Feb 14 00:43:04.662: INFO: File wheezy_udp@dns-test-service-3.dns-8568.svc.cluster.local from pod  dns-8568/dns-test-3e37e226-a604-4c9c-9940-a7699284f4ae contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 14 00:43:04.668: INFO: File jessie_udp@dns-test-service-3.dns-8568.svc.cluster.local from pod  dns-8568/dns-test-3e37e226-a604-4c9c-9940-a7699284f4ae contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 14 00:43:04.668: INFO: Lookups using dns-8568/dns-test-3e37e226-a604-4c9c-9940-a7699284f4ae failed for: [wheezy_udp@dns-test-service-3.dns-8568.svc.cluster.local jessie_udp@dns-test-service-3.dns-8568.svc.cluster.local]

Feb 14 00:43:09.640: INFO: File jessie_udp@dns-test-service-3.dns-8568.svc.cluster.local from pod  dns-8568/dns-test-3e37e226-a604-4c9c-9940-a7699284f4ae contains 'foo.example.com.
' instead of 'bar.example.com.'
Feb 14 00:43:09.640: INFO: Lookups using dns-8568/dns-test-3e37e226-a604-4c9c-9940-a7699284f4ae failed for: [jessie_udp@dns-test-service-3.dns-8568.svc.cluster.local]

Feb 14 00:43:14.637: INFO: DNS probes using dns-test-3e37e226-a604-4c9c-9940-a7699284f4ae succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8568.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-8568.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-8568.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-8568.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 00:43:29.028: INFO: DNS probes using dns-test-a9a01148-d3b2-4811-9e1f-86465129c39a succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:43:29.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8568" for this suite.

• [SLOW TEST:69.906 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":280,"completed":128,"skipped":1966,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
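
Note: the object driving these probes is an ExternalName service, which cluster DNS answers as a CNAME to spec.externalName. That is why the probe pods run dig ... CNAME, why stale foo.example.com answers linger briefly after the rename to bar.example.com (DNS caches catching up), and why switching spec.type to ClusterIP makes the third probe query an A record instead. Sketch of the initial object:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        svc := corev1.Service{
            ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-3", Namespace: "dns-8568"},
            Spec: corev1.ServiceSpec{
                Type:         corev1.ServiceTypeExternalName,
                ExternalName: "foo.example.com",
            },
        }
        out, _ := yaml.Marshal(svc)
        fmt.Println(string(out))
    }
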
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:43:29.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-7786
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Feb 14 00:43:29.335: INFO: Found 0 stateful pods, waiting for 3
Feb 14 00:43:39.407: INFO: Found 1 stateful pods, waiting for 3
Feb 14 00:43:49.653: INFO: Found 2 stateful pods, waiting for 3
Feb 14 00:43:59.347: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 00:43:59.348: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 00:43:59.348: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false
Feb 14 00:44:09.346: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 00:44:09.346: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 00:44:09.346: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 00:44:09.362: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7786 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 14 00:44:09.888: INFO: stderr: "I0214 00:44:09.512576    2865 log.go:172] (0xc00051e2c0) (0xc000725680) Create stream\nI0214 00:44:09.512770    2865 log.go:172] (0xc00051e2c0) (0xc000725680) Stream added, broadcasting: 1\nI0214 00:44:09.516262    2865 log.go:172] (0xc00051e2c0) Reply frame received for 1\nI0214 00:44:09.516306    2865 log.go:172] (0xc00051e2c0) (0xc0008da0a0) Create stream\nI0214 00:44:09.516318    2865 log.go:172] (0xc00051e2c0) (0xc0008da0a0) Stream added, broadcasting: 3\nI0214 00:44:09.518034    2865 log.go:172] (0xc00051e2c0) Reply frame received for 3\nI0214 00:44:09.518120    2865 log.go:172] (0xc00051e2c0) (0xc000725720) Create stream\nI0214 00:44:09.518133    2865 log.go:172] (0xc00051e2c0) (0xc000725720) Stream added, broadcasting: 5\nI0214 00:44:09.519292    2865 log.go:172] (0xc00051e2c0) Reply frame received for 5\nI0214 00:44:09.598259    2865 log.go:172] (0xc00051e2c0) Data frame received for 5\nI0214 00:44:09.598430    2865 log.go:172] (0xc000725720) (5) Data frame handling\nI0214 00:44:09.598462    2865 log.go:172] (0xc000725720) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0214 00:44:09.721774    2865 log.go:172] (0xc00051e2c0) Data frame received for 3\nI0214 00:44:09.721861    2865 log.go:172] (0xc0008da0a0) (3) Data frame handling\nI0214 00:44:09.721892    2865 log.go:172] (0xc0008da0a0) (3) Data frame sent\nI0214 00:44:09.869861    2865 log.go:172] (0xc00051e2c0) Data frame received for 1\nI0214 00:44:09.870393    2865 log.go:172] (0xc00051e2c0) (0xc0008da0a0) Stream removed, broadcasting: 3\nI0214 00:44:09.870476    2865 log.go:172] (0xc000725680) (1) Data frame handling\nI0214 00:44:09.870538    2865 log.go:172] (0xc000725680) (1) Data frame sent\nI0214 00:44:09.870665    2865 log.go:172] (0xc00051e2c0) (0xc000725720) Stream removed, broadcasting: 5\nI0214 00:44:09.870856    2865 log.go:172] (0xc00051e2c0) (0xc000725680) Stream removed, broadcasting: 1\nI0214 00:44:09.871025    2865 log.go:172] (0xc00051e2c0) Go away received\nI0214 00:44:09.871965    2865 log.go:172] (0xc00051e2c0) (0xc000725680) Stream removed, broadcasting: 1\nI0214 00:44:09.872051    2865 log.go:172] (0xc00051e2c0) (0xc0008da0a0) Stream removed, broadcasting: 3\nI0214 00:44:09.872070    2865 log.go:172] (0xc00051e2c0) (0xc000725720) Stream removed, broadcasting: 5\n"
Feb 14 00:44:09.889: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 14 00:44:09.889: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb 14 00:44:19.974: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Feb 14 00:44:30.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7786 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 14 00:44:30.381: INFO: stderr: "I0214 00:44:30.216796    2885 log.go:172] (0xc000902000) (0xc000a48140) Create stream\nI0214 00:44:30.217010    2885 log.go:172] (0xc000902000) (0xc000a48140) Stream added, broadcasting: 1\nI0214 00:44:30.220120    2885 log.go:172] (0xc000902000) Reply frame received for 1\nI0214 00:44:30.220161    2885 log.go:172] (0xc000902000) (0xc0008b0000) Create stream\nI0214 00:44:30.220184    2885 log.go:172] (0xc000902000) (0xc0008b0000) Stream added, broadcasting: 3\nI0214 00:44:30.221545    2885 log.go:172] (0xc000902000) Reply frame received for 3\nI0214 00:44:30.221566    2885 log.go:172] (0xc000902000) (0xc0008ae000) Create stream\nI0214 00:44:30.221578    2885 log.go:172] (0xc000902000) (0xc0008ae000) Stream added, broadcasting: 5\nI0214 00:44:30.223038    2885 log.go:172] (0xc000902000) Reply frame received for 5\nI0214 00:44:30.285448    2885 log.go:172] (0xc000902000) Data frame received for 3\nI0214 00:44:30.285558    2885 log.go:172] (0xc0008b0000) (3) Data frame handling\nI0214 00:44:30.285593    2885 log.go:172] (0xc0008b0000) (3) Data frame sent\nI0214 00:44:30.285614    2885 log.go:172] (0xc000902000) Data frame received for 5\nI0214 00:44:30.285631    2885 log.go:172] (0xc0008ae000) (5) Data frame handling\nI0214 00:44:30.285656    2885 log.go:172] (0xc0008ae000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0214 00:44:30.367612    2885 log.go:172] (0xc000902000) Data frame received for 1\nI0214 00:44:30.367712    2885 log.go:172] (0xc000a48140) (1) Data frame handling\nI0214 00:44:30.367752    2885 log.go:172] (0xc000a48140) (1) Data frame sent\nI0214 00:44:30.368021    2885 log.go:172] (0xc000902000) (0xc000a48140) Stream removed, broadcasting: 1\nI0214 00:44:30.368377    2885 log.go:172] (0xc000902000) (0xc0008b0000) Stream removed, broadcasting: 3\nI0214 00:44:30.368497    2885 log.go:172] (0xc000902000) (0xc0008ae000) Stream removed, broadcasting: 5\nI0214 00:44:30.368583    2885 log.go:172] (0xc000902000) Go away received\nI0214 00:44:30.368845    2885 log.go:172] (0xc000902000) (0xc000a48140) Stream removed, broadcasting: 1\nI0214 00:44:30.368860    2885 log.go:172] (0xc000902000) (0xc0008b0000) Stream removed, broadcasting: 3\nI0214 00:44:30.368869    2885 log.go:172] (0xc000902000) (0xc0008ae000) Stream removed, broadcasting: 5\n"
Feb 14 00:44:30.381: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 14 00:44:30.381: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 14 00:44:40.422: INFO: Waiting for StatefulSet statefulset-7786/ss2 to complete update
Feb 14 00:44:40.422: INFO: Waiting for Pod statefulset-7786/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 14 00:44:40.422: INFO: Waiting for Pod statefulset-7786/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 14 00:44:40.422: INFO: Waiting for Pod statefulset-7786/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 14 00:44:50.439: INFO: Waiting for StatefulSet statefulset-7786/ss2 to complete update
Feb 14 00:44:50.439: INFO: Waiting for Pod statefulset-7786/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 14 00:44:50.439: INFO: Waiting for Pod statefulset-7786/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 14 00:45:00.445: INFO: Waiting for StatefulSet statefulset-7786/ss2 to complete update
Feb 14 00:45:00.445: INFO: Waiting for Pod statefulset-7786/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 14 00:45:00.445: INFO: Waiting for Pod statefulset-7786/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 14 00:45:10.621: INFO: Waiting for StatefulSet statefulset-7786/ss2 to complete update
Feb 14 00:45:10.621: INFO: Waiting for Pod statefulset-7786/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 14 00:45:20.465: INFO: Waiting for StatefulSet statefulset-7786/ss2 to complete update
STEP: Rolling back to a previous revision
Feb 14 00:45:30.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7786 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 14 00:45:30.898: INFO: stderr: "I0214 00:45:30.646123    2905 log.go:172] (0xc0007e4b00) (0xc0006ac460) Create stream\nI0214 00:45:30.646372    2905 log.go:172] (0xc0007e4b00) (0xc0006ac460) Stream added, broadcasting: 1\nI0214 00:45:30.650450    2905 log.go:172] (0xc0007e4b00) Reply frame received for 1\nI0214 00:45:30.650485    2905 log.go:172] (0xc0007e4b00) (0xc0006ac500) Create stream\nI0214 00:45:30.650492    2905 log.go:172] (0xc0007e4b00) (0xc0006ac500) Stream added, broadcasting: 3\nI0214 00:45:30.651411    2905 log.go:172] (0xc0007e4b00) Reply frame received for 3\nI0214 00:45:30.651427    2905 log.go:172] (0xc0007e4b00) (0xc0007fa320) Create stream\nI0214 00:45:30.651442    2905 log.go:172] (0xc0007e4b00) (0xc0007fa320) Stream added, broadcasting: 5\nI0214 00:45:30.652133    2905 log.go:172] (0xc0007e4b00) Reply frame received for 5\nI0214 00:45:30.732731    2905 log.go:172] (0xc0007e4b00) Data frame received for 5\nI0214 00:45:30.732812    2905 log.go:172] (0xc0007fa320) (5) Data frame handling\nI0214 00:45:30.732840    2905 log.go:172] (0xc0007fa320) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0214 00:45:30.755854    2905 log.go:172] (0xc0007e4b00) Data frame received for 3\nI0214 00:45:30.755916    2905 log.go:172] (0xc0006ac500) (3) Data frame handling\nI0214 00:45:30.755931    2905 log.go:172] (0xc0006ac500) (3) Data frame sent\nI0214 00:45:30.877364    2905 log.go:172] (0xc0007e4b00) (0xc0006ac500) Stream removed, broadcasting: 3\nI0214 00:45:30.878060    2905 log.go:172] (0xc0007e4b00) Data frame received for 1\nI0214 00:45:30.878200    2905 log.go:172] (0xc0006ac460) (1) Data frame handling\nI0214 00:45:30.878266    2905 log.go:172] (0xc0006ac460) (1) Data frame sent\nI0214 00:45:30.878304    2905 log.go:172] (0xc0007e4b00) (0xc0007fa320) Stream removed, broadcasting: 5\nI0214 00:45:30.878645    2905 log.go:172] (0xc0007e4b00) (0xc0006ac460) Stream removed, broadcasting: 1\nI0214 00:45:30.878710    2905 log.go:172] (0xc0007e4b00) Go away received\nI0214 00:45:30.880610    2905 log.go:172] (0xc0007e4b00) (0xc0006ac460) Stream removed, broadcasting: 1\nI0214 00:45:30.880659    2905 log.go:172] (0xc0007e4b00) (0xc0006ac500) Stream removed, broadcasting: 3\nI0214 00:45:30.880671    2905 log.go:172] (0xc0007e4b00) (0xc0007fa320) Stream removed, broadcasting: 5\n"
Feb 14 00:45:30.898: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 14 00:45:30.898: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 14 00:45:40.958: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Feb 14 00:45:51.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7786 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 14 00:45:51.461: INFO: stderr: "I0214 00:45:51.260060    2921 log.go:172] (0xc00067a9a0) (0xc000635ea0) Create stream\nI0214 00:45:51.260360    2921 log.go:172] (0xc00067a9a0) (0xc000635ea0) Stream added, broadcasting: 1\nI0214 00:45:51.263271    2921 log.go:172] (0xc00067a9a0) Reply frame received for 1\nI0214 00:45:51.263311    2921 log.go:172] (0xc00067a9a0) (0xc000608780) Create stream\nI0214 00:45:51.263319    2921 log.go:172] (0xc00067a9a0) (0xc000608780) Stream added, broadcasting: 3\nI0214 00:45:51.264459    2921 log.go:172] (0xc00067a9a0) Reply frame received for 3\nI0214 00:45:51.264487    2921 log.go:172] (0xc00067a9a0) (0xc000487400) Create stream\nI0214 00:45:51.264499    2921 log.go:172] (0xc00067a9a0) (0xc000487400) Stream added, broadcasting: 5\nI0214 00:45:51.265854    2921 log.go:172] (0xc00067a9a0) Reply frame received for 5\nI0214 00:45:51.344616    2921 log.go:172] (0xc00067a9a0) Data frame received for 5\nI0214 00:45:51.344712    2921 log.go:172] (0xc000487400) (5) Data frame handling\nI0214 00:45:51.344741    2921 log.go:172] (0xc000487400) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0214 00:45:51.344767    2921 log.go:172] (0xc00067a9a0) Data frame received for 3\nI0214 00:45:51.344774    2921 log.go:172] (0xc000608780) (3) Data frame handling\nI0214 00:45:51.344784    2921 log.go:172] (0xc000608780) (3) Data frame sent\nI0214 00:45:51.449102    2921 log.go:172] (0xc00067a9a0) Data frame received for 1\nI0214 00:45:51.449693    2921 log.go:172] (0xc00067a9a0) (0xc000487400) Stream removed, broadcasting: 5\nI0214 00:45:51.449768    2921 log.go:172] (0xc000635ea0) (1) Data frame handling\nI0214 00:45:51.449800    2921 log.go:172] (0xc000635ea0) (1) Data frame sent\nI0214 00:45:51.449876    2921 log.go:172] (0xc00067a9a0) (0xc000608780) Stream removed, broadcasting: 3\nI0214 00:45:51.449953    2921 log.go:172] (0xc00067a9a0) (0xc000635ea0) Stream removed, broadcasting: 1\nI0214 00:45:51.449974    2921 log.go:172] (0xc00067a9a0) Go away received\nI0214 00:45:51.451379    2921 log.go:172] (0xc00067a9a0) (0xc000635ea0) Stream removed, broadcasting: 1\nI0214 00:45:51.451391    2921 log.go:172] (0xc00067a9a0) (0xc000608780) Stream removed, broadcasting: 3\nI0214 00:45:51.451395    2921 log.go:172] (0xc00067a9a0) (0xc000487400) Stream removed, broadcasting: 5\n"
Feb 14 00:45:51.462: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 14 00:45:51.462: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 14 00:46:01.496: INFO: Waiting for StatefulSet statefulset-7786/ss2 to complete update
Feb 14 00:46:01.496: INFO: Waiting for Pod statefulset-7786/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 14 00:46:01.496: INFO: Waiting for Pod statefulset-7786/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 14 00:46:01.496: INFO: Waiting for Pod statefulset-7786/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 14 00:46:11.510: INFO: Waiting for StatefulSet statefulset-7786/ss2 to complete update
Feb 14 00:46:11.510: INFO: Waiting for Pod statefulset-7786/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 14 00:46:11.510: INFO: Waiting for Pod statefulset-7786/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 14 00:46:21.513: INFO: Waiting for StatefulSet statefulset-7786/ss2 to complete update
Feb 14 00:46:21.514: INFO: Waiting for Pod statefulset-7786/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 14 00:46:21.514: INFO: Waiting for Pod statefulset-7786/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 14 00:46:31.513: INFO: Waiting for StatefulSet statefulset-7786/ss2 to complete update
Feb 14 00:46:31.513: INFO: Waiting for Pod statefulset-7786/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Feb 14 00:46:41.513: INFO: Waiting for StatefulSet statefulset-7786/ss2 to complete update
Feb 14 00:46:41.513: INFO: Waiting for Pod statefulset-7786/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 14 00:46:51.514: INFO: Deleting all statefulset in ns statefulset-7786
Feb 14 00:46:51.518: INFO: Scaling statefulset ss2 to 0
Feb 14 00:47:31.594: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 00:47:31.598: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:47:31.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7786" for this suite.

• [SLOW TEST:242.497 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":280,"completed":129,"skipped":1992,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:47:31.651: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Feb 14 00:47:31.895: INFO: Waiting up to 5m0s for pod "pod-3ad7e477-d759-4642-8912-88459ae0cce8" in namespace "emptydir-374" to be "success or failure"
Feb 14 00:47:31.934: INFO: Pod "pod-3ad7e477-d759-4642-8912-88459ae0cce8": Phase="Pending", Reason="", readiness=false. Elapsed: 38.300781ms
Feb 14 00:47:33.942: INFO: Pod "pod-3ad7e477-d759-4642-8912-88459ae0cce8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046191167s
Feb 14 00:47:35.966: INFO: Pod "pod-3ad7e477-d759-4642-8912-88459ae0cce8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.070796482s
Feb 14 00:47:37.977: INFO: Pod "pod-3ad7e477-d759-4642-8912-88459ae0cce8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08204988s
Feb 14 00:47:39.990: INFO: Pod "pod-3ad7e477-d759-4642-8912-88459ae0cce8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.095062814s
Feb 14 00:47:42.003: INFO: Pod "pod-3ad7e477-d759-4642-8912-88459ae0cce8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.107304089s
STEP: Saw pod success
Feb 14 00:47:42.003: INFO: Pod "pod-3ad7e477-d759-4642-8912-88459ae0cce8" satisfied condition "success or failure"
Feb 14 00:47:42.009: INFO: Trying to get logs from node jerma-node pod pod-3ad7e477-d759-4642-8912-88459ae0cce8 container test-container: 
STEP: delete the pod
Feb 14 00:47:42.229: INFO: Waiting for pod pod-3ad7e477-d759-4642-8912-88459ae0cce8 to disappear
Feb 14 00:47:42.258: INFO: Pod pod-3ad7e477-d759-4642-8912-88459ae0cce8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:47:42.259: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-374" for this suite.

• [SLOW TEST:10.631 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":130,"skipped":2004,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:47:42.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 14 00:47:42.411: INFO: Waiting up to 5m0s for pod "downward-api-472ca341-957a-49d1-9723-741698027170" in namespace "downward-api-8225" to be "success or failure"
Feb 14 00:47:42.418: INFO: Pod "downward-api-472ca341-957a-49d1-9723-741698027170": Phase="Pending", Reason="", readiness=false. Elapsed: 6.570648ms
Feb 14 00:47:44.430: INFO: Pod "downward-api-472ca341-957a-49d1-9723-741698027170": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018294939s
Feb 14 00:47:46.473: INFO: Pod "downward-api-472ca341-957a-49d1-9723-741698027170": Phase="Pending", Reason="", readiness=false. Elapsed: 4.06156032s
Feb 14 00:47:48.534: INFO: Pod "downward-api-472ca341-957a-49d1-9723-741698027170": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122758726s
Feb 14 00:47:50.551: INFO: Pod "downward-api-472ca341-957a-49d1-9723-741698027170": Phase="Pending", Reason="", readiness=false. Elapsed: 8.139652624s
Feb 14 00:47:52.565: INFO: Pod "downward-api-472ca341-957a-49d1-9723-741698027170": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.152853111s
STEP: Saw pod success
Feb 14 00:47:52.565: INFO: Pod "downward-api-472ca341-957a-49d1-9723-741698027170" satisfied condition "success or failure"
Feb 14 00:47:52.576: INFO: Trying to get logs from node jerma-node pod downward-api-472ca341-957a-49d1-9723-741698027170 container dapi-container: 
STEP: delete the pod
Feb 14 00:47:52.750: INFO: Waiting for pod downward-api-472ca341-957a-49d1-9723-741698027170 to disappear
Feb 14 00:47:52.752: INFO: Pod downward-api-472ca341-957a-49d1-9723-741698027170 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:47:52.753: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8225" for this suite.

• [SLOW TEST:10.479 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":280,"completed":131,"skipped":2020,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:47:52.765: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:47:52.980: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-9374" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":280,"completed":132,"skipped":2038,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:47:53.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 14 00:47:53.119: INFO: Waiting up to 5m0s for pod "downward-api-f58efea3-2a3e-4c94-9616-7d4e5264ca83" in namespace "downward-api-7472" to be "success or failure"
Feb 14 00:47:53.192: INFO: Pod "downward-api-f58efea3-2a3e-4c94-9616-7d4e5264ca83": Phase="Pending", Reason="", readiness=false. Elapsed: 72.117052ms
Feb 14 00:47:55.199: INFO: Pod "downward-api-f58efea3-2a3e-4c94-9616-7d4e5264ca83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080050648s
Feb 14 00:47:57.208: INFO: Pod "downward-api-f58efea3-2a3e-4c94-9616-7d4e5264ca83": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088892875s
Feb 14 00:47:59.218: INFO: Pod "downward-api-f58efea3-2a3e-4c94-9616-7d4e5264ca83": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098103501s
Feb 14 00:48:01.236: INFO: Pod "downward-api-f58efea3-2a3e-4c94-9616-7d4e5264ca83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.116714957s
STEP: Saw pod success
Feb 14 00:48:01.236: INFO: Pod "downward-api-f58efea3-2a3e-4c94-9616-7d4e5264ca83" satisfied condition "success or failure"
Feb 14 00:48:01.250: INFO: Trying to get logs from node jerma-node pod downward-api-f58efea3-2a3e-4c94-9616-7d4e5264ca83 container dapi-container: 
STEP: delete the pod
Feb 14 00:48:01.316: INFO: Waiting for pod downward-api-f58efea3-2a3e-4c94-9616-7d4e5264ca83 to disappear
Feb 14 00:48:02.502: INFO: Pod downward-api-f58efea3-2a3e-4c94-9616-7d4e5264ca83 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:48:02.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7472" for this suite.

• [SLOW TEST:9.483 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":280,"completed":133,"skipped":2064,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:48:02.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:48:10.951: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-8281" for this suite.

• [SLOW TEST:8.443 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":280,"completed":134,"skipped":2118,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:48:10.976: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 14 00:48:11.085: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7507add7-84dd-4ac5-a999-d71d8275c50d" in namespace "downward-api-9933" to be "success or failure"
Feb 14 00:48:11.106: INFO: Pod "downwardapi-volume-7507add7-84dd-4ac5-a999-d71d8275c50d": Phase="Pending", Reason="", readiness=false. Elapsed: 21.159479ms
Feb 14 00:48:13.115: INFO: Pod "downwardapi-volume-7507add7-84dd-4ac5-a999-d71d8275c50d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02940806s
Feb 14 00:48:15.153: INFO: Pod "downwardapi-volume-7507add7-84dd-4ac5-a999-d71d8275c50d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068157558s
Feb 14 00:48:17.164: INFO: Pod "downwardapi-volume-7507add7-84dd-4ac5-a999-d71d8275c50d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078322977s
Feb 14 00:48:19.173: INFO: Pod "downwardapi-volume-7507add7-84dd-4ac5-a999-d71d8275c50d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.088025338s
Feb 14 00:48:21.217: INFO: Pod "downwardapi-volume-7507add7-84dd-4ac5-a999-d71d8275c50d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.131874447s
STEP: Saw pod success
Feb 14 00:48:21.217: INFO: Pod "downwardapi-volume-7507add7-84dd-4ac5-a999-d71d8275c50d" satisfied condition "success or failure"
Feb 14 00:48:21.222: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7507add7-84dd-4ac5-a999-d71d8275c50d container client-container: 
STEP: delete the pod
Feb 14 00:48:21.283: INFO: Waiting for pod downwardapi-volume-7507add7-84dd-4ac5-a999-d71d8275c50d to disappear
Feb 14 00:48:21.297: INFO: Pod downwardapi-volume-7507add7-84dd-4ac5-a999-d71d8275c50d no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:48:21.297: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9933" for this suite.

• [SLOW TEST:10.337 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":135,"skipped":2118,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:48:21.314: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-c8f87ba8-a046-4638-bebd-84568fa64d27
STEP: Creating a pod to test consume configMaps
Feb 14 00:48:21.530: INFO: Waiting up to 5m0s for pod "pod-configmaps-66995840-dbc5-465c-b979-69c8c3285adc" in namespace "configmap-6381" to be "success or failure"
Feb 14 00:48:21.558: INFO: Pod "pod-configmaps-66995840-dbc5-465c-b979-69c8c3285adc": Phase="Pending", Reason="", readiness=false. Elapsed: 28.462902ms
Feb 14 00:48:23.589: INFO: Pod "pod-configmaps-66995840-dbc5-465c-b979-69c8c3285adc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058827151s
Feb 14 00:48:25.596: INFO: Pod "pod-configmaps-66995840-dbc5-465c-b979-69c8c3285adc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.066366331s
Feb 14 00:48:27.606: INFO: Pod "pod-configmaps-66995840-dbc5-465c-b979-69c8c3285adc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076446496s
Feb 14 00:48:29.615: INFO: Pod "pod-configmaps-66995840-dbc5-465c-b979-69c8c3285adc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.084811277s
STEP: Saw pod success
Feb 14 00:48:29.615: INFO: Pod "pod-configmaps-66995840-dbc5-465c-b979-69c8c3285adc" satisfied condition "success or failure"
Feb 14 00:48:29.640: INFO: Trying to get logs from node jerma-node pod pod-configmaps-66995840-dbc5-465c-b979-69c8c3285adc container configmap-volume-test: 
STEP: delete the pod
Feb 14 00:48:29.678: INFO: Waiting for pod pod-configmaps-66995840-dbc5-465c-b979-69c8c3285adc to disappear
Feb 14 00:48:29.685: INFO: Pod pod-configmaps-66995840-dbc5-465c-b979-69c8c3285adc no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:48:29.685: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6381" for this suite.

• [SLOW TEST:8.408 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":136,"skipped":2127,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:48:29.724: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:48:40.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8963" for this suite.

• [SLOW TEST:11.246 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":280,"completed":137,"skipped":2132,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:48:40.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-4317, will wait for the garbage collector to delete the pods
Feb 14 00:48:55.141: INFO: Deleting Job.batch foo took: 15.468179ms
Feb 14 00:48:55.442: INFO: Terminating Job.batch foo pods took: 301.236315ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:49:42.468: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-4317" for this suite.

• [SLOW TEST:61.516 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":280,"completed":138,"skipped":2132,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:49:42.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 14 00:49:58.762: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 00:49:58.770: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 00:50:00.771: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 00:50:00.778: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 00:50:02.771: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 00:50:02.779: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 00:50:04.771: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 00:50:04.779: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 00:50:06.771: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 00:50:06.778: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 00:50:08.771: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 00:50:08.780: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 00:50:10.771: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 00:50:10.783: INFO: Pod pod-with-prestop-exec-hook still exists
Feb 14 00:50:12.771: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Feb 14 00:50:12.805: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:50:12.882: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-1610" for this suite.

• [SLOW TEST:30.413 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":280,"completed":139,"skipped":2148,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:50:12.902: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating api versions
Feb 14 00:50:12.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Feb 14 00:50:13.235: INFO: stderr: ""
Feb 14 00:50:13.235: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:50:13.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9976" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":280,"completed":140,"skipped":2178,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:50:13.254: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 14 00:50:13.374: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e68cac05-ebfe-470d-a57f-b31191fbd356" in namespace "projected-947" to be "success or failure"
Feb 14 00:50:13.474: INFO: Pod "downwardapi-volume-e68cac05-ebfe-470d-a57f-b31191fbd356": Phase="Pending", Reason="", readiness=false. Elapsed: 100.490706ms
Feb 14 00:50:15.481: INFO: Pod "downwardapi-volume-e68cac05-ebfe-470d-a57f-b31191fbd356": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107229922s
Feb 14 00:50:17.493: INFO: Pod "downwardapi-volume-e68cac05-ebfe-470d-a57f-b31191fbd356": Phase="Pending", Reason="", readiness=false. Elapsed: 4.118832116s
Feb 14 00:50:19.507: INFO: Pod "downwardapi-volume-e68cac05-ebfe-470d-a57f-b31191fbd356": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132695676s
Feb 14 00:50:21.514: INFO: Pod "downwardapi-volume-e68cac05-ebfe-470d-a57f-b31191fbd356": Phase="Pending", Reason="", readiness=false. Elapsed: 8.13992839s
Feb 14 00:50:23.542: INFO: Pod "downwardapi-volume-e68cac05-ebfe-470d-a57f-b31191fbd356": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.168276461s
STEP: Saw pod success
Feb 14 00:50:23.542: INFO: Pod "downwardapi-volume-e68cac05-ebfe-470d-a57f-b31191fbd356" satisfied condition "success or failure"
Feb 14 00:50:23.560: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e68cac05-ebfe-470d-a57f-b31191fbd356 container client-container: 
STEP: delete the pod
Feb 14 00:50:23.627: INFO: Waiting for pod downwardapi-volume-e68cac05-ebfe-470d-a57f-b31191fbd356 to disappear
Feb 14 00:50:23.734: INFO: Pod downwardapi-volume-e68cac05-ebfe-470d-a57f-b31191fbd356 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:50:23.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-947" for this suite.

• [SLOW TEST:10.496 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":280,"completed":141,"skipped":2202,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:50:23.751: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-0fb4e74a-3416-43c7-8846-e7a1f30f8247
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-0fb4e74a-3416-43c7-8846-e7a1f30f8247
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:51:41.026: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8470" for this suite.

• [SLOW TEST:77.307 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":142,"skipped":2219,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:51:41.059: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 14 00:51:41.194: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 14 00:51:41.214: INFO: Waiting for terminating namespaces to be deleted...
Feb 14 00:51:41.219: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 14 00:51:41.231: INFO: pod-configmaps-ccb6e157-c78d-4ab8-b2ef-b0af14808bc3 from configmap-8470 started at 2020-02-14 00:50:25 +0000 UTC (1 container status recorded)
Feb 14 00:51:41.231: INFO: 	Container configmap-volume-test ready: true, restart count 0
Feb 14 00:51:41.231: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb 14 00:51:41.231: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 00:51:41.231: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 14 00:51:41.231: INFO: 	Container weave ready: true, restart count 1
Feb 14 00:51:41.231: INFO: 	Container weave-npc ready: true, restart count 0
Feb 14 00:51:41.231: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 14 00:51:41.356: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 14 00:51:41.356: INFO: 	Container coredns ready: true, restart count 0
Feb 14 00:51:41.356: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 14 00:51:41.356: INFO: 	Container coredns ready: true, restart count 0
Feb 14 00:51:41.356: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 14 00:51:41.356: INFO: 	Container kube-controller-manager ready: true, restart count 7
Feb 14 00:51:41.356: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb 14 00:51:41.356: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 00:51:41.356: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 14 00:51:41.356: INFO: 	Container weave ready: true, restart count 0
Feb 14 00:51:41.356: INFO: 	Container weave-npc ready: true, restart count 0
Feb 14 00:51:41.356: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 14 00:51:41.356: INFO: 	Container kube-scheduler ready: true, restart count 11
Feb 14 00:51:41.356: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 14 00:51:41.356: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 14 00:51:41.356: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 14 00:51:41.356: INFO: 	Container etcd ready: true, restart count 1
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-b2f929af-f0b2-40d2-8b62-805c517dedf9 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-b2f929af-f0b2-40d2-8b62-805c517dedf9 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-b2f929af-f0b2-40d2-8b62-805c517dedf9
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:52:01.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7952" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:20.683 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":280,"completed":143,"skipped":2232,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:52:01.743: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:52:06.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-851" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":280,"completed":144,"skipped":2245,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:52:06.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Feb 14 00:52:26.875: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9962 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 00:52:26.875: INFO: >>> kubeConfig: /root/.kube/config
I0214 00:52:26.933612       9 log.go:172] (0xc002edc2c0) (0xc002ed2280) Create stream
I0214 00:52:26.933738       9 log.go:172] (0xc002edc2c0) (0xc002ed2280) Stream added, broadcasting: 1
I0214 00:52:26.938608       9 log.go:172] (0xc002edc2c0) Reply frame received for 1
I0214 00:52:26.938709       9 log.go:172] (0xc002edc2c0) (0xc002973720) Create stream
I0214 00:52:26.938734       9 log.go:172] (0xc002edc2c0) (0xc002973720) Stream added, broadcasting: 3
I0214 00:52:26.941881       9 log.go:172] (0xc002edc2c0) Reply frame received for 3
I0214 00:52:26.942065       9 log.go:172] (0xc002edc2c0) (0xc002b10a00) Create stream
I0214 00:52:26.942097       9 log.go:172] (0xc002edc2c0) (0xc002b10a00) Stream added, broadcasting: 5
I0214 00:52:26.944544       9 log.go:172] (0xc002edc2c0) Reply frame received for 5
I0214 00:52:27.035277       9 log.go:172] (0xc002edc2c0) Data frame received for 3
I0214 00:52:27.035484       9 log.go:172] (0xc002973720) (3) Data frame handling
I0214 00:52:27.035518       9 log.go:172] (0xc002973720) (3) Data frame sent
I0214 00:52:27.117807       9 log.go:172] (0xc002edc2c0) Data frame received for 1
I0214 00:52:27.118204       9 log.go:172] (0xc002edc2c0) (0xc002973720) Stream removed, broadcasting: 3
I0214 00:52:27.118669       9 log.go:172] (0xc002ed2280) (1) Data frame handling
I0214 00:52:27.118720       9 log.go:172] (0xc002ed2280) (1) Data frame sent
I0214 00:52:27.118744       9 log.go:172] (0xc002edc2c0) (0xc002b10a00) Stream removed, broadcasting: 5
I0214 00:52:27.118862       9 log.go:172] (0xc002edc2c0) (0xc002ed2280) Stream removed, broadcasting: 1
I0214 00:52:27.118904       9 log.go:172] (0xc002edc2c0) Go away received
I0214 00:52:27.119524       9 log.go:172] (0xc002edc2c0) (0xc002ed2280) Stream removed, broadcasting: 1
I0214 00:52:27.119563       9 log.go:172] (0xc002edc2c0) (0xc002973720) Stream removed, broadcasting: 3
I0214 00:52:27.119586       9 log.go:172] (0xc002edc2c0) (0xc002b10a00) Stream removed, broadcasting: 5
Feb 14 00:52:27.119: INFO: Exec stderr: ""
Feb 14 00:52:27.119: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9962 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 00:52:27.120: INFO: >>> kubeConfig: /root/.kube/config
I0214 00:52:27.177745       9 log.go:172] (0xc002d9e000) (0xc002b10be0) Create stream
I0214 00:52:27.177869       9 log.go:172] (0xc002d9e000) (0xc002b10be0) Stream added, broadcasting: 1
I0214 00:52:27.184304       9 log.go:172] (0xc002d9e000) Reply frame received for 1
I0214 00:52:27.184502       9 log.go:172] (0xc002d9e000) (0xc0029737c0) Create stream
I0214 00:52:27.184545       9 log.go:172] (0xc002d9e000) (0xc0029737c0) Stream added, broadcasting: 3
I0214 00:52:27.188646       9 log.go:172] (0xc002d9e000) Reply frame received for 3
I0214 00:52:27.188694       9 log.go:172] (0xc002d9e000) (0xc002b10d20) Create stream
I0214 00:52:27.188709       9 log.go:172] (0xc002d9e000) (0xc002b10d20) Stream added, broadcasting: 5
I0214 00:52:27.189996       9 log.go:172] (0xc002d9e000) Reply frame received for 5
I0214 00:52:27.258355       9 log.go:172] (0xc002d9e000) Data frame received for 3
I0214 00:52:27.258496       9 log.go:172] (0xc0029737c0) (3) Data frame handling
I0214 00:52:27.258616       9 log.go:172] (0xc0029737c0) (3) Data frame sent
I0214 00:52:27.336747       9 log.go:172] (0xc002d9e000) Data frame received for 1
I0214 00:52:27.336973       9 log.go:172] (0xc002d9e000) (0xc0029737c0) Stream removed, broadcasting: 3
I0214 00:52:27.337648       9 log.go:172] (0xc002b10be0) (1) Data frame handling
I0214 00:52:27.337829       9 log.go:172] (0xc002b10be0) (1) Data frame sent
I0214 00:52:27.337964       9 log.go:172] (0xc002d9e000) (0xc002b10d20) Stream removed, broadcasting: 5
I0214 00:52:27.338097       9 log.go:172] (0xc002d9e000) (0xc002b10be0) Stream removed, broadcasting: 1
I0214 00:52:27.338534       9 log.go:172] (0xc002d9e000) Go away received
I0214 00:52:27.339261       9 log.go:172] (0xc002d9e000) (0xc002b10be0) Stream removed, broadcasting: 1
I0214 00:52:27.339350       9 log.go:172] (0xc002d9e000) (0xc0029737c0) Stream removed, broadcasting: 3
I0214 00:52:27.339391       9 log.go:172] (0xc002d9e000) (0xc002b10d20) Stream removed, broadcasting: 5
Feb 14 00:52:27.339: INFO: Exec stderr: ""
Feb 14 00:52:27.339: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9962 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 00:52:27.339: INFO: >>> kubeConfig: /root/.kube/config
I0214 00:52:27.404746       9 log.go:172] (0xc002edc9a0) (0xc002ed2460) Create stream
I0214 00:52:27.404870       9 log.go:172] (0xc002edc9a0) (0xc002ed2460) Stream added, broadcasting: 1
I0214 00:52:27.411470       9 log.go:172] (0xc002edc9a0) Reply frame received for 1
I0214 00:52:27.411576       9 log.go:172] (0xc002edc9a0) (0xc002b10dc0) Create stream
I0214 00:52:27.411599       9 log.go:172] (0xc002edc9a0) (0xc002b10dc0) Stream added, broadcasting: 3
I0214 00:52:27.412910       9 log.go:172] (0xc002edc9a0) Reply frame received for 3
I0214 00:52:27.412940       9 log.go:172] (0xc002edc9a0) (0xc002432e60) Create stream
I0214 00:52:27.412951       9 log.go:172] (0xc002edc9a0) (0xc002432e60) Stream added, broadcasting: 5
I0214 00:52:27.415100       9 log.go:172] (0xc002edc9a0) Reply frame received for 5
I0214 00:52:27.481762       9 log.go:172] (0xc002edc9a0) Data frame received for 3
I0214 00:52:27.481836       9 log.go:172] (0xc002b10dc0) (3) Data frame handling
I0214 00:52:27.481878       9 log.go:172] (0xc002b10dc0) (3) Data frame sent
I0214 00:52:27.553838       9 log.go:172] (0xc002edc9a0) Data frame received for 1
I0214 00:52:27.553954       9 log.go:172] (0xc002ed2460) (1) Data frame handling
I0214 00:52:27.554051       9 log.go:172] (0xc002ed2460) (1) Data frame sent
I0214 00:52:27.554082       9 log.go:172] (0xc002edc9a0) (0xc002ed2460) Stream removed, broadcasting: 1
I0214 00:52:27.554414       9 log.go:172] (0xc002edc9a0) (0xc002432e60) Stream removed, broadcasting: 5
I0214 00:52:27.554465       9 log.go:172] (0xc002edc9a0) (0xc002b10dc0) Stream removed, broadcasting: 3
I0214 00:52:27.554518       9 log.go:172] (0xc002edc9a0) (0xc002ed2460) Stream removed, broadcasting: 1
I0214 00:52:27.554538       9 log.go:172] (0xc002edc9a0) (0xc002b10dc0) Stream removed, broadcasting: 3
I0214 00:52:27.554596       9 log.go:172] (0xc002edc9a0) (0xc002432e60) Stream removed, broadcasting: 5
I0214 00:52:27.555100       9 log.go:172] (0xc002edc9a0) Go away received
Feb 14 00:52:27.555: INFO: Exec stderr: ""
Feb 14 00:52:27.555: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9962 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 00:52:27.555: INFO: >>> kubeConfig: /root/.kube/config
I0214 00:52:27.601073       9 log.go:172] (0xc002b85970) (0xc002433040) Create stream
I0214 00:52:27.601219       9 log.go:172] (0xc002b85970) (0xc002433040) Stream added, broadcasting: 1
I0214 00:52:27.605697       9 log.go:172] (0xc002b85970) Reply frame received for 1
I0214 00:52:27.605735       9 log.go:172] (0xc002b85970) (0xc002ed2500) Create stream
I0214 00:52:27.605746       9 log.go:172] (0xc002b85970) (0xc002ed2500) Stream added, broadcasting: 3
I0214 00:52:27.606698       9 log.go:172] (0xc002b85970) Reply frame received for 3
I0214 00:52:27.606724       9 log.go:172] (0xc002b85970) (0xc002973900) Create stream
I0214 00:52:27.606734       9 log.go:172] (0xc002b85970) (0xc002973900) Stream added, broadcasting: 5
I0214 00:52:27.608557       9 log.go:172] (0xc002b85970) Reply frame received for 5
I0214 00:52:27.656787       9 log.go:172] (0xc002b85970) Data frame received for 3
I0214 00:52:27.656856       9 log.go:172] (0xc002ed2500) (3) Data frame handling
I0214 00:52:27.656887       9 log.go:172] (0xc002ed2500) (3) Data frame sent
I0214 00:52:27.724109       9 log.go:172] (0xc002b85970) Data frame received for 1
I0214 00:52:27.724168       9 log.go:172] (0xc002433040) (1) Data frame handling
I0214 00:52:27.724218       9 log.go:172] (0xc002433040) (1) Data frame sent
I0214 00:52:27.724247       9 log.go:172] (0xc002b85970) (0xc002433040) Stream removed, broadcasting: 1
I0214 00:52:27.724577       9 log.go:172] (0xc002b85970) (0xc002ed2500) Stream removed, broadcasting: 3
I0214 00:52:27.724674       9 log.go:172] (0xc002b85970) (0xc002973900) Stream removed, broadcasting: 5
I0214 00:52:27.724791       9 log.go:172] (0xc002b85970) Go away received
I0214 00:52:27.724869       9 log.go:172] (0xc002b85970) (0xc002433040) Stream removed, broadcasting: 1
I0214 00:52:27.724934       9 log.go:172] (0xc002b85970) (0xc002ed2500) Stream removed, broadcasting: 3
I0214 00:52:27.724960       9 log.go:172] (0xc002b85970) (0xc002973900) Stream removed, broadcasting: 5
Feb 14 00:52:27.724: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Feb 14 00:52:27.725: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9962 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 00:52:27.725: INFO: >>> kubeConfig: /root/.kube/config
I0214 00:52:27.764438       9 log.go:172] (0xc002eba2c0) (0xc001f0a280) Create stream
I0214 00:52:27.764524       9 log.go:172] (0xc002eba2c0) (0xc001f0a280) Stream added, broadcasting: 1
I0214 00:52:27.773309       9 log.go:172] (0xc002eba2c0) Reply frame received for 1
I0214 00:52:27.773342       9 log.go:172] (0xc002eba2c0) (0xc0024330e0) Create stream
I0214 00:52:27.773353       9 log.go:172] (0xc002eba2c0) (0xc0024330e0) Stream added, broadcasting: 3
I0214 00:52:27.775891       9 log.go:172] (0xc002eba2c0) Reply frame received for 3
I0214 00:52:27.775923       9 log.go:172] (0xc002eba2c0) (0xc002b10e60) Create stream
I0214 00:52:27.775935       9 log.go:172] (0xc002eba2c0) (0xc002b10e60) Stream added, broadcasting: 5
I0214 00:52:27.777342       9 log.go:172] (0xc002eba2c0) Reply frame received for 5
I0214 00:52:27.880941       9 log.go:172] (0xc002eba2c0) Data frame received for 3
I0214 00:52:27.881119       9 log.go:172] (0xc0024330e0) (3) Data frame handling
I0214 00:52:27.881173       9 log.go:172] (0xc0024330e0) (3) Data frame sent
I0214 00:52:27.992288       9 log.go:172] (0xc002eba2c0) (0xc002b10e60) Stream removed, broadcasting: 5
I0214 00:52:27.992582       9 log.go:172] (0xc002eba2c0) Data frame received for 1
I0214 00:52:27.992644       9 log.go:172] (0xc002eba2c0) (0xc0024330e0) Stream removed, broadcasting: 3
I0214 00:52:27.992714       9 log.go:172] (0xc001f0a280) (1) Data frame handling
I0214 00:52:27.992752       9 log.go:172] (0xc001f0a280) (1) Data frame sent
I0214 00:52:27.992772       9 log.go:172] (0xc002eba2c0) (0xc001f0a280) Stream removed, broadcasting: 1
I0214 00:52:27.992798       9 log.go:172] (0xc002eba2c0) Go away received
I0214 00:52:27.993414       9 log.go:172] (0xc002eba2c0) (0xc001f0a280) Stream removed, broadcasting: 1
I0214 00:52:27.993490       9 log.go:172] (0xc002eba2c0) (0xc0024330e0) Stream removed, broadcasting: 3
I0214 00:52:27.993522       9 log.go:172] (0xc002eba2c0) (0xc002b10e60) Stream removed, broadcasting: 5
Feb 14 00:52:27.993: INFO: Exec stderr: ""
Feb 14 00:52:27.993: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9962 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 00:52:27.993: INFO: >>> kubeConfig: /root/.kube/config
I0214 00:52:28.029801       9 log.go:172] (0xc0030e4370) (0xc002973b80) Create stream
I0214 00:52:28.029869       9 log.go:172] (0xc0030e4370) (0xc002973b80) Stream added, broadcasting: 1
I0214 00:52:28.033612       9 log.go:172] (0xc0030e4370) Reply frame received for 1
I0214 00:52:28.033652       9 log.go:172] (0xc0030e4370) (0xc002433180) Create stream
I0214 00:52:28.033666       9 log.go:172] (0xc0030e4370) (0xc002433180) Stream added, broadcasting: 3
I0214 00:52:28.036638       9 log.go:172] (0xc0030e4370) Reply frame received for 3
I0214 00:52:28.037048       9 log.go:172] (0xc0030e4370) (0xc002ed2640) Create stream
I0214 00:52:28.037071       9 log.go:172] (0xc0030e4370) (0xc002ed2640) Stream added, broadcasting: 5
I0214 00:52:28.039625       9 log.go:172] (0xc0030e4370) Reply frame received for 5
I0214 00:52:28.134043       9 log.go:172] (0xc0030e4370) Data frame received for 3
I0214 00:52:28.134251       9 log.go:172] (0xc002433180) (3) Data frame handling
I0214 00:52:28.134675       9 log.go:172] (0xc002433180) (3) Data frame sent
I0214 00:52:28.213166       9 log.go:172] (0xc0030e4370) Data frame received for 1
I0214 00:52:28.213402       9 log.go:172] (0xc0030e4370) (0xc002433180) Stream removed, broadcasting: 3
I0214 00:52:28.213466       9 log.go:172] (0xc002973b80) (1) Data frame handling
I0214 00:52:28.213511       9 log.go:172] (0xc002973b80) (1) Data frame sent
I0214 00:52:28.213579       9 log.go:172] (0xc0030e4370) (0xc002ed2640) Stream removed, broadcasting: 5
I0214 00:52:28.213741       9 log.go:172] (0xc0030e4370) (0xc002973b80) Stream removed, broadcasting: 1
I0214 00:52:28.213819       9 log.go:172] (0xc0030e4370) Go away received
I0214 00:52:28.214180       9 log.go:172] (0xc0030e4370) (0xc002973b80) Stream removed, broadcasting: 1
I0214 00:52:28.214208       9 log.go:172] (0xc0030e4370) (0xc002433180) Stream removed, broadcasting: 3
I0214 00:52:28.214226       9 log.go:172] (0xc0030e4370) (0xc002ed2640) Stream removed, broadcasting: 5
Feb 14 00:52:28.214: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Feb 14 00:52:28.214: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9962 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 00:52:28.214: INFO: >>> kubeConfig: /root/.kube/config
I0214 00:52:28.253973       9 log.go:172] (0xc0030e49a0) (0xc002973d60) Create stream
I0214 00:52:28.254067       9 log.go:172] (0xc0030e49a0) (0xc002973d60) Stream added, broadcasting: 1
I0214 00:52:28.258424       9 log.go:172] (0xc0030e49a0) Reply frame received for 1
I0214 00:52:28.258467       9 log.go:172] (0xc0030e49a0) (0xc002ed26e0) Create stream
I0214 00:52:28.258483       9 log.go:172] (0xc0030e49a0) (0xc002ed26e0) Stream added, broadcasting: 3
I0214 00:52:28.260164       9 log.go:172] (0xc0030e49a0) Reply frame received for 3
I0214 00:52:28.260224       9 log.go:172] (0xc0030e49a0) (0xc002433220) Create stream
I0214 00:52:28.260238       9 log.go:172] (0xc0030e49a0) (0xc002433220) Stream added, broadcasting: 5
I0214 00:52:28.262155       9 log.go:172] (0xc0030e49a0) Reply frame received for 5
I0214 00:52:28.340583       9 log.go:172] (0xc0030e49a0) Data frame received for 3
I0214 00:52:28.340730       9 log.go:172] (0xc002ed26e0) (3) Data frame handling
I0214 00:52:28.340852       9 log.go:172] (0xc002ed26e0) (3) Data frame sent
I0214 00:52:28.443412       9 log.go:172] (0xc0030e49a0) Data frame received for 1
I0214 00:52:28.443606       9 log.go:172] (0xc002973d60) (1) Data frame handling
I0214 00:52:28.443832       9 log.go:172] (0xc002973d60) (1) Data frame sent
I0214 00:52:28.443899       9 log.go:172] (0xc0030e49a0) (0xc002973d60) Stream removed, broadcasting: 1
I0214 00:52:28.445029       9 log.go:172] (0xc0030e49a0) (0xc002ed26e0) Stream removed, broadcasting: 3
I0214 00:52:28.445128       9 log.go:172] (0xc0030e49a0) (0xc002433220) Stream removed, broadcasting: 5
I0214 00:52:28.445251       9 log.go:172] (0xc0030e49a0) Go away received
I0214 00:52:28.445505       9 log.go:172] (0xc0030e49a0) (0xc002973d60) Stream removed, broadcasting: 1
I0214 00:52:28.445985       9 log.go:172] (0xc0030e49a0) (0xc002ed26e0) Stream removed, broadcasting: 3
I0214 00:52:28.446032       9 log.go:172] (0xc0030e49a0) (0xc002433220) Stream removed, broadcasting: 5
Feb 14 00:52:28.446: INFO: Exec stderr: ""
Feb 14 00:52:28.449: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9962 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 00:52:28.449: INFO: >>> kubeConfig: /root/.kube/config
I0214 00:52:28.522320       9 log.go:172] (0xc002eba840) (0xc001f0a6e0) Create stream
I0214 00:52:28.522723       9 log.go:172] (0xc002eba840) (0xc001f0a6e0) Stream added, broadcasting: 1
I0214 00:52:28.529507       9 log.go:172] (0xc002eba840) Reply frame received for 1
I0214 00:52:28.529608       9 log.go:172] (0xc002eba840) (0xc002973e00) Create stream
I0214 00:52:28.529621       9 log.go:172] (0xc002eba840) (0xc002973e00) Stream added, broadcasting: 3
I0214 00:52:28.533282       9 log.go:172] (0xc002eba840) Reply frame received for 3
I0214 00:52:28.533306       9 log.go:172] (0xc002eba840) (0xc001f0a820) Create stream
I0214 00:52:28.533317       9 log.go:172] (0xc002eba840) (0xc001f0a820) Stream added, broadcasting: 5
I0214 00:52:28.534498       9 log.go:172] (0xc002eba840) Reply frame received for 5
I0214 00:52:28.661515       9 log.go:172] (0xc002eba840) Data frame received for 3
I0214 00:52:28.661703       9 log.go:172] (0xc002973e00) (3) Data frame handling
I0214 00:52:28.661761       9 log.go:172] (0xc002973e00) (3) Data frame sent
I0214 00:52:28.740680       9 log.go:172] (0xc002eba840) Data frame received for 1
I0214 00:52:28.740724       9 log.go:172] (0xc001f0a6e0) (1) Data frame handling
I0214 00:52:28.740738       9 log.go:172] (0xc001f0a6e0) (1) Data frame sent
I0214 00:52:28.740752       9 log.go:172] (0xc002eba840) (0xc001f0a6e0) Stream removed, broadcasting: 1
I0214 00:52:28.741165       9 log.go:172] (0xc002eba840) (0xc002973e00) Stream removed, broadcasting: 3
I0214 00:52:28.741463       9 log.go:172] (0xc002eba840) (0xc001f0a820) Stream removed, broadcasting: 5
I0214 00:52:28.741512       9 log.go:172] (0xc002eba840) (0xc001f0a6e0) Stream removed, broadcasting: 1
I0214 00:52:28.741522       9 log.go:172] (0xc002eba840) (0xc002973e00) Stream removed, broadcasting: 3
I0214 00:52:28.741547       9 log.go:172] (0xc002eba840) (0xc001f0a820) Stream removed, broadcasting: 5
Feb 14 00:52:28.741: INFO: Exec stderr: ""
Feb 14 00:52:28.741: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-9962 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 00:52:28.742: INFO: >>> kubeConfig: /root/.kube/config
I0214 00:52:28.785048       9 log.go:172] (0xc0032ca000) (0xc002433400) Create stream
I0214 00:52:28.785331       9 log.go:172] (0xc0032ca000) (0xc002433400) Stream added, broadcasting: 1
I0214 00:52:28.788698       9 log.go:172] (0xc0032ca000) Reply frame received for 1
I0214 00:52:28.788912       9 log.go:172] (0xc0032ca000) (0xc002433540) Create stream
I0214 00:52:28.788953       9 log.go:172] (0xc0032ca000) (0xc002433540) Stream added, broadcasting: 3
I0214 00:52:28.791479       9 log.go:172] (0xc0032ca000) Reply frame received for 3
I0214 00:52:28.791514       9 log.go:172] (0xc0032ca000) (0xc002433680) Create stream
I0214 00:52:28.791526       9 log.go:172] (0xc0032ca000) (0xc002433680) Stream added, broadcasting: 5
I0214 00:52:28.794367       9 log.go:172] (0xc0032ca000) Reply frame received for 5
I0214 00:52:28.881021       9 log.go:172] (0xc0032ca000) Data frame received for 3
I0214 00:52:28.881151       9 log.go:172] (0xc002433540) (3) Data frame handling
I0214 00:52:28.881183       9 log.go:172] (0xc002433540) (3) Data frame sent
I0214 00:52:28.958306       9 log.go:172] (0xc0032ca000) Data frame received for 1
I0214 00:52:28.958491       9 log.go:172] (0xc0032ca000) (0xc002433540) Stream removed, broadcasting: 3
I0214 00:52:28.958882       9 log.go:172] (0xc002433400) (1) Data frame handling
I0214 00:52:28.959100       9 log.go:172] (0xc002433400) (1) Data frame sent
I0214 00:52:28.959231       9 log.go:172] (0xc0032ca000) (0xc002433680) Stream removed, broadcasting: 5
I0214 00:52:28.959406       9 log.go:172] (0xc0032ca000) (0xc002433400) Stream removed, broadcasting: 1
I0214 00:52:28.959446       9 log.go:172] (0xc0032ca000) Go away received
I0214 00:52:28.959877       9 log.go:172] (0xc0032ca000) (0xc002433400) Stream removed, broadcasting: 1
I0214 00:52:28.959968       9 log.go:172] (0xc0032ca000) (0xc002433540) Stream removed, broadcasting: 3
I0214 00:52:28.960060       9 log.go:172] (0xc0032ca000) (0xc002433680) Stream removed, broadcasting: 5
Feb 14 00:52:28.960: INFO: Exec stderr: ""
Feb 14 00:52:28.960: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-9962 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 00:52:28.960: INFO: >>> kubeConfig: /root/.kube/config
I0214 00:52:29.000001       9 log.go:172] (0xc002edd290) (0xc002ed2960) Create stream
I0214 00:52:29.000085       9 log.go:172] (0xc002edd290) (0xc002ed2960) Stream added, broadcasting: 1
I0214 00:52:29.004031       9 log.go:172] (0xc002edd290) Reply frame received for 1
I0214 00:52:29.004092       9 log.go:172] (0xc002edd290) (0xc001f0a960) Create stream
I0214 00:52:29.004112       9 log.go:172] (0xc002edd290) (0xc001f0a960) Stream added, broadcasting: 3
I0214 00:52:29.005672       9 log.go:172] (0xc002edd290) Reply frame received for 3
I0214 00:52:29.005765       9 log.go:172] (0xc002edd290) (0xc002433720) Create stream
I0214 00:52:29.005839       9 log.go:172] (0xc002edd290) (0xc002433720) Stream added, broadcasting: 5
I0214 00:52:29.007301       9 log.go:172] (0xc002edd290) Reply frame received for 5
I0214 00:52:29.073005       9 log.go:172] (0xc002edd290) Data frame received for 3
I0214 00:52:29.073256       9 log.go:172] (0xc001f0a960) (3) Data frame handling
I0214 00:52:29.073298       9 log.go:172] (0xc001f0a960) (3) Data frame sent
I0214 00:52:29.147517       9 log.go:172] (0xc002edd290) Data frame received for 1
I0214 00:52:29.147742       9 log.go:172] (0xc002edd290) (0xc002433720) Stream removed, broadcasting: 5
I0214 00:52:29.147838       9 log.go:172] (0xc002ed2960) (1) Data frame handling
I0214 00:52:29.147869       9 log.go:172] (0xc002ed2960) (1) Data frame sent
I0214 00:52:29.148120       9 log.go:172] (0xc002edd290) (0xc001f0a960) Stream removed, broadcasting: 3
I0214 00:52:29.148162       9 log.go:172] (0xc002edd290) (0xc002ed2960) Stream removed, broadcasting: 1
I0214 00:52:29.148174       9 log.go:172] (0xc002edd290) Go away received
I0214 00:52:29.148757       9 log.go:172] (0xc002edd290) (0xc002ed2960) Stream removed, broadcasting: 1
I0214 00:52:29.148772       9 log.go:172] (0xc002edd290) (0xc001f0a960) Stream removed, broadcasting: 3
I0214 00:52:29.148776       9 log.go:172] (0xc002edd290) (0xc002433720) Stream removed, broadcasting: 5
Feb 14 00:52:29.148: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:52:29.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-9962" for this suite.

• [SLOW TEST:22.604 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":145,"skipped":2256,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:52:29.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 00:52:29.311: INFO: The status of Pod test-webserver-cea0f5b6-a869-4638-be81-ea1aa74d7cca is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:52:31.419: INFO: The status of Pod test-webserver-cea0f5b6-a869-4638-be81-ea1aa74d7cca is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:52:33.382: INFO: The status of Pod test-webserver-cea0f5b6-a869-4638-be81-ea1aa74d7cca is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:52:35.392: INFO: The status of Pod test-webserver-cea0f5b6-a869-4638-be81-ea1aa74d7cca is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:52:37.321: INFO: The status of Pod test-webserver-cea0f5b6-a869-4638-be81-ea1aa74d7cca is Running (Ready = false)
Feb 14 00:52:39.326: INFO: The status of Pod test-webserver-cea0f5b6-a869-4638-be81-ea1aa74d7cca is Running (Ready = false)
Feb 14 00:52:41.319: INFO: The status of Pod test-webserver-cea0f5b6-a869-4638-be81-ea1aa74d7cca is Running (Ready = false)
Feb 14 00:52:43.320: INFO: The status of Pod test-webserver-cea0f5b6-a869-4638-be81-ea1aa74d7cca is Running (Ready = false)
Feb 14 00:52:45.321: INFO: The status of Pod test-webserver-cea0f5b6-a869-4638-be81-ea1aa74d7cca is Running (Ready = false)
Feb 14 00:52:47.320: INFO: The status of Pod test-webserver-cea0f5b6-a869-4638-be81-ea1aa74d7cca is Running (Ready = false)
Feb 14 00:52:49.320: INFO: The status of Pod test-webserver-cea0f5b6-a869-4638-be81-ea1aa74d7cca is Running (Ready = false)
Feb 14 00:52:51.318: INFO: The status of Pod test-webserver-cea0f5b6-a869-4638-be81-ea1aa74d7cca is Running (Ready = true)
Feb 14 00:52:51.325: INFO: Container started at 2020-02-14 00:52:35 +0000 UTC, pod became ready at 2020-02-14 00:52:50 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:52:51.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6993" for this suite.

• [SLOW TEST:22.174 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":280,"completed":146,"skipped":2272,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:52:51.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 14 00:52:52.116: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 14 00:52:54.132: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:52:56.142: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:52:58.605: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:53:00.137: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:53:03.938: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:53:04.241: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238372, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 14 00:53:07.200: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 00:53:07.208: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:53:08.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2400" for this suite.
STEP: Destroying namespace "webhook-2400-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:17.767 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":280,"completed":147,"skipped":2284,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:53:09.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 14 00:53:09.210: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:53:26.063: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1803" for this suite.

• [SLOW TEST:17.024 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":280,"completed":148,"skipped":2299,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:53:26.136: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 00:53:26.224: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 19.362421ms)
Feb 14 00:53:26.227: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.650421ms)
Feb 14 00:53:26.231: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.109129ms)
Feb 14 00:53:26.234: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.345162ms)
Feb 14 00:53:26.262: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 28.398073ms)
Feb 14 00:53:26.272: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 9.338256ms)
Feb 14 00:53:26.291: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 18.839175ms)
Feb 14 00:53:26.303: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 12.025619ms)
Feb 14 00:53:26.326: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 23.103903ms)
Feb 14 00:53:26.337: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 10.187121ms)
Feb 14 00:53:26.349: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.92392ms)
Feb 14 00:53:26.354: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.317578ms)
Feb 14 00:53:26.393: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 39.207232ms)
Feb 14 00:53:26.399: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.734611ms)
Feb 14 00:53:26.404: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.014655ms)
Feb 14 00:53:26.409: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.010214ms)
Feb 14 00:53:26.414: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.414785ms)
Feb 14 00:53:26.419: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.732024ms)
Feb 14 00:53:26.422: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.416575ms)
Feb 14 00:53:26.426: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.712324ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:53:26.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1751" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":280,"completed":149,"skipped":2306,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:53:26.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 00:53:26.615: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:53:27.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7228" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":280,"completed":150,"skipped":2308,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:53:28.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-2b20f362-1328-46b2-9097-af4fd9f334b6
STEP: Creating a pod to test consume configMaps
Feb 14 00:53:28.217: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bcf910a8-97e2-4c69-b9e1-4c9f41cf071f" in namespace "projected-601" to be "success or failure"
Feb 14 00:53:28.246: INFO: Pod "pod-projected-configmaps-bcf910a8-97e2-4c69-b9e1-4c9f41cf071f": Phase="Pending", Reason="", readiness=false. Elapsed: 28.369084ms
Feb 14 00:53:30.258: INFO: Pod "pod-projected-configmaps-bcf910a8-97e2-4c69-b9e1-4c9f41cf071f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040195929s
Feb 14 00:53:32.266: INFO: Pod "pod-projected-configmaps-bcf910a8-97e2-4c69-b9e1-4c9f41cf071f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048798816s
Feb 14 00:53:34.321: INFO: Pod "pod-projected-configmaps-bcf910a8-97e2-4c69-b9e1-4c9f41cf071f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.103118672s
Feb 14 00:53:36.330: INFO: Pod "pod-projected-configmaps-bcf910a8-97e2-4c69-b9e1-4c9f41cf071f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.112165685s
Feb 14 00:53:38.342: INFO: Pod "pod-projected-configmaps-bcf910a8-97e2-4c69-b9e1-4c9f41cf071f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.123938115s
STEP: Saw pod success
Feb 14 00:53:38.342: INFO: Pod "pod-projected-configmaps-bcf910a8-97e2-4c69-b9e1-4c9f41cf071f" satisfied condition "success or failure"
Feb 14 00:53:38.346: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-bcf910a8-97e2-4c69-b9e1-4c9f41cf071f container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 00:53:38.502: INFO: Waiting for pod pod-projected-configmaps-bcf910a8-97e2-4c69-b9e1-4c9f41cf071f to disappear
Feb 14 00:53:38.515: INFO: Pod pod-projected-configmaps-bcf910a8-97e2-4c69-b9e1-4c9f41cf071f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:53:38.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-601" for this suite.

• [SLOW TEST:10.462 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":280,"completed":151,"skipped":2324,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:53:38.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-482f854f-2f88-49ec-beee-7d51b54fc089 in namespace container-probe-54
Feb 14 00:53:47.193: INFO: Started pod busybox-482f854f-2f88-49ec-beee-7d51b54fc089 in namespace container-probe-54
STEP: checking the pod's current state and verifying that restartCount is present
Feb 14 00:53:47.199: INFO: Initial restart count of pod busybox-482f854f-2f88-49ec-beee-7d51b54fc089 is 0
Feb 14 00:54:39.976: INFO: Restart count of pod container-probe-54/busybox-482f854f-2f88-49ec-beee-7d51b54fc089 is now 1 (52.776255207s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:54:39.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-54" for this suite.

• [SLOW TEST:61.480 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":280,"completed":152,"skipped":2345,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:54:40.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 14 00:54:40.182: INFO: Waiting up to 5m0s for pod "pod-b6a841f9-d821-4353-b118-63082466fe1e" in namespace "emptydir-3704" to be "success or failure"
Feb 14 00:54:40.190: INFO: Pod "pod-b6a841f9-d821-4353-b118-63082466fe1e": Phase="Pending", Reason="", readiness=false. Elapsed: 7.894385ms
Feb 14 00:54:42.229: INFO: Pod "pod-b6a841f9-d821-4353-b118-63082466fe1e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046722179s
Feb 14 00:54:44.238: INFO: Pod "pod-b6a841f9-d821-4353-b118-63082466fe1e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055879073s
Feb 14 00:54:46.247: INFO: Pod "pod-b6a841f9-d821-4353-b118-63082466fe1e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065524283s
Feb 14 00:54:48.255: INFO: Pod "pod-b6a841f9-d821-4353-b118-63082466fe1e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073502699s
STEP: Saw pod success
Feb 14 00:54:48.256: INFO: Pod "pod-b6a841f9-d821-4353-b118-63082466fe1e" satisfied condition "success or failure"
Feb 14 00:54:48.259: INFO: Trying to get logs from node jerma-node pod pod-b6a841f9-d821-4353-b118-63082466fe1e container test-container: 
STEP: delete the pod
Feb 14 00:54:48.399: INFO: Waiting for pod pod-b6a841f9-d821-4353-b118-63082466fe1e to disappear
Feb 14 00:54:48.410: INFO: Pod pod-b6a841f9-d821-4353-b118-63082466fe1e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:54:48.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3704" for this suite.

• [SLOW TEST:8.427 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":153,"skipped":2371,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:54:48.443: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:75
Feb 14 00:54:48.581: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the sample API server.
Feb 14 00:54:49.373: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Feb 14 00:54:51.749: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238489, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238489, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238489, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238489, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:54:53.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238489, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238489, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238489, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238489, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:54:55.785: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238489, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238489, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238489, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238489, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:54:57.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238489, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238489, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238489, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238489, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 00:55:00.797: INFO: Waited 1.03136776s for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:66
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:55:01.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3836" for this suite.

• [SLOW TEST:13.570 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":280,"completed":154,"skipped":2378,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:55:02.015: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 14 00:55:13.026: INFO: Successfully updated pod "pod-update-588dffe9-200e-4aa0-9613-72285a1f4fdd"
STEP: verifying the updated pod is in kubernetes
Feb 14 00:55:13.050: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:55:13.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6012" for this suite.

• [SLOW TEST:11.063 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":280,"completed":155,"skipped":2379,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:55:13.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Feb 14 00:55:14.428: INFO: Pod name wrapped-volume-race-cd024268-3e4e-4448-a314-a1c77482b272: Found 0 pods out of 5
Feb 14 00:55:19.464: INFO: Pod name wrapped-volume-race-cd024268-3e4e-4448-a314-a1c77482b272: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-cd024268-3e4e-4448-a314-a1c77482b272 in namespace emptydir-wrapper-1182, will wait for the garbage collector to delete the pods
Feb 14 00:55:47.624: INFO: Deleting ReplicationController wrapped-volume-race-cd024268-3e4e-4448-a314-a1c77482b272 took: 59.384778ms
Feb 14 00:55:48.025: INFO: Terminating ReplicationController wrapped-volume-race-cd024268-3e4e-4448-a314-a1c77482b272 pods took: 400.952357ms
STEP: Creating RC which spawns configmap-volume pods
Feb 14 00:56:03.565: INFO: Pod name wrapped-volume-race-76db9ec6-dc72-460e-97dc-fcaa356fdfa6: Found 0 pods out of 5
Feb 14 00:56:08.582: INFO: Pod name wrapped-volume-race-76db9ec6-dc72-460e-97dc-fcaa356fdfa6: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-76db9ec6-dc72-460e-97dc-fcaa356fdfa6 in namespace emptydir-wrapper-1182, will wait for the garbage collector to delete the pods
Feb 14 00:56:40.742: INFO: Deleting ReplicationController wrapped-volume-race-76db9ec6-dc72-460e-97dc-fcaa356fdfa6 took: 16.428892ms
Feb 14 00:56:41.144: INFO: Terminating ReplicationController wrapped-volume-race-76db9ec6-dc72-460e-97dc-fcaa356fdfa6 pods took: 401.495039ms
STEP: Creating RC which spawns configmap-volume pods
Feb 14 00:56:53.417: INFO: Pod name wrapped-volume-race-d279e4cd-f94d-45f7-8a65-45a7d84af0a4: Found 0 pods out of 5
Feb 14 00:56:59.310: INFO: Pod name wrapped-volume-race-d279e4cd-f94d-45f7-8a65-45a7d84af0a4: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-d279e4cd-f94d-45f7-8a65-45a7d84af0a4 in namespace emptydir-wrapper-1182, will wait for the garbage collector to delete the pods
Feb 14 00:57:31.460: INFO: Deleting ReplicationController wrapped-volume-race-d279e4cd-f94d-45f7-8a65-45a7d84af0a4 took: 53.339551ms
Feb 14 00:57:31.861: INFO: Terminating ReplicationController wrapped-volume-race-d279e4cd-f94d-45f7-8a65-45a7d84af0a4 pods took: 400.993234ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:57:54.965: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1182" for this suite.

• [SLOW TEST:161.900 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":280,"completed":156,"skipped":2391,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:57:54.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap that has name configmap-test-emptyKey-3fa10f0d-14c5-4d2e-a268-b63a830b5a6b
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:57:55.115: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-423" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":280,"completed":157,"skipped":2412,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:57:55.141: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:58:51.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-4928" for this suite.

• [SLOW TEST:56.509 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":280,"completed":158,"skipped":2419,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:58:51.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 00:59:27.226: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-9958" for this suite.
STEP: Destroying namespace "nsdeletetest-7007" for this suite.
Feb 14 00:59:27.245: INFO: Namespace nsdeletetest-7007 was already deleted
STEP: Destroying namespace "nsdeletetest-7721" for this suite.

• [SLOW TEST:35.596 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":280,"completed":159,"skipped":2446,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 00:59:27.250: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-5781
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 14 00:59:27.301: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 14 00:59:27.414: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:59:29.982: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:59:31.428: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:59:34.043: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:59:35.425: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 00:59:37.422: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 00:59:39.424: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 00:59:41.421: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 00:59:43.420: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 00:59:45.422: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 00:59:47.422: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 00:59:49.422: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 14 00:59:49.493: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 14 00:59:51.501: INFO: The status of Pod netserver-1 is Running (Ready = false)
Feb 14 00:59:53.510: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 14 01:00:01.672: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5781 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 01:00:01.672: INFO: >>> kubeConfig: /root/.kube/config
I0214 01:00:01.772873       9 log.go:172] (0xc002b85970) (0xc000342640) Create stream
I0214 01:00:01.773035       9 log.go:172] (0xc002b85970) (0xc000342640) Stream added, broadcasting: 1
I0214 01:00:01.781291       9 log.go:172] (0xc002b85970) Reply frame received for 1
I0214 01:00:01.781501       9 log.go:172] (0xc002b85970) (0xc001b039a0) Create stream
I0214 01:00:01.781525       9 log.go:172] (0xc002b85970) (0xc001b039a0) Stream added, broadcasting: 3
I0214 01:00:01.784248       9 log.go:172] (0xc002b85970) Reply frame received for 3
I0214 01:00:01.784300       9 log.go:172] (0xc002b85970) (0xc000126dc0) Create stream
I0214 01:00:01.784318       9 log.go:172] (0xc002b85970) (0xc000126dc0) Stream added, broadcasting: 5
I0214 01:00:01.788530       9 log.go:172] (0xc002b85970) Reply frame received for 5
I0214 01:00:01.959616       9 log.go:172] (0xc002b85970) Data frame received for 3
I0214 01:00:01.960172       9 log.go:172] (0xc001b039a0) (3) Data frame handling
I0214 01:00:01.960223       9 log.go:172] (0xc001b039a0) (3) Data frame sent
I0214 01:00:02.093379       9 log.go:172] (0xc002b85970) Data frame received for 1
I0214 01:00:02.093601       9 log.go:172] (0xc002b85970) (0xc001b039a0) Stream removed, broadcasting: 3
I0214 01:00:02.093715       9 log.go:172] (0xc000342640) (1) Data frame handling
I0214 01:00:02.093762       9 log.go:172] (0xc000342640) (1) Data frame sent
I0214 01:00:02.093848       9 log.go:172] (0xc002b85970) (0xc000126dc0) Stream removed, broadcasting: 5
I0214 01:00:02.093956       9 log.go:172] (0xc002b85970) (0xc000342640) Stream removed, broadcasting: 1
I0214 01:00:02.093992       9 log.go:172] (0xc002b85970) Go away received
I0214 01:00:02.095238       9 log.go:172] (0xc002b85970) (0xc000342640) Stream removed, broadcasting: 1
I0214 01:00:02.095310       9 log.go:172] (0xc002b85970) (0xc001b039a0) Stream removed, broadcasting: 3
I0214 01:00:02.095327       9 log.go:172] (0xc002b85970) (0xc000126dc0) Stream removed, broadcasting: 5
Feb 14 01:00:02.095: INFO: Found all expected endpoints: [netserver-0]
Feb 14 01:00:02.106: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-5781 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 01:00:02.106: INFO: >>> kubeConfig: /root/.kube/config
I0214 01:00:02.161134       9 log.go:172] (0xc002b85ef0) (0xc000b73680) Create stream
I0214 01:00:02.161444       9 log.go:172] (0xc002b85ef0) (0xc000b73680) Stream added, broadcasting: 1
I0214 01:00:02.174511       9 log.go:172] (0xc002b85ef0) Reply frame received for 1
I0214 01:00:02.174687       9 log.go:172] (0xc002b85ef0) (0xc002432500) Create stream
I0214 01:00:02.174711       9 log.go:172] (0xc002b85ef0) (0xc002432500) Stream added, broadcasting: 3
I0214 01:00:02.176802       9 log.go:172] (0xc002b85ef0) Reply frame received for 3
I0214 01:00:02.177053       9 log.go:172] (0xc002b85ef0) (0xc0001b5cc0) Create stream
I0214 01:00:02.177078       9 log.go:172] (0xc002b85ef0) (0xc0001b5cc0) Stream added, broadcasting: 5
I0214 01:00:02.180678       9 log.go:172] (0xc002b85ef0) Reply frame received for 5
I0214 01:00:02.293400       9 log.go:172] (0xc002b85ef0) Data frame received for 3
I0214 01:00:02.294003       9 log.go:172] (0xc002432500) (3) Data frame handling
I0214 01:00:02.294328       9 log.go:172] (0xc002432500) (3) Data frame sent
I0214 01:00:02.384489       9 log.go:172] (0xc002b85ef0) (0xc0001b5cc0) Stream removed, broadcasting: 5
I0214 01:00:02.384708       9 log.go:172] (0xc002b85ef0) Data frame received for 1
I0214 01:00:02.384748       9 log.go:172] (0xc002b85ef0) (0xc002432500) Stream removed, broadcasting: 3
I0214 01:00:02.384797       9 log.go:172] (0xc000b73680) (1) Data frame handling
I0214 01:00:02.384834       9 log.go:172] (0xc000b73680) (1) Data frame sent
I0214 01:00:02.384850       9 log.go:172] (0xc002b85ef0) (0xc000b73680) Stream removed, broadcasting: 1
I0214 01:00:02.384869       9 log.go:172] (0xc002b85ef0) Go away received
I0214 01:00:02.385177       9 log.go:172] (0xc002b85ef0) (0xc000b73680) Stream removed, broadcasting: 1
I0214 01:00:02.385193       9 log.go:172] (0xc002b85ef0) (0xc002432500) Stream removed, broadcasting: 3
I0214 01:00:02.385216       9 log.go:172] (0xc002b85ef0) (0xc0001b5cc0) Stream removed, broadcasting: 5
Feb 14 01:00:02.385: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:00:02.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5781" for this suite.

• [SLOW TEST:35.149 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":160,"skipped":2473,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:00:02.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Feb 14 01:00:21.395: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 14 01:00:21.439: INFO: Pod pod-with-prestop-http-hook still exists
Feb 14 01:00:23.440: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 14 01:00:23.465: INFO: Pod pod-with-prestop-http-hook still exists
Feb 14 01:00:25.440: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 14 01:00:25.447: INFO: Pod pod-with-prestop-http-hook still exists
Feb 14 01:00:27.440: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Feb 14 01:00:27.447: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:00:27.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-4843" for this suite.

• [SLOW TEST:25.103 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":280,"completed":161,"skipped":2482,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:00:27.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 14 01:00:27.620: INFO: Waiting up to 5m0s for pod "downwardapi-volume-e31e8117-a460-4f1a-873c-16ddb98c348e" in namespace "downward-api-6338" to be "success or failure"
Feb 14 01:00:27.637: INFO: Pod "downwardapi-volume-e31e8117-a460-4f1a-873c-16ddb98c348e": Phase="Pending", Reason="", readiness=false. Elapsed: 16.388492ms
Feb 14 01:00:29.644: INFO: Pod "downwardapi-volume-e31e8117-a460-4f1a-873c-16ddb98c348e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0238442s
Feb 14 01:00:31.651: INFO: Pod "downwardapi-volume-e31e8117-a460-4f1a-873c-16ddb98c348e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030420535s
Feb 14 01:00:33.689: INFO: Pod "downwardapi-volume-e31e8117-a460-4f1a-873c-16ddb98c348e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.068174575s
Feb 14 01:00:35.719: INFO: Pod "downwardapi-volume-e31e8117-a460-4f1a-873c-16ddb98c348e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.098919755s
Feb 14 01:00:37.728: INFO: Pod "downwardapi-volume-e31e8117-a460-4f1a-873c-16ddb98c348e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.107662137s
STEP: Saw pod success
Feb 14 01:00:37.728: INFO: Pod "downwardapi-volume-e31e8117-a460-4f1a-873c-16ddb98c348e" satisfied condition "success or failure"
Feb 14 01:00:37.733: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-e31e8117-a460-4f1a-873c-16ddb98c348e container client-container: 
STEP: delete the pod
Feb 14 01:00:37.831: INFO: Waiting for pod downwardapi-volume-e31e8117-a460-4f1a-873c-16ddb98c348e to disappear
Feb 14 01:00:37.843: INFO: Pod downwardapi-volume-e31e8117-a460-4f1a-873c-16ddb98c348e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:00:37.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6338" for this suite.

• [SLOW TEST:10.410 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":162,"skipped":2489,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:00:37.918: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test env composition
Feb 14 01:00:38.006: INFO: Waiting up to 5m0s for pod "var-expansion-b2f2f867-c534-4cb1-882a-69742dacfb4d" in namespace "var-expansion-285" to be "success or failure"
Feb 14 01:00:38.073: INFO: Pod "var-expansion-b2f2f867-c534-4cb1-882a-69742dacfb4d": Phase="Pending", Reason="", readiness=false. Elapsed: 66.511209ms
Feb 14 01:00:40.080: INFO: Pod "var-expansion-b2f2f867-c534-4cb1-882a-69742dacfb4d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07347316s
Feb 14 01:00:42.100: INFO: Pod "var-expansion-b2f2f867-c534-4cb1-882a-69742dacfb4d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093502402s
Feb 14 01:00:44.106: INFO: Pod "var-expansion-b2f2f867-c534-4cb1-882a-69742dacfb4d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.09971031s
Feb 14 01:00:46.115: INFO: Pod "var-expansion-b2f2f867-c534-4cb1-882a-69742dacfb4d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.108272308s
Feb 14 01:00:48.122: INFO: Pod "var-expansion-b2f2f867-c534-4cb1-882a-69742dacfb4d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.114916685s
STEP: Saw pod success
Feb 14 01:00:48.122: INFO: Pod "var-expansion-b2f2f867-c534-4cb1-882a-69742dacfb4d" satisfied condition "success or failure"
Feb 14 01:00:48.133: INFO: Trying to get logs from node jerma-node pod var-expansion-b2f2f867-c534-4cb1-882a-69742dacfb4d container dapi-container: 
STEP: delete the pod
Feb 14 01:00:48.267: INFO: Waiting for pod var-expansion-b2f2f867-c534-4cb1-882a-69742dacfb4d to disappear
Feb 14 01:00:48.275: INFO: Pod var-expansion-b2f2f867-c534-4cb1-882a-69742dacfb4d no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:00:48.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-285" for this suite.

• [SLOW TEST:10.369 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":280,"completed":163,"skipped":2520,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:00:48.289: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Feb 14 01:00:48.476: INFO: >>> kubeConfig: /root/.kube/config
Feb 14 01:00:52.174: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:01:06.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8262" for this suite.

• [SLOW TEST:17.777 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":280,"completed":164,"skipped":2528,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:01:06.067: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Feb 14 01:01:06.180: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:01:19.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9965" for this suite.

• [SLOW TEST:13.368 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":280,"completed":165,"skipped":2571,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:01:19.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Feb 14 01:01:19.577: INFO: Waiting up to 5m0s for pod "pod-516e78f3-4a4c-4465-aa14-0807e5f1958f" in namespace "emptydir-348" to be "success or failure"
Feb 14 01:01:19.613: INFO: Pod "pod-516e78f3-4a4c-4465-aa14-0807e5f1958f": Phase="Pending", Reason="", readiness=false. Elapsed: 35.322472ms
Feb 14 01:01:21.622: INFO: Pod "pod-516e78f3-4a4c-4465-aa14-0807e5f1958f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044659483s
Feb 14 01:01:23.688: INFO: Pod "pod-516e78f3-4a4c-4465-aa14-0807e5f1958f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.110408627s
Feb 14 01:01:25.696: INFO: Pod "pod-516e78f3-4a4c-4465-aa14-0807e5f1958f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.118579698s
Feb 14 01:01:27.707: INFO: Pod "pod-516e78f3-4a4c-4465-aa14-0807e5f1958f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.12911177s
STEP: Saw pod success
Feb 14 01:01:27.707: INFO: Pod "pod-516e78f3-4a4c-4465-aa14-0807e5f1958f" satisfied condition "success or failure"
Feb 14 01:01:27.714: INFO: Trying to get logs from node jerma-node pod pod-516e78f3-4a4c-4465-aa14-0807e5f1958f container test-container: 
STEP: delete the pod
Feb 14 01:01:27.766: INFO: Waiting for pod pod-516e78f3-4a4c-4465-aa14-0807e5f1958f to disappear
Feb 14 01:01:27.771: INFO: Pod pod-516e78f3-4a4c-4465-aa14-0807e5f1958f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:01:27.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-348" for this suite.

• [SLOW TEST:8.350 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":166,"skipped":2579,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:01:27.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1598
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 14 01:01:27.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5221'
Feb 14 01:01:31.934: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 14 01:01:31.934: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1604
Feb 14 01:01:33.972: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-5221'
Feb 14 01:01:34.170: INFO: stderr: ""
Feb 14 01:01:34.171: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:01:34.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5221" for this suite.

• [SLOW TEST:6.393 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1592
    should create an rc or deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":280,"completed":167,"skipped":2605,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:01:34.183: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 14 01:01:34.798: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 14 01:01:36.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238894, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238894, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238894, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238894, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:01:38.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238894, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238894, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238894, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238894, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:01:40.888: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238894, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238894, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238894, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238894, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:01:42.848: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238894, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238894, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238894, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717238894, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 14 01:01:45.883: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply with the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:01:46.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4553" for this suite.
STEP: Destroying namespace "webhook-4553-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.681 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":280,"completed":168,"skipped":2607,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:01:46.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating Agnhost RC
Feb 14 01:01:47.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4001'
Feb 14 01:01:47.477: INFO: stderr: ""
Feb 14 01:01:47.477: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 14 01:01:48.489: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:01:48.489: INFO: Found 0 / 1
Feb 14 01:01:49.488: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:01:49.488: INFO: Found 0 / 1
Feb 14 01:01:50.621: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:01:50.621: INFO: Found 0 / 1
Feb 14 01:01:51.486: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:01:51.487: INFO: Found 0 / 1
Feb 14 01:01:52.488: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:01:52.488: INFO: Found 0 / 1
Feb 14 01:01:53.486: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:01:53.487: INFO: Found 0 / 1
Feb 14 01:01:54.491: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:01:54.491: INFO: Found 0 / 1
Feb 14 01:01:55.494: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:01:55.494: INFO: Found 0 / 1
Feb 14 01:01:56.490: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:01:56.490: INFO: Found 0 / 1
Feb 14 01:01:57.489: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:01:57.489: INFO: Found 1 / 1
Feb 14 01:01:57.489: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Feb 14 01:01:57.496: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:01:57.496: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 14 01:01:57.496: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-jgg2k --namespace=kubectl-4001 -p {"metadata":{"annotations":{"x":"y"}}}'
Feb 14 01:01:57.692: INFO: stderr: ""
Feb 14 01:01:57.692: INFO: stdout: "pod/agnhost-master-jgg2k patched\n"
STEP: checking annotations
Feb 14 01:01:57.704: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:01:57.704: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:01:57.704: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4001" for this suite.

• [SLOW TEST:10.854 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1541
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":280,"completed":169,"skipped":2630,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:01:57.723: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name s-test-opt-del-3d1209c2-413d-4dc6-a684-e148931dd70a
STEP: Creating secret with name s-test-opt-upd-81079801-264c-422d-8437-393a910560c5
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-3d1209c2-413d-4dc6-a684-e148931dd70a
STEP: Updating secret s-test-opt-upd-81079801-264c-422d-8437-393a910560c5
STEP: Creating secret with name s-test-opt-create-5508b569-f447-4419-8658-bc78f880f8cd
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:03:40.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7523" for this suite.

• [SLOW TEST:103.066 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":170,"skipped":2677,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:03:40.790: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 14 01:03:40.878: INFO: Waiting up to 5m0s for pod "pod-60317014-d5ba-4d34-baa1-dcd7ab140122" in namespace "emptydir-6591" to be "success or failure"
Feb 14 01:03:40.905: INFO: Pod "pod-60317014-d5ba-4d34-baa1-dcd7ab140122": Phase="Pending", Reason="", readiness=false. Elapsed: 26.445074ms
Feb 14 01:03:42.912: INFO: Pod "pod-60317014-d5ba-4d34-baa1-dcd7ab140122": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033305444s
Feb 14 01:03:44.920: INFO: Pod "pod-60317014-d5ba-4d34-baa1-dcd7ab140122": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041830652s
Feb 14 01:03:47.083: INFO: Pod "pod-60317014-d5ba-4d34-baa1-dcd7ab140122": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204140508s
Feb 14 01:03:49.416: INFO: Pod "pod-60317014-d5ba-4d34-baa1-dcd7ab140122": Phase="Pending", Reason="", readiness=false. Elapsed: 8.537003529s
Feb 14 01:03:52.255: INFO: Pod "pod-60317014-d5ba-4d34-baa1-dcd7ab140122": Phase="Pending", Reason="", readiness=false. Elapsed: 11.376342683s
Feb 14 01:03:54.262: INFO: Pod "pod-60317014-d5ba-4d34-baa1-dcd7ab140122": Phase="Pending", Reason="", readiness=false. Elapsed: 13.383661965s
Feb 14 01:03:56.315: INFO: Pod "pod-60317014-d5ba-4d34-baa1-dcd7ab140122": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.436114657s
STEP: Saw pod success
Feb 14 01:03:56.315: INFO: Pod "pod-60317014-d5ba-4d34-baa1-dcd7ab140122" satisfied condition "success or failure"
Feb 14 01:03:56.322: INFO: Trying to get logs from node jerma-node pod pod-60317014-d5ba-4d34-baa1-dcd7ab140122 container test-container: 
STEP: delete the pod
Feb 14 01:03:56.390: INFO: Waiting for pod pod-60317014-d5ba-4d34-baa1-dcd7ab140122 to disappear
Feb 14 01:03:56.452: INFO: Pod pod-60317014-d5ba-4d34-baa1-dcd7ab140122 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:03:56.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6591" for this suite.

• [SLOW TEST:15.712 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":171,"skipped":2707,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:03:56.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-7xg9g in namespace proxy-2663
I0214 01:03:56.795851       9 runners.go:189] Created replication controller with name: proxy-service-7xg9g, namespace: proxy-2663, replica count: 1
I0214 01:03:57.847935       9 runners.go:189] proxy-service-7xg9g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 01:03:58.849231       9 runners.go:189] proxy-service-7xg9g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 01:03:59.850790       9 runners.go:189] proxy-service-7xg9g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 01:04:00.851750       9 runners.go:189] proxy-service-7xg9g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 01:04:01.852749       9 runners.go:189] proxy-service-7xg9g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 01:04:02.854083       9 runners.go:189] proxy-service-7xg9g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 01:04:03.855310       9 runners.go:189] proxy-service-7xg9g Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 01:04:04.856568       9 runners.go:189] proxy-service-7xg9g Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0214 01:04:05.858230       9 runners.go:189] proxy-service-7xg9g Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0214 01:04:06.859025       9 runners.go:189] proxy-service-7xg9g Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0214 01:04:07.859912       9 runners.go:189] proxy-service-7xg9g Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0214 01:04:08.860875       9 runners.go:189] proxy-service-7xg9g Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0214 01:04:09.861806       9 runners.go:189] proxy-service-7xg9g Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Feb 14 01:04:09.870: INFO: setup took 13.223863048s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Feb 14 01:04:09.901: INFO: (0) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 30.132082ms)
Feb 14 01:04:09.902: INFO: (0) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9/proxy/: test (200; 31.203855ms)
Feb 14 01:04:09.904: INFO: (0) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 32.441655ms)
Feb 14 01:04:09.904: INFO: (0) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:1080/proxy/: ... (200; 31.817044ms)
Feb 14 01:04:09.907: INFO: (0) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: test<... (200; 39.858416ms)
Feb 14 01:04:09.912: INFO: (0) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname2/proxy/: bar (200; 40.769431ms)
Feb 14 01:04:09.912: INFO: (0) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 40.72635ms)
Feb 14 01:04:09.912: INFO: (0) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname1/proxy/: foo (200; 41.624283ms)
Feb 14 01:04:09.913: INFO: (0) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname1/proxy/: foo (200; 40.48555ms)
Feb 14 01:04:09.913: INFO: (0) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 40.675982ms)
Feb 14 01:04:09.916: INFO: (0) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname1/proxy/: tls baz (200; 44.889427ms)
Feb 14 01:04:09.916: INFO: (0) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:460/proxy/: tls baz (200; 43.98926ms)
Feb 14 01:04:09.920: INFO: (0) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:462/proxy/: tls qux (200; 48.606393ms)
Feb 14 01:04:09.920: INFO: (0) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname2/proxy/: tls qux (200; 49.249805ms)
Feb 14 01:04:09.936: INFO: (1) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 14.231579ms)
Feb 14 01:04:09.937: INFO: (1) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: test<... (200; 20.986072ms)
Feb 14 01:04:09.943: INFO: (1) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 21.053331ms)
Feb 14 01:04:09.943: INFO: (1) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:1080/proxy/: ... (200; 21.372463ms)
Feb 14 01:04:09.944: INFO: (1) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname1/proxy/: tls baz (200; 21.690524ms)
Feb 14 01:04:09.944: INFO: (1) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 21.644278ms)
Feb 14 01:04:09.944: INFO: (1) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:460/proxy/: tls baz (200; 23.811338ms)
Feb 14 01:04:09.944: INFO: (1) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname2/proxy/: bar (200; 23.142065ms)
Feb 14 01:04:09.944: INFO: (1) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname1/proxy/: foo (200; 23.732787ms)
Feb 14 01:04:09.945: INFO: (1) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 24.16021ms)
Feb 14 01:04:09.946: INFO: (1) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9/proxy/: test (200; 24.583684ms)
Feb 14 01:04:09.946: INFO: (1) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname1/proxy/: foo (200; 24.278786ms)
Feb 14 01:04:09.946: INFO: (1) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:462/proxy/: tls qux (200; 24.585647ms)
Feb 14 01:04:09.946: INFO: (1) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname2/proxy/: tls qux (200; 24.524465ms)
Feb 14 01:04:09.955: INFO: (2) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 8.958575ms)
Feb 14 01:04:09.957: INFO: (2) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 10.075514ms)
Feb 14 01:04:09.957: INFO: (2) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9/proxy/: test (200; 10.57151ms)
Feb 14 01:04:09.959: INFO: (2) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:1080/proxy/: ... (200; 12.361634ms)
Feb 14 01:04:09.959: INFO: (2) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 12.191337ms)
Feb 14 01:04:09.959: INFO: (2) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:462/proxy/: tls qux (200; 12.177183ms)
Feb 14 01:04:09.960: INFO: (2) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 13.244414ms)
Feb 14 01:04:09.960: INFO: (2) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:1080/proxy/: test<... (200; 13.322588ms)
Feb 14 01:04:09.960: INFO: (2) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: test<... (200; 6.354888ms)
Feb 14 01:04:09.985: INFO: (3) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 20.50367ms)
Feb 14 01:04:09.988: INFO: (3) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname1/proxy/: foo (200; 22.928396ms)
Feb 14 01:04:09.989: INFO: (3) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname2/proxy/: bar (200; 23.797074ms)
Feb 14 01:04:09.989: INFO: (3) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 23.855235ms)
Feb 14 01:04:09.989: INFO: (3) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:1080/proxy/: ... (200; 24.4197ms)
Feb 14 01:04:09.990: INFO: (3) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9/proxy/: test (200; 24.924149ms)
Feb 14 01:04:09.990: INFO: (3) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: ... (200; 8.576101ms)
Feb 14 01:04:10.005: INFO: (4) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 12.849833ms)
Feb 14 01:04:10.006: INFO: (4) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:460/proxy/: tls baz (200; 13.154949ms)
Feb 14 01:04:10.006: INFO: (4) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 14.277119ms)
Feb 14 01:04:10.006: INFO: (4) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:462/proxy/: tls qux (200; 14.064754ms)
Feb 14 01:04:10.006: INFO: (4) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 14.000744ms)
Feb 14 01:04:10.007: INFO: (4) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 14.721144ms)
Feb 14 01:04:10.007: INFO: (4) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9/proxy/: test (200; 14.687112ms)
Feb 14 01:04:10.007: INFO: (4) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:1080/proxy/: test<... (200; 15.108734ms)
Feb 14 01:04:10.007: INFO: (4) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: test (200; 9.130428ms)
Feb 14 01:04:10.022: INFO: (5) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname2/proxy/: bar (200; 10.045342ms)
Feb 14 01:04:10.024: INFO: (5) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 12.071053ms)
Feb 14 01:04:10.027: INFO: (5) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname1/proxy/: foo (200; 14.605083ms)
Feb 14 01:04:10.027: INFO: (5) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:462/proxy/: tls qux (200; 14.638116ms)
Feb 14 01:04:10.027: INFO: (5) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname2/proxy/: bar (200; 14.885565ms)
Feb 14 01:04:10.027: INFO: (5) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname1/proxy/: foo (200; 15.247406ms)
Feb 14 01:04:10.027: INFO: (5) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:1080/proxy/: test<... (200; 15.135704ms)
Feb 14 01:04:10.027: INFO: (5) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: ... (200; 15.428519ms)
Feb 14 01:04:10.028: INFO: (5) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 16.754326ms)
Feb 14 01:04:10.035: INFO: (6) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 6.260203ms)
Feb 14 01:04:10.035: INFO: (6) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:1080/proxy/: test<... (200; 6.368449ms)
Feb 14 01:04:10.036: INFO: (6) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:460/proxy/: tls baz (200; 7.012673ms)
Feb 14 01:04:10.037: INFO: (6) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 8.348068ms)
Feb 14 01:04:10.037: INFO: (6) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 8.503152ms)
Feb 14 01:04:10.038: INFO: (6) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:462/proxy/: tls qux (200; 8.892918ms)
Feb 14 01:04:10.050: INFO: (6) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9/proxy/: test (200; 20.895622ms)
Feb 14 01:04:10.053: INFO: (6) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 24.016572ms)
Feb 14 01:04:10.054: INFO: (6) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:1080/proxy/: ... (200; 24.764958ms)
Feb 14 01:04:10.054: INFO: (6) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname2/proxy/: bar (200; 24.775181ms)
Feb 14 01:04:10.054: INFO: (6) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname1/proxy/: foo (200; 25.172761ms)
Feb 14 01:04:10.054: INFO: (6) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: test (200; 7.662482ms)
Feb 14 01:04:10.065: INFO: (7) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:460/proxy/: tls baz (200; 8.24189ms)
Feb 14 01:04:10.065: INFO: (7) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:1080/proxy/: test<... (200; 8.317102ms)
Feb 14 01:04:10.065: INFO: (7) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: ... (200; 11.546456ms)
Feb 14 01:04:10.069: INFO: (7) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 11.56801ms)
Feb 14 01:04:10.069: INFO: (7) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 11.609312ms)
Feb 14 01:04:10.069: INFO: (7) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:462/proxy/: tls qux (200; 11.771095ms)
Feb 14 01:04:10.069: INFO: (7) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 11.586644ms)
Feb 14 01:04:10.071: INFO: (7) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname2/proxy/: bar (200; 14.001401ms)
Feb 14 01:04:10.072: INFO: (7) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname2/proxy/: tls qux (200; 14.768923ms)
Feb 14 01:04:10.072: INFO: (7) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname1/proxy/: foo (200; 14.775958ms)
Feb 14 01:04:10.072: INFO: (7) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname2/proxy/: bar (200; 15.136801ms)
Feb 14 01:04:10.072: INFO: (7) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname1/proxy/: foo (200; 15.03808ms)
Feb 14 01:04:10.074: INFO: (7) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname1/proxy/: tls baz (200; 16.692167ms)
Feb 14 01:04:10.089: INFO: (8) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 14.864047ms)
Feb 14 01:04:10.089: INFO: (8) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:1080/proxy/: ... (200; 15.019763ms)
Feb 14 01:04:10.089: INFO: (8) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:460/proxy/: tls baz (200; 15.077295ms)
Feb 14 01:04:10.090: INFO: (8) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:1080/proxy/: test<... (200; 15.551785ms)
Feb 14 01:04:10.090: INFO: (8) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9/proxy/: test (200; 15.503467ms)
Feb 14 01:04:10.090: INFO: (8) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 15.593866ms)
Feb 14 01:04:10.090: INFO: (8) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 16.023956ms)
Feb 14 01:04:10.090: INFO: (8) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: test (200; 6.912979ms)
Feb 14 01:04:10.101: INFO: (9) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: test<... (200; 10.150141ms)
Feb 14 01:04:10.105: INFO: (9) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 10.447095ms)
Feb 14 01:04:10.105: INFO: (9) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname1/proxy/: foo (200; 11.024749ms)
Feb 14 01:04:10.105: INFO: (9) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:1080/proxy/: ... (200; 10.578771ms)
Feb 14 01:04:10.106: INFO: (9) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname2/proxy/: bar (200; 10.9516ms)
Feb 14 01:04:10.106: INFO: (9) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname1/proxy/: foo (200; 11.73619ms)
Feb 14 01:04:10.106: INFO: (9) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname1/proxy/: tls baz (200; 12.190474ms)
Feb 14 01:04:10.106: INFO: (9) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname2/proxy/: tls qux (200; 12.334839ms)
Feb 14 01:04:10.107: INFO: (9) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname2/proxy/: bar (200; 12.455167ms)
Feb 14 01:04:10.113: INFO: (10) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 5.932217ms)
Feb 14 01:04:10.113: INFO: (10) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:462/proxy/: tls qux (200; 6.053759ms)
Feb 14 01:04:10.113: INFO: (10) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: test (200; 7.105178ms)
Feb 14 01:04:10.115: INFO: (10) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:1080/proxy/: test<... (200; 7.188256ms)
Feb 14 01:04:10.115: INFO: (10) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:1080/proxy/: ... (200; 7.759342ms)
Feb 14 01:04:10.115: INFO: (10) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 7.75054ms)
Feb 14 01:04:10.117: INFO: (10) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 9.937461ms)
Feb 14 01:04:10.117: INFO: (10) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname1/proxy/: foo (200; 10.032767ms)
Feb 14 01:04:10.118: INFO: (10) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname2/proxy/: bar (200; 10.89443ms)
Feb 14 01:04:10.118: INFO: (10) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 11.090298ms)
Feb 14 01:04:10.119: INFO: (10) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname2/proxy/: tls qux (200; 10.985204ms)
Feb 14 01:04:10.119: INFO: (10) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname1/proxy/: foo (200; 11.116791ms)
Feb 14 01:04:10.119: INFO: (10) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:460/proxy/: tls baz (200; 11.167952ms)
Feb 14 01:04:10.119: INFO: (10) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname1/proxy/: tls baz (200; 11.357542ms)
Feb 14 01:04:10.119: INFO: (10) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname2/proxy/: bar (200; 11.897357ms)
Feb 14 01:04:10.131: INFO: (11) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname2/proxy/: tls qux (200; 11.458116ms)
Feb 14 01:04:10.133: INFO: (11) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: test<... (200; 13.91217ms)
Feb 14 01:04:10.134: INFO: (11) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname1/proxy/: tls baz (200; 14.165538ms)
Feb 14 01:04:10.134: INFO: (11) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname2/proxy/: bar (200; 13.905776ms)
Feb 14 01:04:10.134: INFO: (11) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:460/proxy/: tls baz (200; 14.149327ms)
Feb 14 01:04:10.134: INFO: (11) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 14.299946ms)
Feb 14 01:04:10.134: INFO: (11) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9/proxy/: test (200; 14.434647ms)
Feb 14 01:04:10.134: INFO: (11) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:462/proxy/: tls qux (200; 14.219053ms)
Feb 14 01:04:10.134: INFO: (11) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 14.589885ms)
Feb 14 01:04:10.134: INFO: (11) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 14.843782ms)
Feb 14 01:04:10.135: INFO: (11) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 15.479702ms)
Feb 14 01:04:10.135: INFO: (11) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:1080/proxy/: ... (200; 15.13741ms)
Feb 14 01:04:10.135: INFO: (11) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname1/proxy/: foo (200; 15.651325ms)
Feb 14 01:04:10.140: INFO: (12) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 5.090301ms)
Feb 14 01:04:10.146: INFO: (12) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname1/proxy/: foo (200; 10.113329ms)
Feb 14 01:04:10.146: INFO: (12) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname1/proxy/: foo (200; 10.590798ms)
Feb 14 01:04:10.148: INFO: (12) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname2/proxy/: bar (200; 12.810603ms)
Feb 14 01:04:10.148: INFO: (12) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname1/proxy/: tls baz (200; 12.761606ms)
Feb 14 01:04:10.148: INFO: (12) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname2/proxy/: bar (200; 12.824558ms)
Feb 14 01:04:10.149: INFO: (12) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname2/proxy/: tls qux (200; 13.383053ms)
Feb 14 01:04:10.150: INFO: (12) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 14.525913ms)
Feb 14 01:04:10.150: INFO: (12) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:1080/proxy/: test<... (200; 14.457634ms)
Feb 14 01:04:10.150: INFO: (12) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:1080/proxy/: ... (200; 14.68494ms)
Feb 14 01:04:10.150: INFO: (12) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 14.717157ms)
Feb 14 01:04:10.151: INFO: (12) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:462/proxy/: tls qux (200; 14.862117ms)
Feb 14 01:04:10.151: INFO: (12) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9/proxy/: test (200; 14.887767ms)
Feb 14 01:04:10.151: INFO: (12) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:460/proxy/: tls baz (200; 15.206545ms)
Feb 14 01:04:10.151: INFO: (12) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 15.543ms)
Feb 14 01:04:10.152: INFO: (12) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: ... (200; 4.793494ms)
Feb 14 01:04:10.170: INFO: (13) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:1080/proxy/: test<... (200; 17.936517ms)
Feb 14 01:04:10.172: INFO: (13) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9/proxy/: test (200; 20.04248ms)
Feb 14 01:04:10.173: INFO: (13) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname1/proxy/: foo (200; 20.392234ms)
Feb 14 01:04:10.173: INFO: (13) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: test (200; 7.061523ms)
Feb 14 01:04:10.186: INFO: (14) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: ... (200; 9.163084ms)
Feb 14 01:04:10.187: INFO: (14) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 9.748641ms)
Feb 14 01:04:10.187: INFO: (14) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:1080/proxy/: test<... (200; 9.564787ms)
Feb 14 01:04:10.187: INFO: (14) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname1/proxy/: foo (200; 10.195851ms)
Feb 14 01:04:10.188: INFO: (14) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:462/proxy/: tls qux (200; 10.601383ms)
Feb 14 01:04:10.190: INFO: (14) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname2/proxy/: bar (200; 12.399811ms)
Feb 14 01:04:10.190: INFO: (14) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname2/proxy/: bar (200; 12.535387ms)
Feb 14 01:04:10.190: INFO: (14) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname2/proxy/: tls qux (200; 13.358572ms)
Feb 14 01:04:10.191: INFO: (14) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname1/proxy/: tls baz (200; 12.921615ms)
Feb 14 01:04:10.191: INFO: (14) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname1/proxy/: foo (200; 13.486423ms)
Feb 14 01:04:10.195: INFO: (15) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:1080/proxy/: ... (200; 4.075075ms)
Feb 14 01:04:10.197: INFO: (15) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: test<... (200; 6.098277ms)
Feb 14 01:04:10.198: INFO: (15) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:462/proxy/: tls qux (200; 6.256944ms)
Feb 14 01:04:10.198: INFO: (15) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname2/proxy/: tls qux (200; 7.235585ms)
Feb 14 01:04:10.200: INFO: (15) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:460/proxy/: tls baz (200; 8.592209ms)
Feb 14 01:04:10.201: INFO: (15) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 9.15539ms)
Feb 14 01:04:10.201: INFO: (15) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 9.533243ms)
Feb 14 01:04:10.201: INFO: (15) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9/proxy/: test (200; 9.80335ms)
Feb 14 01:04:10.201: INFO: (15) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname1/proxy/: tls baz (200; 9.579816ms)
Feb 14 01:04:10.201: INFO: (15) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname1/proxy/: foo (200; 9.7814ms)
Feb 14 01:04:10.201: INFO: (15) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname2/proxy/: bar (200; 9.688752ms)
Feb 14 01:04:10.201: INFO: (15) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname1/proxy/: foo (200; 10.351382ms)
Feb 14 01:04:10.202: INFO: (15) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname2/proxy/: bar (200; 10.065212ms)
Feb 14 01:04:10.203: INFO: (15) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 11.506313ms)
Feb 14 01:04:10.210: INFO: (16) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 6.705548ms)
Feb 14 01:04:10.210: INFO: (16) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:460/proxy/: tls baz (200; 6.863198ms)
Feb 14 01:04:10.210: INFO: (16) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: test (200; 8.157931ms)
Feb 14 01:04:10.211: INFO: (16) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:1080/proxy/: ... (200; 8.110604ms)
Feb 14 01:04:10.211: INFO: (16) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 8.367523ms)
Feb 14 01:04:10.215: INFO: (16) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname2/proxy/: bar (200; 12.104493ms)
Feb 14 01:04:10.215: INFO: (16) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:1080/proxy/: test<... (200; 11.977286ms)
Feb 14 01:04:10.216: INFO: (16) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname2/proxy/: bar (200; 12.560013ms)
Feb 14 01:04:10.216: INFO: (16) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:462/proxy/: tls qux (200; 12.764065ms)
Feb 14 01:04:10.216: INFO: (16) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname1/proxy/: foo (200; 12.725744ms)
Feb 14 01:04:10.216: INFO: (16) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname1/proxy/: foo (200; 12.726179ms)
Feb 14 01:04:10.216: INFO: (16) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname2/proxy/: tls qux (200; 12.708409ms)
Feb 14 01:04:10.216: INFO: (16) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname1/proxy/: tls baz (200; 12.860905ms)
Feb 14 01:04:10.224: INFO: (17) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9/proxy/: test (200; 7.522042ms)
Feb 14 01:04:10.224: INFO: (17) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname2/proxy/: bar (200; 8.382436ms)
Feb 14 01:04:10.225: INFO: (17) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:1080/proxy/: ... (200; 8.013945ms)
Feb 14 01:04:10.225: INFO: (17) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: test<... (200; 8.410906ms)
Feb 14 01:04:10.226: INFO: (17) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 9.403742ms)
Feb 14 01:04:10.226: INFO: (17) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:460/proxy/: tls baz (200; 10.546762ms)
Feb 14 01:04:10.227: INFO: (17) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname1/proxy/: foo (200; 10.621688ms)
Feb 14 01:04:10.227: INFO: (17) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 9.6429ms)
Feb 14 01:04:10.227: INFO: (17) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname2/proxy/: bar (200; 10.540521ms)
Feb 14 01:04:10.228: INFO: (17) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname1/proxy/: foo (200; 11.118068ms)
Feb 14 01:04:10.228: INFO: (17) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname2/proxy/: tls qux (200; 11.203603ms)
Feb 14 01:04:10.229: INFO: (17) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 12.471008ms)
Feb 14 01:04:10.240: INFO: (18) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 10.168632ms)
Feb 14 01:04:10.242: INFO: (18) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:1080/proxy/: ... (200; 11.861385ms)
Feb 14 01:04:10.242: INFO: (18) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname1/proxy/: tls baz (200; 12.074269ms)
Feb 14 01:04:10.246: INFO: (18) /api/v1/namespaces/proxy-2663/services/proxy-service-7xg9g:portname2/proxy/: bar (200; 15.992816ms)
Feb 14 01:04:10.247: INFO: (18) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:1080/proxy/: test<... (200; 16.512778ms)
Feb 14 01:04:10.247: INFO: (18) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:460/proxy/: tls baz (200; 16.629384ms)
Feb 14 01:04:10.247: INFO: (18) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9/proxy/: test (200; 17.213431ms)
Feb 14 01:04:10.247: INFO: (18) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 17.364289ms)
Feb 14 01:04:10.247: INFO: (18) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 17.112603ms)
Feb 14 01:04:10.247: INFO: (18) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:162/proxy/: bar (200; 17.443841ms)
Feb 14 01:04:10.247: INFO: (18) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname1/proxy/: foo (200; 17.300364ms)
Feb 14 01:04:10.247: INFO: (18) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:462/proxy/: tls qux (200; 17.289443ms)
Feb 14 01:04:10.247: INFO: (18) /api/v1/namespaces/proxy-2663/services/https:proxy-service-7xg9g:tlsportname2/proxy/: tls qux (200; 17.367135ms)
Feb 14 01:04:10.247: INFO: (18) /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname2/proxy/: bar (200; 17.371802ms)
Feb 14 01:04:10.247: INFO: (18) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: test<... (200; 10.344584ms)
Feb 14 01:04:10.259: INFO: (19) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:1080/proxy/: ... (200; 10.585267ms)
Feb 14 01:04:10.260: INFO: (19) /api/v1/namespaces/proxy-2663/pods/http:proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 11.549135ms)
Feb 14 01:04:10.260: INFO: (19) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9/proxy/: test (200; 11.874959ms)
Feb 14 01:04:10.260: INFO: (19) /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:160/proxy/: foo (200; 11.661683ms)
Feb 14 01:04:10.260: INFO: (19) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:460/proxy/: tls baz (200; 11.741063ms)
Feb 14 01:04:10.260: INFO: (19) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:462/proxy/: tls qux (200; 12.006182ms)
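
The attempts above exercise the apiserver's /proxy/ subresource on pods and services directly. The same endpoints are reachable through client-go's ProxyGet helpers; a minimal sketch (recent client-go assumed, names and ports taken from this run's URLs):

    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)
    	ctx := context.Background()

    	// GET /api/v1/namespaces/proxy-2663/pods/proxy-service-7xg9g-g5kr9:160/proxy/
    	body, err := client.CoreV1().Pods("proxy-2663").
    		ProxyGet("", "proxy-service-7xg9g-g5kr9", "160", "", nil).DoRaw(ctx)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("pod proxy said: %q\n", body) // the log shows "foo" for port 160

    	// GET /api/v1/namespaces/proxy-2663/services/http:proxy-service-7xg9g:portname1/proxy/
    	body, err = client.CoreV1().Services("proxy-2663").
    		ProxyGet("http", "proxy-service-7xg9g", "portname1", "", nil).DoRaw(ctx)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("service proxy said: %q\n", body)
    }
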
Feb 14 01:04:10.260: INFO: (19) /api/v1/namespaces/proxy-2663/pods/https:proxy-service-7xg9g-g5kr9:443/proxy/: [response body lost in extraction]
[log lines lost in extraction: the remaining (19) attempts, the proxy-2663 teardown with its SLOW TEST and PASSED records, and the opening of the next test, [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class; only the tail of its [BeforeEach] survives]
STEP: Creating a kubernetes client
INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:04:23.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7717" for this suite.
•{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":280,"completed":173,"skipped":2756,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:04:23.934: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 14 01:04:35.270: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:04:35.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8921" for this suite.

• [SLOW TEST:11.493 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":174,"skipped":2764,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:04:35.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 14 01:04:36.468: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 14 01:04:38.493: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239076, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239076, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239076, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239076, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:04:40.511: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239076, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239076, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239076, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239076, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:04:42.504: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239076, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239076, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239076, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239076, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 14 01:04:45.545: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:04:45.553: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3078-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:04:46.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8094" for this suite.
STEP: Destroying namespace "webhook-8094-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:11.687 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":280,"completed":175,"skipped":2796,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:04:47.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service endpoint-test2 in namespace services-4341
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4341 to expose endpoints map[]
Feb 14 01:04:47.313: INFO: successfully validated that service endpoint-test2 in namespace services-4341 exposes endpoints map[] (13.05886ms elapsed)
STEP: Creating pod pod1 in namespace services-4341
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4341 to expose endpoints map[pod1:[80]]
Feb 14 01:04:51.607: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.275524657s elapsed, will retry)
Feb 14 01:04:57.887: INFO: successfully validated that service endpoint-test2 in namespace services-4341 exposes endpoints map[pod1:[80]] (10.555246161s elapsed)
STEP: Creating pod pod2 in namespace services-4341
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4341 to expose endpoints map[pod1:[80] pod2:[80]]
Feb 14 01:05:02.122: INFO: Unexpected endpoints: found map[eff57dde-f502-4529-8d2a-a95b1345d527:[80]], expected map[pod1:[80] pod2:[80]] (4.225328398s elapsed, will retry)
Feb 14 01:05:05.252: INFO: successfully validated that service endpoint-test2 in namespace services-4341 exposes endpoints map[pod1:[80] pod2:[80]] (7.355454001s elapsed)
STEP: Deleting pod pod1 in namespace services-4341
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4341 to expose endpoints map[pod2:[80]]
Feb 14 01:05:06.370: INFO: successfully validated that service endpoint-test2 in namespace services-4341 exposes endpoints map[pod2:[80]] (1.113365974s elapsed)
STEP: Deleting pod pod2 in namespace services-4341
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4341 to expose endpoints map[]
Feb 14 01:05:08.104: INFO: successfully validated that service endpoint-test2 in namespace services-4341 exposes endpoints map[] (1.728384734s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:05:08.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4341" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:21.806 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":280,"completed":176,"skipped":2809,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
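The "waiting up to 3m0s ... to expose endpoints" lines above come from a poll-compare-retry loop: the framework flattens the service's Endpoints object into a pod-name-to-ports map and compares it against the expectation until the deadline. A self-contained sketch of that loop; getEndpoints is a stand-in for the client-go lookup the framework actually performs.

package main

import (
    "fmt"
    "reflect"
    "sort"
    "time"
)

// getEndpoints is a stand-in for the framework's client-go call that
// flattens an Endpoints object into pod-name -> ports.
func getEndpoints(namespace, service string) map[string][]int {
    return map[string][]int{} // stub
}

// waitForEndpoints mirrors the "waiting up to 3m0s ... to expose
// endpoints" loop in the log: poll, compare, retry until the deadline.
func waitForEndpoints(namespace, service string, expected map[string][]int) error {
    start := time.Now()
    for deadline := start.Add(3 * time.Minute); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
        got := getEndpoints(namespace, service)
        for _, ports := range got {
            sort.Ints(ports) // port order is not significant
        }
        if reflect.DeepEqual(got, expected) {
            fmt.Printf("successfully validated that service %s in namespace %s exposes endpoints %v (%v elapsed)\n",
                service, namespace, expected, time.Since(start))
            return nil
        }
        fmt.Printf("Unexpected endpoints: found %v, expected %v (%v elapsed, will retry)\n",
            got, expected, time.Since(start))
    }
    return fmt.Errorf("timed out waiting for %s/%s to expose %v", namespace, service, expected)
}

func main() {
    _ = waitForEndpoints("services-4341", "endpoint-test2", map[string][]int{})
}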
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:05:08.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-2e4b7ded-6b5e-474d-b2f9-b810b7e4c974
STEP: Creating a pod to test consume configMaps
Feb 14 01:05:09.749: INFO: Waiting up to 5m0s for pod "pod-configmaps-ab5c48a1-4957-4093-b948-b876fd7dd4da" in namespace "configmap-958" to be "success or failure"
Feb 14 01:05:09.864: INFO: Pod "pod-configmaps-ab5c48a1-4957-4093-b948-b876fd7dd4da": Phase="Pending", Reason="", readiness=false. Elapsed: 114.888746ms
Feb 14 01:05:11.882: INFO: Pod "pod-configmaps-ab5c48a1-4957-4093-b948-b876fd7dd4da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.132609759s
Feb 14 01:05:13.891: INFO: Pod "pod-configmaps-ab5c48a1-4957-4093-b948-b876fd7dd4da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142224533s
Feb 14 01:05:15.899: INFO: Pod "pod-configmaps-ab5c48a1-4957-4093-b948-b876fd7dd4da": Phase="Pending", Reason="", readiness=false. Elapsed: 6.150471543s
Feb 14 01:05:17.956: INFO: Pod "pod-configmaps-ab5c48a1-4957-4093-b948-b876fd7dd4da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.206964231s
STEP: Saw pod success
Feb 14 01:05:17.956: INFO: Pod "pod-configmaps-ab5c48a1-4957-4093-b948-b876fd7dd4da" satisfied condition "success or failure"
Feb 14 01:05:17.962: INFO: Trying to get logs from node jerma-node pod pod-configmaps-ab5c48a1-4957-4093-b948-b876fd7dd4da container configmap-volume-test: 
STEP: delete the pod
Feb 14 01:05:18.032: INFO: Waiting for pod pod-configmaps-ab5c48a1-4957-4093-b948-b876fd7dd4da to disappear
Feb 14 01:05:18.037: INFO: Pod pod-configmaps-ab5c48a1-4957-4093-b948-b876fd7dd4da no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:05:18.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-958" for this suite.

• [SLOW TEST:9.157 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":177,"skipped":2818,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
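"With mappings" means the pod does not project every ConfigMap key under its own name; it maps a chosen key to an explicit file path via the volume's items list. A sketch of the relevant volume source using the k8s.io/api types (so it needs that module in go.mod); the ConfigMap name mirrors the log, while the key/path pair is illustrative since the log does not show it.

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    // A ConfigMap volume that maps one key to an explicit file path
    // instead of projecting every key under its own name. The key and
    // path values here are illustrative.
    vol := corev1.Volume{
        Name: "configmap-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{
                    Name: "configmap-test-volume-map-2e4b7ded-6b5e-474d-b2f9-b810b7e4c974",
                },
                Items: []corev1.KeyToPath{
                    {Key: "data-1", Path: "path/to/data-2"},
                },
            },
        },
    }
    out, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(out))
}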
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:05:18.085: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 14 01:05:27.862: INFO: Successfully updated pod "labelsupdate1099d80f-bc95-44ff-8d23-29c9efbaeab1"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:05:30.054: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8766" for this suite.

• [SLOW TEST:11.983 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":280,"completed":178,"skipped":2847,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
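In the test above, the pod mounts a projected downward-API volume exposing metadata.labels; after "Successfully updated pod", the kubelet rewrites the projected file and the test reads the container's view until the new content appears. A stdlib-only sketch of that poll-a-file-until-it-changes step; the /etc/podinfo/labels path is the conventional mount point and is an assumption here, as is the initial label value.

package main

import (
    "fmt"
    "os"
    "time"
)

// waitForFileChange polls path until its contents differ from old or
// the timeout expires. This mirrors how the labels-update test waits
// for the kubelet to refresh the projected downward-API volume after
// the pod's labels are modified.
func waitForFileChange(path, old string, timeout time.Duration) (string, error) {
    for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
        data, err := os.ReadFile(path)
        if err != nil {
            continue // volume may not be refreshed yet
        }
        if s := string(data); s != old {
            return s, nil
        }
    }
    return "", fmt.Errorf("%s did not change within %v", path, timeout)
}

func main() {
    got, err := waitForFileChange("/etc/podinfo/labels", `key1="value1"`, 2*time.Minute)
    fmt.Println(got, err)
}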
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:05:30.070: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-9917
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 14 01:05:30.246: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 14 01:05:30.330: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 01:05:32.621: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 01:05:34.810: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 01:05:36.429: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 01:05:38.875: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 01:05:40.345: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 01:05:42.336: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 01:05:44.340: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 01:05:46.338: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 01:05:48.340: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 01:05:50.349: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 01:05:52.347: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 01:05:54.341: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 01:05:56.339: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 01:05:58.340: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 01:06:00.340: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 14 01:06:00.352: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 14 01:06:08.428: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.44.0.2&port=8081&tries=1'] Namespace:pod-network-test-9917 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 01:06:08.428: INFO: >>> kubeConfig: /root/.kube/config
I0214 01:06:08.494695       9 log.go:172] (0xc002e84420) (0xc002973400) Create stream
I0214 01:06:08.494985       9 log.go:172] (0xc002e84420) (0xc002973400) Stream added, broadcasting: 1
I0214 01:06:08.501353       9 log.go:172] (0xc002e84420) Reply frame received for 1
I0214 01:06:08.501436       9 log.go:172] (0xc002e84420) (0xc002973540) Create stream
I0214 01:06:08.501458       9 log.go:172] (0xc002e84420) (0xc002973540) Stream added, broadcasting: 3
I0214 01:06:08.513907       9 log.go:172] (0xc002e84420) Reply frame received for 3
I0214 01:06:08.514026       9 log.go:172] (0xc002e84420) (0xc002973680) Create stream
I0214 01:06:08.514085       9 log.go:172] (0xc002e84420) (0xc002973680) Stream added, broadcasting: 5
I0214 01:06:08.518339       9 log.go:172] (0xc002e84420) Reply frame received for 5
I0214 01:06:08.635751       9 log.go:172] (0xc002e84420) Data frame received for 3
I0214 01:06:08.635841       9 log.go:172] (0xc002973540) (3) Data frame handling
I0214 01:06:08.635864       9 log.go:172] (0xc002973540) (3) Data frame sent
I0214 01:06:08.716207       9 log.go:172] (0xc002e84420) (0xc002973540) Stream removed, broadcasting: 3
I0214 01:06:08.716388       9 log.go:172] (0xc002e84420) Data frame received for 1
I0214 01:06:08.716402       9 log.go:172] (0xc002973400) (1) Data frame handling
I0214 01:06:08.716424       9 log.go:172] (0xc002973400) (1) Data frame sent
I0214 01:06:08.716434       9 log.go:172] (0xc002e84420) (0xc002973400) Stream removed, broadcasting: 1
I0214 01:06:08.717371       9 log.go:172] (0xc002e84420) (0xc002973680) Stream removed, broadcasting: 5
I0214 01:06:08.717638       9 log.go:172] (0xc002e84420) Go away received
I0214 01:06:08.717734       9 log.go:172] (0xc002e84420) (0xc002973400) Stream removed, broadcasting: 1
I0214 01:06:08.717780       9 log.go:172] (0xc002e84420) (0xc002973540) Stream removed, broadcasting: 3
I0214 01:06:08.718163       9 log.go:172] (0xc002e84420) (0xc002973680) Stream removed, broadcasting: 5
Feb 14 01:06:08.718: INFO: Waiting for responses: map[]
Feb 14 01:06:08.774: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-9917 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 01:06:08.774: INFO: >>> kubeConfig: /root/.kube/config
I0214 01:06:08.878979       9 log.go:172] (0xc002b85600) (0xc002432be0) Create stream
I0214 01:06:08.879365       9 log.go:172] (0xc002b85600) (0xc002432be0) Stream added, broadcasting: 1
I0214 01:06:08.887414       9 log.go:172] (0xc002b85600) Reply frame received for 1
I0214 01:06:08.887664       9 log.go:172] (0xc002b85600) (0xc0029737c0) Create stream
I0214 01:06:08.887692       9 log.go:172] (0xc002b85600) (0xc0029737c0) Stream added, broadcasting: 3
I0214 01:06:08.893855       9 log.go:172] (0xc002b85600) Reply frame received for 3
I0214 01:06:08.893973       9 log.go:172] (0xc002b85600) (0xc002432c80) Create stream
I0214 01:06:08.894012       9 log.go:172] (0xc002b85600) (0xc002432c80) Stream added, broadcasting: 5
I0214 01:06:08.895984       9 log.go:172] (0xc002b85600) Reply frame received for 5
I0214 01:06:09.026941       9 log.go:172] (0xc002b85600) Data frame received for 3
I0214 01:06:09.027120       9 log.go:172] (0xc0029737c0) (3) Data frame handling
I0214 01:06:09.027174       9 log.go:172] (0xc0029737c0) (3) Data frame sent
I0214 01:06:09.148776       9 log.go:172] (0xc002b85600) (0xc0029737c0) Stream removed, broadcasting: 3
I0214 01:06:09.149116       9 log.go:172] (0xc002b85600) Data frame received for 1
I0214 01:06:09.149139       9 log.go:172] (0xc002432be0) (1) Data frame handling
I0214 01:06:09.149178       9 log.go:172] (0xc002432be0) (1) Data frame sent
I0214 01:06:09.149192       9 log.go:172] (0xc002b85600) (0xc002432be0) Stream removed, broadcasting: 1
I0214 01:06:09.149509       9 log.go:172] (0xc002b85600) (0xc002432c80) Stream removed, broadcasting: 5
I0214 01:06:09.149583       9 log.go:172] (0xc002b85600) (0xc002432be0) Stream removed, broadcasting: 1
I0214 01:06:09.149594       9 log.go:172] (0xc002b85600) (0xc0029737c0) Stream removed, broadcasting: 3
I0214 01:06:09.149618       9 log.go:172] (0xc002b85600) (0xc002432c80) Stream removed, broadcasting: 5
I0214 01:06:09.150709       9 log.go:172] (0xc002b85600) Go away received
Feb 14 01:06:09.151: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:06:09.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9917" for this suite.

• [SLOW TEST:39.097 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: udp [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":280,"completed":179,"skipped":2875,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
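Each ExecWithOptions above runs curl inside the test-container pod against the agnhost /dial endpoint, which probes the target netserver over UDP and reports what it heard; "Waiting for responses: map[]" means every expected hostname has been received and removed from the pending set. A sketch of one dial plus that bookkeeping; the {"responses":[...]} response shape is what the framework expects from the agnhost image, treated as an assumption here, and the addresses are copied from the log.

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "net/url"
)

// dialResponse matches the JSON the agnhost /dial handler returns: a
// list of answers from the probed target.
type dialResponse struct {
    Responses []string `json:"responses"`
}

// dialOnce asks the test-container pod (proxyAddr) to probe target
// over the given protocol, like the curl in the log:
// http://10.44.0.1:8080/dial?request=hostname&protocol=udp&host=10.44.0.2&port=8081&tries=1
func dialOnce(proxyAddr, protocol, targetHost string, targetPort int) ([]string, error) {
    u := fmt.Sprintf("http://%s/dial?request=hostname&protocol=%s&host=%s&port=%d&tries=1",
        proxyAddr, protocol, url.QueryEscape(targetHost), targetPort)
    resp, err := http.Get(u)
    if err != nil {
        return nil, err
    }
    defer resp.Body.Close()
    var dr dialResponse
    if err := json.NewDecoder(resp.Body).Decode(&dr); err != nil {
        return nil, err
    }
    return dr.Responses, nil
}

func main() {
    // Track which pod hostnames are still awaited; the check is done
    // when the map is empty ("Waiting for responses: map[]").
    pending := map[string]bool{"netserver-0": true, "netserver-1": true}
    answers, err := dialOnce("10.44.0.1:8080", "udp", "10.44.0.2", 8081)
    if err == nil {
        for _, h := range answers {
            delete(pending, h)
        }
    }
    fmt.Println("Waiting for responses:", pending)
}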
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:06:09.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-b763d288-88dc-4382-831d-58717a928590
STEP: Creating a pod to test consume secrets
Feb 14 01:06:09.321: INFO: Waiting up to 5m0s for pod "pod-secrets-81e8eed0-1aec-4aeb-96ea-e58b7723bb24" in namespace "secrets-1594" to be "success or failure"
Feb 14 01:06:09.368: INFO: Pod "pod-secrets-81e8eed0-1aec-4aeb-96ea-e58b7723bb24": Phase="Pending", Reason="", readiness=false. Elapsed: 46.939131ms
Feb 14 01:06:11.670: INFO: Pod "pod-secrets-81e8eed0-1aec-4aeb-96ea-e58b7723bb24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.348851157s
Feb 14 01:06:13.679: INFO: Pod "pod-secrets-81e8eed0-1aec-4aeb-96ea-e58b7723bb24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.358498367s
Feb 14 01:06:15.749: INFO: Pod "pod-secrets-81e8eed0-1aec-4aeb-96ea-e58b7723bb24": Phase="Pending", Reason="", readiness=false. Elapsed: 6.428346459s
Feb 14 01:06:17.765: INFO: Pod "pod-secrets-81e8eed0-1aec-4aeb-96ea-e58b7723bb24": Phase="Pending", Reason="", readiness=false. Elapsed: 8.44382707s
Feb 14 01:06:19.775: INFO: Pod "pod-secrets-81e8eed0-1aec-4aeb-96ea-e58b7723bb24": Phase="Pending", Reason="", readiness=false. Elapsed: 10.45395148s
Feb 14 01:06:21.785: INFO: Pod "pod-secrets-81e8eed0-1aec-4aeb-96ea-e58b7723bb24": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.463781222s
STEP: Saw pod success
Feb 14 01:06:21.785: INFO: Pod "pod-secrets-81e8eed0-1aec-4aeb-96ea-e58b7723bb24" satisfied condition "success or failure"
Feb 14 01:06:21.793: INFO: Trying to get logs from node jerma-node pod pod-secrets-81e8eed0-1aec-4aeb-96ea-e58b7723bb24 container secret-volume-test: 
STEP: delete the pod
Feb 14 01:06:21.860: INFO: Waiting for pod pod-secrets-81e8eed0-1aec-4aeb-96ea-e58b7723bb24 to disappear
Feb 14 01:06:21.872: INFO: Pod pod-secrets-81e8eed0-1aec-4aeb-96ea-e58b7723bb24 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:06:21.872: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1594" for this suite.

• [SLOW TEST:12.808 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":280,"completed":180,"skipped":2882,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
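A Secret is stored through the API with base64-encoded values, but the kubelet decodes them before writing the mounted files, so the container in the test above reads the raw bytes straight from the volume. A small stdlib sketch of that round-trip; the value is illustrative, since the log does not print the secret's content.

package main

import (
    "encoding/base64"
    "fmt"
)

func main() {
    // In a Secret manifest the value is base64-encoded (the "data"
    // field); the kubelet decodes it before writing the mounted file,
    // so the container reads the raw bytes. Value is illustrative.
    raw := []byte("value-1")
    encoded := base64.StdEncoding.EncodeToString(raw) // what the API stores

    decoded, err := base64.StdEncoding.DecodeString(encoded)
    if err != nil {
        panic(err)
    }
    fmt.Printf("data field: %s\n", encoded)   // dmFsdWUtMQ==
    fmt.Printf("file content: %s\n", decoded) // value-1
}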
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:06:21.978: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1694
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 14 01:06:22.047: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-3447'
Feb 14 01:06:22.255: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 14 01:06:22.255: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: rolling-update to same image controller
Feb 14 01:06:22.276: INFO: scanned /root for discovery docs: 
Feb 14 01:06:22.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-3447'
Feb 14 01:06:44.626: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 14 01:06:44.626: INFO: stdout: "Created e2e-test-httpd-rc-eadd1d813a7105898ba416509da8d96a\nScaling up e2e-test-httpd-rc-eadd1d813a7105898ba416509da8d96a from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-eadd1d813a7105898ba416509da8d96a up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-eadd1d813a7105898ba416509da8d96a to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Feb 14 01:06:44.626: INFO: stdout: "Created e2e-test-httpd-rc-eadd1d813a7105898ba416509da8d96a\nScaling up e2e-test-httpd-rc-eadd1d813a7105898ba416509da8d96a from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-eadd1d813a7105898ba416509da8d96a up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-eadd1d813a7105898ba416509da8d96a to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Feb 14 01:06:44.627: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-3447'
Feb 14 01:06:44.736: INFO: stderr: ""
Feb 14 01:06:44.736: INFO: stdout: "e2e-test-httpd-rc-eadd1d813a7105898ba416509da8d96a-sk4bk "
Feb 14 01:06:44.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-eadd1d813a7105898ba416509da8d96a-sk4bk -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-3447'
Feb 14 01:06:44.824: INFO: stderr: ""
Feb 14 01:06:44.825: INFO: stdout: "true"
Feb 14 01:06:44.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-eadd1d813a7105898ba416509da8d96a-sk4bk -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-3447'
Feb 14 01:06:44.936: INFO: stderr: ""
Feb 14 01:06:44.936: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Feb 14 01:06:44.936: INFO: e2e-test-httpd-rc-eadd1d813a7105898ba416509da8d96a-sk4bk is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1700
Feb 14 01:06:44.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-3447'
Feb 14 01:06:45.079: INFO: stderr: ""
Feb 14 01:06:45.080: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:06:45.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3447" for this suite.

• [SLOW TEST:23.175 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1689
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":280,"completed":181,"skipped":2885,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
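Every "Running '/usr/local/bin/kubectl ...'" line above is the framework shelling out to kubectl and capturing stdout and stderr separately, which is why both streams are logged on their own lines. A stdlib sketch of such a runner; the binary and kubeconfig paths are taken from the log, and the sample invocation mirrors the deprecated rolling-update call above (on a current cluster you would drive a Deployment rollout instead).

package main

import (
    "bytes"
    "fmt"
    "os/exec"
)

// runKubectl shells out the way the framework does, capturing stdout
// and stderr separately so both can be logged, as in the output above.
func runKubectl(args ...string) (stdout, stderr string, err error) {
    cmd := exec.Command("/usr/local/bin/kubectl",
        append([]string{"--kubeconfig=/root/.kube/config"}, args...)...)
    var out, errb bytes.Buffer
    cmd.Stdout = &out
    cmd.Stderr = &errb
    err = cmd.Run()
    return out.String(), errb.String(), err
}

func main() {
    stdout, stderr, err := runKubectl("rolling-update", "e2e-test-httpd-rc",
        "--update-period=1s", "--image=docker.io/library/httpd:2.4.38-alpine",
        "--image-pull-policy=IfNotPresent", "--namespace=kubectl-3447")
    fmt.Printf("stdout: %q\nstderr: %q\nerr: %v\n", stdout, stderr, err)
}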
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:06:45.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 14 01:06:45.269: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 14 01:06:45.282: INFO: Waiting for terminating namespaces to be deleted...
Feb 14 01:06:45.285: INFO: Logging pods the kubelet thinks are on node jerma-node before test
Feb 14 01:06:45.290: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Feb 14 01:06:45.290: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 01:06:45.290: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 14 01:06:45.290: INFO: 	Container weave ready: true, restart count 1
Feb 14 01:06:45.290: INFO: 	Container weave-npc ready: true, restart count 0
Feb 14 01:06:45.290: INFO: e2e-test-httpd-rc-eadd1d813a7105898ba416509da8d96a-sk4bk from kubectl-3447 started at 2020-02-14 01:06:24 +0000 UTC (1 container status recorded)
Feb 14 01:06:45.290: INFO: 	Container e2e-test-httpd-rc ready: true, restart count 0
Feb 14 01:06:45.290: INFO: 
Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test
Feb 14 01:06:45.309: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Feb 14 01:06:45.309: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 01:06:45.309: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 14 01:06:45.309: INFO: 	Container weave ready: true, restart count 0
Feb 14 01:06:45.309: INFO: 	Container weave-npc ready: true, restart count 0
Feb 14 01:06:45.309: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 14 01:06:45.309: INFO: 	Container kube-controller-manager ready: true, restart count 7
Feb 14 01:06:45.309: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 14 01:06:45.309: INFO: 	Container kube-scheduler ready: true, restart count 11
Feb 14 01:06:45.309: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Feb 14 01:06:45.309: INFO: 	Container etcd ready: true, restart count 1
Feb 14 01:06:45.309: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Feb 14 01:06:45.309: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 14 01:06:45.309: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 14 01:06:45.309: INFO: 	Container coredns ready: true, restart count 0
Feb 14 01:06:45.309: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Feb 14 01:06:45.309: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-61442716-bb01-46ce-96c9-9c6272458a14 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-61442716-bb01-46ce-96c9-9c6272458a14 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-61442716-bb01-46ce-96c9-9c6272458a14
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:07:21.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9317" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:36.518 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":280,"completed":182,"skipped":2901,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
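The predicate exercised above treats a host port as occupied only for an exact (hostIP, hostPort, protocol) triple, which is why pod2 (different hostIP) and pod3 (different protocol) both schedule next to pod1 on port 54321. A toy sketch of that conflict check; note the real predicate additionally treats hostIP 0.0.0.0 as colliding with every address, which this sketch omits.

package main

import "fmt"

// hostPortKey is the triple the scheduler's port-conflict predicate
// compares; two pods collide only when all three fields match.
type hostPortKey struct {
    HostIP   string
    HostPort int
    Protocol string
}

func main() {
    used := map[hostPortKey]bool{}
    pods := []struct {
        name string
        key  hostPortKey
    }{
        {"pod1", hostPortKey{"127.0.0.1", 54321, "TCP"}},
        {"pod2", hostPortKey{"127.0.0.2", 54321, "TCP"}}, // different hostIP: no conflict
        {"pod3", hostPortKey{"127.0.0.2", 54321, "UDP"}}, // different protocol: no conflict
    }
    for _, p := range pods {
        if used[p.key] {
            fmt.Printf("%s: conflict, unschedulable on this node\n", p.name)
            continue
        }
        used[p.key] = true
        fmt.Printf("%s: scheduled\n", p.name)
    }
}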
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:07:21.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-4566
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Feb 14 01:07:21.719: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
Feb 14 01:07:21.802: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 01:07:23.839: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 01:07:25.808: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 01:07:28.219: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 01:07:29.897: INFO: The status of Pod netserver-0 is Pending, waiting for it to be Running (with Ready = true)
Feb 14 01:07:31.807: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 01:07:34.326: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 01:07:35.811: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 01:07:37.811: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 01:07:39.811: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 01:07:41.813: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 01:07:43.813: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 01:07:45.811: INFO: The status of Pod netserver-0 is Running (Ready = false)
Feb 14 01:07:47.811: INFO: The status of Pod netserver-0 is Running (Ready = true)
Feb 14 01:07:47.820: INFO: The status of Pod netserver-1 is Running (Ready = true)
STEP: Creating test pods
Feb 14 01:07:55.914: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.44.0.4&port=8080&tries=1'] Namespace:pod-network-test-4566 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 01:07:55.915: INFO: >>> kubeConfig: /root/.kube/config
I0214 01:07:55.970321       9 log.go:172] (0xc002bd4000) (0xc002a32780) Create stream
I0214 01:07:55.970462       9 log.go:172] (0xc002bd4000) (0xc002a32780) Stream added, broadcasting: 1
I0214 01:07:55.974928       9 log.go:172] (0xc002bd4000) Reply frame received for 1
I0214 01:07:55.974991       9 log.go:172] (0xc002bd4000) (0xc002e3c460) Create stream
I0214 01:07:55.975007       9 log.go:172] (0xc002bd4000) (0xc002e3c460) Stream added, broadcasting: 3
I0214 01:07:55.977284       9 log.go:172] (0xc002bd4000) Reply frame received for 3
I0214 01:07:55.977317       9 log.go:172] (0xc002bd4000) (0xc002a32820) Create stream
I0214 01:07:55.977332       9 log.go:172] (0xc002bd4000) (0xc002a32820) Stream added, broadcasting: 5
I0214 01:07:55.979104       9 log.go:172] (0xc002bd4000) Reply frame received for 5
I0214 01:07:56.129350       9 log.go:172] (0xc002bd4000) Data frame received for 3
I0214 01:07:56.129571       9 log.go:172] (0xc002e3c460) (3) Data frame handling
I0214 01:07:56.129635       9 log.go:172] (0xc002e3c460) (3) Data frame sent
I0214 01:07:56.219127       9 log.go:172] (0xc002bd4000) Data frame received for 1
I0214 01:07:56.219249       9 log.go:172] (0xc002a32780) (1) Data frame handling
I0214 01:07:56.219310       9 log.go:172] (0xc002a32780) (1) Data frame sent
I0214 01:07:56.219353       9 log.go:172] (0xc002bd4000) (0xc002a32780) Stream removed, broadcasting: 1
I0214 01:07:56.220420       9 log.go:172] (0xc002bd4000) (0xc002e3c460) Stream removed, broadcasting: 3
I0214 01:07:56.220751       9 log.go:172] (0xc002bd4000) (0xc002a32820) Stream removed, broadcasting: 5
I0214 01:07:56.221009       9 log.go:172] (0xc002bd4000) Go away received
I0214 01:07:56.221280       9 log.go:172] (0xc002bd4000) (0xc002a32780) Stream removed, broadcasting: 1
I0214 01:07:56.221332       9 log.go:172] (0xc002bd4000) (0xc002e3c460) Stream removed, broadcasting: 3
I0214 01:07:56.221361       9 log.go:172] (0xc002bd4000) (0xc002a32820) Stream removed, broadcasting: 5
Feb 14 01:07:56.221: INFO: Waiting for responses: map[]
Feb 14 01:07:56.228: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-4566 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Feb 14 01:07:56.228: INFO: >>> kubeConfig: /root/.kube/config
I0214 01:07:56.269604       9 log.go:172] (0xc001d2e420) (0xc002972820) Create stream
I0214 01:07:56.269814       9 log.go:172] (0xc001d2e420) (0xc002972820) Stream added, broadcasting: 1
I0214 01:07:56.273396       9 log.go:172] (0xc001d2e420) Reply frame received for 1
I0214 01:07:56.273499       9 log.go:172] (0xc001d2e420) (0xc002e3c6e0) Create stream
I0214 01:07:56.273513       9 log.go:172] (0xc001d2e420) (0xc002e3c6e0) Stream added, broadcasting: 3
I0214 01:07:56.275192       9 log.go:172] (0xc001d2e420) Reply frame received for 3
I0214 01:07:56.275211       9 log.go:172] (0xc001d2e420) (0xc002e3c820) Create stream
I0214 01:07:56.275219       9 log.go:172] (0xc001d2e420) (0xc002e3c820) Stream added, broadcasting: 5
I0214 01:07:56.276300       9 log.go:172] (0xc001d2e420) Reply frame received for 5
I0214 01:07:56.344717       9 log.go:172] (0xc001d2e420) Data frame received for 3
I0214 01:07:56.344797       9 log.go:172] (0xc002e3c6e0) (3) Data frame handling
I0214 01:07:56.344824       9 log.go:172] (0xc002e3c6e0) (3) Data frame sent
I0214 01:07:56.410239       9 log.go:172] (0xc001d2e420) Data frame received for 1
I0214 01:07:56.410523       9 log.go:172] (0xc001d2e420) (0xc002e3c6e0) Stream removed, broadcasting: 3
I0214 01:07:56.410660       9 log.go:172] (0xc002972820) (1) Data frame handling
I0214 01:07:56.410877       9 log.go:172] (0xc002972820) (1) Data frame sent
I0214 01:07:56.411069       9 log.go:172] (0xc001d2e420) (0xc002e3c820) Stream removed, broadcasting: 5
I0214 01:07:56.411189       9 log.go:172] (0xc001d2e420) (0xc002972820) Stream removed, broadcasting: 1
I0214 01:07:56.411239       9 log.go:172] (0xc001d2e420) Go away received
I0214 01:07:56.411744       9 log.go:172] (0xc001d2e420) (0xc002972820) Stream removed, broadcasting: 1
I0214 01:07:56.411793       9 log.go:172] (0xc001d2e420) (0xc002e3c6e0) Stream removed, broadcasting: 3
I0214 01:07:56.411815       9 log.go:172] (0xc001d2e420) (0xc002e3c820) Stream removed, broadcasting: 5
Feb 14 01:07:56.412: INFO: Waiting for responses: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:07:56.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-4566" for this suite.

• [SLOW TEST:34.751 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for intra-pod communication: http [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":280,"completed":183,"skipped":2934,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
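The long run of "The status of Pod netserver-0 is ..." lines above reflects a two-stage wait: the pod must first leave Pending for Running, and then its Ready condition must flip to true once readiness probes pass. A sketch of that wait; getPodState is a stub standing in for the client-go status lookup the framework performs.

package main

import (
    "fmt"
    "time"
)

// podState is a stand-in for what the framework reads from the API:
// the pod phase plus the Ready condition.
type podState struct {
    Phase string
    Ready bool
}

// getPodState stubs the client-go lookup the framework performs.
func getPodState(namespace, name string) podState {
    return podState{Phase: "Running", Ready: true} // stub
}

// waitRunningReady reproduces the log's two-stage wait: first the pod
// must leave Pending for Running, then its Ready condition must flip
// to true (readiness probes passing).
func waitRunningReady(namespace, name string, timeout time.Duration) error {
    for deadline := time.Now().Add(timeout); time.Now().Before(deadline); time.Sleep(2 * time.Second) {
        s := getPodState(namespace, name)
        switch {
        case s.Phase != "Running":
            fmt.Printf("The status of Pod %s is %s, waiting for it to be Running (with Ready = true)\n", name, s.Phase)
        case !s.Ready:
            fmt.Printf("The status of Pod %s is Running (Ready = false)\n", name)
        default:
            fmt.Printf("The status of Pod %s is Running (Ready = true)\n", name)
            return nil
        }
    }
    return fmt.Errorf("pod %s/%s not running and ready within %v", namespace, name, timeout)
}

func main() {
    _ = waitRunningReady("pod-network-test-4566", "netserver-0", 5*time.Minute)
}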
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:07:56.428: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:07:56.524: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Feb 14 01:08:01.551: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Feb 14 01:08:07.570: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 14 01:08:15.635: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-470 /apis/apps/v1/namespaces/deployment-470/deployments/test-cleanup-deployment 940ad3f4-8d42-41a0-a4fc-5d8dca831e8a 8282672 1 2020-02-14 01:08:07 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003308038  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-02-14 01:08:07 +0000 UTC,LastTransitionTime:2020-02-14 01:08:07 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-02-14 01:08:14 +0000 UTC,LastTransitionTime:2020-02-14 01:08:07 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Feb 14 01:08:15.637: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-470 /apis/apps/v1/namespaces/deployment-470/replicasets/test-cleanup-deployment-55ffc6b7b6 0b6f92b0-d468-4b64-b1b1-0ed876a625e3 8282656 1 2020-02-14 01:08:07 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 940ad3f4-8d42-41a0-a4fc-5d8dca831e8a 0xc003709117 0xc003709118}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003709188  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Feb 14 01:08:15.641: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-p6xxx" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-p6xxx test-cleanup-deployment-55ffc6b7b6- deployment-470 /api/v1/namespaces/deployment-470/pods/test-cleanup-deployment-55ffc6b7b6-p6xxx 0a77763a-842a-4fa9-a387-5b26481422c2 8282655 0 2020-02-14 01:08:07 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 0b6f92b0-d468-4b64-b1b1-0ed876a625e3 0xc003709507 0xc003709508}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-wp65k,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-wp65k,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-wp65k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 01:08:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 01:08:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 01:08:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 01:08:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-02-14 01:08:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-02-14 01:08:13 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://819907c317fd1b6beb85a79e9bbbe0cbb4f14142f4ea3a43f3f6bc0d36865cd4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:08:15.641: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-470" for this suite.

• [SLOW TEST:19.223 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":280,"completed":184,"skipped":2945,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
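The deployment dump above shows RevisionHistoryLimit:*0, so as soon as the new ReplicaSet is available the controller deletes every older, fully scaled-down ReplicaSet; "Waiting for deployment test-cleanup-deployment history to be cleaned up" is the test observing that pruning. A sketch of the history rule (order old ReplicaSets by revision, keep the newest `limit`, delete the rest); the ReplicaSet name in main is illustrative.

package main

import (
    "fmt"
    "sort"
)

// replicaSet carries just what the cleanup rule needs: the name and
// the deployment.kubernetes.io/revision annotation, parsed to an int.
type replicaSet struct {
    Name     string
    Revision int
}

// staleReplicaSets applies the controller's history rule: order old
// (fully scaled-down) ReplicaSets by revision and return all but the
// newest `limit` of them for deletion. With RevisionHistoryLimit 0,
// as in the test's deployment, every old ReplicaSet is deleted.
func staleReplicaSets(old []replicaSet, limit int) []replicaSet {
    sort.Slice(old, func(i, j int) bool { return old[i].Revision < old[j].Revision })
    if len(old) <= limit {
        return nil
    }
    return old[:len(old)-limit]
}

func main() {
    old := []replicaSet{{"cleanup-pod-rs", 1}} // illustrative pre-existing RS
    for _, rs := range staleReplicaSets(old, 0) {
        fmt.Println("would delete:", rs.Name)
    }
}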
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:08:15.652: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:08:16.530: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Feb 14 01:08:20.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9493 create -f -'
Feb 14 01:08:23.963: INFO: stderr: ""
Feb 14 01:08:23.963: INFO: stdout: "e2e-test-crd-publish-openapi-2356-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Feb 14 01:08:23.963: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9493 delete e2e-test-crd-publish-openapi-2356-crds test-cr'
Feb 14 01:08:24.137: INFO: stderr: ""
Feb 14 01:08:24.137: INFO: stdout: "e2e-test-crd-publish-openapi-2356-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Feb 14 01:08:24.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9493 apply -f -'
Feb 14 01:08:24.369: INFO: stderr: ""
Feb 14 01:08:24.369: INFO: stdout: "e2e-test-crd-publish-openapi-2356-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Feb 14 01:08:24.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9493 delete e2e-test-crd-publish-openapi-2356-crds test-cr'
Feb 14 01:08:24.463: INFO: stderr: ""
Feb 14 01:08:24.464: INFO: stdout: "e2e-test-crd-publish-openapi-2356-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Feb 14 01:08:24.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2356-crds'
Feb 14 01:08:24.755: INFO: stderr: ""
Feb 14 01:08:24.756: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-2356-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:08:28.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-9493" for this suite.

• [SLOW TEST:13.051 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":280,"completed":185,"skipped":2952,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
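"Preserving unknown fields in an embedded object" comes from the CRD's structural schema: a property marked x-kubernetes-preserve-unknown-fields: true keeps whatever the client sends instead of pruning it, and x-kubernetes-embedded-resource marks the property as a full Kubernetes object (apiVersion/kind/metadata). That is why the kubectl create and apply calls above succeed with arbitrary properties. A stdlib sketch that assembles such a schema fragment as JSON; the extension field names follow the apiextensions.k8s.io/v1 spec, but the fragment is a sketch, not the exact fixture the test registers.

package main

import (
    "encoding/json"
    "fmt"
    "os"
)

func main() {
    // Schema fragment for a CRD version: spec is an embedded object
    // whose unknown fields the API server must keep rather than prune.
    schema := map[string]interface{}{
        "openAPIV3Schema": map[string]interface{}{
            "type": "object",
            "properties": map[string]interface{}{
                "spec": map[string]interface{}{
                    "type":                                 "object",
                    "x-kubernetes-embedded-resource":       true,
                    "x-kubernetes-preserve-unknown-fields": true,
                },
            },
        },
    }
    enc := json.NewEncoder(os.Stdout)
    enc.SetIndent("", "  ")
    if err := enc.Encode(schema); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}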
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:08:28.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:08:28.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3452'
Feb 14 01:08:29.980: INFO: stderr: ""
Feb 14 01:08:29.980: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Feb 14 01:08:29.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-3452'
Feb 14 01:08:30.503: INFO: stderr: ""
Feb 14 01:08:30.504: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Feb 14 01:08:31.513: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:08:31.513: INFO: Found 0 / 1
Feb 14 01:08:32.515: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:08:32.516: INFO: Found 0 / 1
Feb 14 01:08:33.519: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:08:33.519: INFO: Found 0 / 1
Feb 14 01:08:34.517: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:08:34.518: INFO: Found 0 / 1
Feb 14 01:08:35.514: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:08:35.514: INFO: Found 0 / 1
Feb 14 01:08:36.832: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:08:36.832: INFO: Found 0 / 1
Feb 14 01:08:37.513: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:08:37.513: INFO: Found 1 / 1
Feb 14 01:08:37.513: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Feb 14 01:08:37.518: INFO: Selector matched 1 pods for map[app:agnhost]
Feb 14 01:08:37.519: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Feb 14 01:08:37.519: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-2xb65 --namespace=kubectl-3452'
Feb 14 01:08:37.732: INFO: stderr: ""
Feb 14 01:08:37.732: INFO: stdout: "Name:         agnhost-master-2xb65\nNamespace:    kubectl-3452\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Fri, 14 Feb 2020 01:08:30 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.44.0.1\nIPs:\n  IP:           10.44.0.1\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://a9a16de962cc92852e093919c1be6ea797a17620deec41a60c7372897e560451\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Fri, 14 Feb 2020 01:08:36 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-r6fws (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-r6fws:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-r6fws\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled    default-scheduler    Successfully assigned kubectl-3452/agnhost-master-2xb65 to jerma-node\n  Normal  Pulled     5s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    2s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    1s         kubelet, jerma-node  Started container agnhost-master\n"
Feb 14 01:08:37.732: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-3452'
Feb 14 01:08:37.877: INFO: stderr: ""
Feb 14 01:08:37.877: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-3452\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  8s    replication-controller  Created pod: agnhost-master-2xb65\n"
Feb 14 01:08:37.877: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-3452'
Feb 14 01:08:38.000: INFO: stderr: ""
Feb 14 01:08:38.000: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-3452\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.199.162\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Feb 14 01:08:38.005: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Feb 14 01:08:38.212: INFO: stderr: ""
Feb 14 01:08:38.212: INFO: stdout: "Name:               jerma-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             \nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     \n  RenewTime:       Fri, 14 Feb 2020 01:08:34 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Fri, 14 Feb 2020 01:08:12 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Fri, 14 Feb 2020 01:08:12 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Fri, 14 Feb 2020 01:08:12 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Fri, 14 Feb 2020 01:08:12 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (3 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66        0 (0%)        0 (0%)      0 (0%)           0 (0%)         40d\n  kube-system                 weave-net-kz8lv         20m (0%)      0 (0%)      0 (0%)           0 (0%)         40d\n  kubectl-3452                agnhost-master-2xb65    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Feb 14 01:08:38.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-3452'
Feb 14 01:08:38.322: INFO: stderr: ""
Feb 14 01:08:38.323: INFO: stdout: "Name:         kubectl-3452\nLabels:       e2e-framework=kubectl\n              e2e-run=dcf9342b-1028-40e0-b4aa-5f90211f3e72\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:08:38.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3452" for this suite.

• [SLOW TEST:9.629 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1156
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":280,"completed":186,"skipped":2962,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:08:38.335: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 14 01:08:51.369: INFO: Successfully updated pod "annotationupdatec6531d3f-6403-4f36-8cb5-4dfcd72557bc"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:08:53.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-878" for this suite.

• [SLOW TEST:15.106 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":187,"skipped":2974,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:08:53.445: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 14 01:08:53.601: INFO: Waiting up to 5m0s for pod "pod-cd37da69-229c-412e-9fc8-9f5ef61719a1" in namespace "emptydir-8932" to be "success or failure"
Feb 14 01:08:53.746: INFO: Pod "pod-cd37da69-229c-412e-9fc8-9f5ef61719a1": Phase="Pending", Reason="", readiness=false. Elapsed: 144.426963ms
Feb 14 01:08:55.757: INFO: Pod "pod-cd37da69-229c-412e-9fc8-9f5ef61719a1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155955994s
Feb 14 01:08:57.838: INFO: Pod "pod-cd37da69-229c-412e-9fc8-9f5ef61719a1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.237191742s
Feb 14 01:08:59.846: INFO: Pod "pod-cd37da69-229c-412e-9fc8-9f5ef61719a1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.244662597s
Feb 14 01:09:01.857: INFO: Pod "pod-cd37da69-229c-412e-9fc8-9f5ef61719a1": Phase="Pending", Reason="", readiness=false. Elapsed: 8.256011973s
Feb 14 01:09:03.870: INFO: Pod "pod-cd37da69-229c-412e-9fc8-9f5ef61719a1": Phase="Pending", Reason="", readiness=false. Elapsed: 10.268801477s
Feb 14 01:09:05.881: INFO: Pod "pod-cd37da69-229c-412e-9fc8-9f5ef61719a1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.279926608s
STEP: Saw pod success
Feb 14 01:09:05.881: INFO: Pod "pod-cd37da69-229c-412e-9fc8-9f5ef61719a1" satisfied condition "success or failure"
Feb 14 01:09:05.887: INFO: Trying to get logs from node jerma-node pod pod-cd37da69-229c-412e-9fc8-9f5ef61719a1 container test-container: <nil>
STEP: delete the pod
Feb 14 01:09:05.927: INFO: Waiting for pod pod-cd37da69-229c-412e-9fc8-9f5ef61719a1 to disappear
Feb 14 01:09:05.932: INFO: Pod pod-cd37da69-229c-412e-9fc8-9f5ef61719a1 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:09:05.932: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8932" for this suite.

• [SLOW TEST:12.497 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":188,"skipped":2994,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:09:05.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 14 01:09:06.751: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 14 01:09:08.766: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239346, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239346, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239346, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239346, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:09:10.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239346, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239346, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239346, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239346, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:09:12.779: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239346, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239346, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239346, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239346, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 14 01:09:15.807: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:09:15.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-27" for this suite.
STEP: Destroying namespace "webhook-27-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.167 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":280,"completed":189,"skipped":3011,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:09:16.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Feb 14 01:09:32.535: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 01:09:32.548: INFO: Pod pod-with-poststart-http-hook still exists
Feb 14 01:09:34.549: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 01:09:34.563: INFO: Pod pod-with-poststart-http-hook still exists
Feb 14 01:09:36.549: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 01:09:36.567: INFO: Pod pod-with-poststart-http-hook still exists
Feb 14 01:09:38.549: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 01:09:38.566: INFO: Pod pod-with-poststart-http-hook still exists
Feb 14 01:09:40.549: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Feb 14 01:09:40.562: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:09:40.563: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-907" for this suite.

• [SLOW TEST:24.468 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":280,"completed":190,"skipped":3019,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:09:40.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Starting the proxy
Feb 14 01:09:40.699: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix278310136/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:09:40.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1693" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":280,"completed":191,"skipped":3042,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:09:40.787: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-f298aa93-44cb-4497-99f2-7f9d75e23286 in namespace container-probe-5717
Feb 14 01:09:50.856: INFO: Started pod liveness-f298aa93-44cb-4497-99f2-7f9d75e23286 in namespace container-probe-5717
STEP: checking the pod's current state and verifying that restartCount is present
Feb 14 01:09:50.861: INFO: Initial restart count of pod liveness-f298aa93-44cb-4497-99f2-7f9d75e23286 is 0
Feb 14 01:10:06.959: INFO: Restart count of pod container-probe-5717/liveness-f298aa93-44cb-4497-99f2-7f9d75e23286 is now 1 (16.097996386s elapsed)
Feb 14 01:10:25.062: INFO: Restart count of pod container-probe-5717/liveness-f298aa93-44cb-4497-99f2-7f9d75e23286 is now 2 (34.200917085s elapsed)
Feb 14 01:10:45.160: INFO: Restart count of pod container-probe-5717/liveness-f298aa93-44cb-4497-99f2-7f9d75e23286 is now 3 (54.298758611s elapsed)
Feb 14 01:11:07.354: INFO: Restart count of pod container-probe-5717/liveness-f298aa93-44cb-4497-99f2-7f9d75e23286 is now 4 (1m16.493447155s elapsed)
Feb 14 01:12:07.794: INFO: Restart count of pod container-probe-5717/liveness-f298aa93-44cb-4497-99f2-7f9d75e23286 is now 5 (2m16.933727441s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:12:07.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5717" for this suite.

• [SLOW TEST:147.167 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":280,"completed":192,"skipped":3046,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:12:07.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 14 01:12:08.047: INFO: Waiting up to 5m0s for pod "downward-api-4cd7ce00-9adf-49e3-8071-cfab161410e5" in namespace "downward-api-2143" to be "success or failure"
Feb 14 01:12:08.122: INFO: Pod "downward-api-4cd7ce00-9adf-49e3-8071-cfab161410e5": Phase="Pending", Reason="", readiness=false. Elapsed: 74.014025ms
Feb 14 01:12:10.129: INFO: Pod "downward-api-4cd7ce00-9adf-49e3-8071-cfab161410e5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081209865s
Feb 14 01:12:12.258: INFO: Pod "downward-api-4cd7ce00-9adf-49e3-8071-cfab161410e5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.210930088s
Feb 14 01:12:14.268: INFO: Pod "downward-api-4cd7ce00-9adf-49e3-8071-cfab161410e5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.220722344s
Feb 14 01:12:16.279: INFO: Pod "downward-api-4cd7ce00-9adf-49e3-8071-cfab161410e5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.231083833s
STEP: Saw pod success
Feb 14 01:12:16.279: INFO: Pod "downward-api-4cd7ce00-9adf-49e3-8071-cfab161410e5" satisfied condition "success or failure"
Feb 14 01:12:16.282: INFO: Trying to get logs from node jerma-node pod downward-api-4cd7ce00-9adf-49e3-8071-cfab161410e5 container dapi-container: <nil>
STEP: delete the pod
Feb 14 01:12:16.337: INFO: Waiting for pod downward-api-4cd7ce00-9adf-49e3-8071-cfab161410e5 to disappear
Feb 14 01:12:16.348: INFO: Pod downward-api-4cd7ce00-9adf-49e3-8071-cfab161410e5 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:12:16.348: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2143" for this suite.

• [SLOW TEST:8.405 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":280,"completed":193,"skipped":3049,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:12:16.364: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:12:32.997: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8691" for this suite.

• [SLOW TEST:16.645 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":280,"completed":194,"skipped":3099,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:12:33.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override command
Feb 14 01:12:33.246: INFO: Waiting up to 5m0s for pod "client-containers-b5daac8e-e7bc-4b2f-b51f-3f1be0b5970c" in namespace "containers-8905" to be "success or failure"
Feb 14 01:12:33.256: INFO: Pod "client-containers-b5daac8e-e7bc-4b2f-b51f-3f1be0b5970c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.922357ms
Feb 14 01:12:35.264: INFO: Pod "client-containers-b5daac8e-e7bc-4b2f-b51f-3f1be0b5970c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017036655s
Feb 14 01:12:37.305: INFO: Pod "client-containers-b5daac8e-e7bc-4b2f-b51f-3f1be0b5970c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058553503s
Feb 14 01:12:39.317: INFO: Pod "client-containers-b5daac8e-e7bc-4b2f-b51f-3f1be0b5970c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070530596s
Feb 14 01:12:41.326: INFO: Pod "client-containers-b5daac8e-e7bc-4b2f-b51f-3f1be0b5970c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079315006s
Feb 14 01:12:43.335: INFO: Pod "client-containers-b5daac8e-e7bc-4b2f-b51f-3f1be0b5970c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08848182s
STEP: Saw pod success
Feb 14 01:12:43.336: INFO: Pod "client-containers-b5daac8e-e7bc-4b2f-b51f-3f1be0b5970c" satisfied condition "success or failure"
Feb 14 01:12:43.341: INFO: Trying to get logs from node jerma-node pod client-containers-b5daac8e-e7bc-4b2f-b51f-3f1be0b5970c container test-container: <nil>
STEP: delete the pod
Feb 14 01:12:43.406: INFO: Waiting for pod client-containers-b5daac8e-e7bc-4b2f-b51f-3f1be0b5970c to disappear
Feb 14 01:12:43.420: INFO: Pod client-containers-b5daac8e-e7bc-4b2f-b51f-3f1be0b5970c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:12:43.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8905" for this suite.

• [SLOW TEST:10.440 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":280,"completed":195,"skipped":3191,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:12:43.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:74
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:12:43.518: INFO: Creating deployment "test-recreate-deployment"
Feb 14 01:12:43.525: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Feb 14 01:12:43.539: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Feb 14 01:12:46.119: INFO: Waiting deployment "test-recreate-deployment" to complete
Feb 14 01:12:46.124: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239563, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239563, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239563, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239563, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:12:48.130: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239563, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239563, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239563, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239563, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:12:50.131: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239563, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239563, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239563, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239563, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:12:52.130: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Feb 14 01:12:52.151: INFO: Updating deployment test-recreate-deployment
Feb 14 01:12:52.151: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
Feb 14 01:12:52.642: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-8069 /apis/apps/v1/namespaces/deployment-8069/deployments/test-recreate-deployment 3ef7e892-907e-4079-8a8c-bc926511731a 8283735 2 2020-02-14 01:12:43 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038ddd48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-02-14 01:12:52 +0000 UTC,LastTransitionTime:2020-02-14 01:12:52 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-02-14 01:12:52 +0000 UTC,LastTransitionTime:2020-02-14 01:12:43 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Feb 14 01:12:52.693: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-8069 /apis/apps/v1/namespaces/deployment-8069/replicasets/test-recreate-deployment-5f94c574ff c1aa9fd5-ad43-4bca-be98-9748f2008f83 8283733 1 2020-02-14 01:12:52 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 3ef7e892-907e-4079-8a8c-bc926511731a 0xc0026a2707 0xc0026a2708}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0026a2768  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 14 01:12:52.693: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Feb 14 01:12:52.693: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-8069 /apis/apps/v1/namespaces/deployment-8069/replicasets/test-recreate-deployment-799c574856 669fe917-24b1-46cb-b453-ea5c1ea54341 8283724 2 2020-02-14 01:12:43 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 3ef7e892-907e-4079-8a8c-bc926511731a 0xc0026a27d7 0xc0026a27d8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0026a2848  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Feb 14 01:12:52.698: INFO: Pod "test-recreate-deployment-5f94c574ff-2qc6n" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-2qc6n test-recreate-deployment-5f94c574ff- deployment-8069 /api/v1/namespaces/deployment-8069/pods/test-recreate-deployment-5f94c574ff-2qc6n 77c4e046-dbd3-431f-affa-30a40663b461 8283736 0 2020-02-14 01:12:52 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff c1aa9fd5-ad43-4bca-be98-9748f2008f83 0xc0026a2c97 0xc0026a2c98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-qtstl,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-qtstl,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-qtstl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 01:12:52 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 01:12:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 01:12:52 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-02-14 01:12:52 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-02-14 01:12:52 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:12:52.698: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8069" for this suite.

• [SLOW TEST:9.251 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":280,"completed":196,"skipped":3202,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:12:52.706: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Feb 14 01:12:53.017: INFO: Waiting up to 5m0s for pod "pod-659ceff2-63ed-4b16-a6d3-33077a2d56ec" in namespace "emptydir-9807" to be "success or failure"
Feb 14 01:12:53.170: INFO: Pod "pod-659ceff2-63ed-4b16-a6d3-33077a2d56ec": Phase="Pending", Reason="", readiness=false. Elapsed: 152.931063ms
Feb 14 01:12:55.177: INFO: Pod "pod-659ceff2-63ed-4b16-a6d3-33077a2d56ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.160057552s
Feb 14 01:12:57.184: INFO: Pod "pod-659ceff2-63ed-4b16-a6d3-33077a2d56ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167178044s
Feb 14 01:12:59.242: INFO: Pod "pod-659ceff2-63ed-4b16-a6d3-33077a2d56ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.22528693s
Feb 14 01:13:01.248: INFO: Pod "pod-659ceff2-63ed-4b16-a6d3-33077a2d56ec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.23067427s
Feb 14 01:13:03.283: INFO: Pod "pod-659ceff2-63ed-4b16-a6d3-33077a2d56ec": Phase="Pending", Reason="", readiness=false. Elapsed: 10.265735835s
Feb 14 01:13:05.345: INFO: Pod "pod-659ceff2-63ed-4b16-a6d3-33077a2d56ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.327662526s
STEP: Saw pod success
Feb 14 01:13:05.345: INFO: Pod "pod-659ceff2-63ed-4b16-a6d3-33077a2d56ec" satisfied condition "success or failure"
Feb 14 01:13:05.359: INFO: Trying to get logs from node jerma-node pod pod-659ceff2-63ed-4b16-a6d3-33077a2d56ec container test-container: 
STEP: delete the pod
Feb 14 01:13:05.502: INFO: Waiting for pod pod-659ceff2-63ed-4b16-a6d3-33077a2d56ec to disappear
Feb 14 01:13:05.509: INFO: Pod pod-659ceff2-63ed-4b16-a6d3-33077a2d56ec no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:13:05.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9807" for this suite.

• [SLOW TEST:12.818 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":197,"skipped":3204,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:13:05.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:53
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod test-webserver-1ff8728b-a663-42eb-8842-fd90c7beda5d in namespace container-probe-8635
Feb 14 01:13:13.728: INFO: Started pod test-webserver-1ff8728b-a663-42eb-8842-fd90c7beda5d in namespace container-probe-8635
STEP: checking the pod's current state and verifying that restartCount is present
Feb 14 01:13:13.735: INFO: Initial restart count of pod test-webserver-1ff8728b-a663-42eb-8842-fd90c7beda5d is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:17:15.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8635" for this suite.

• [SLOW TEST:249.712 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":280,"completed":198,"skipped":3206,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:17:15.241: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Feb 14 01:17:28.412: INFO: Successfully updated pod "pod-update-activedeadlineseconds-a238a2d7-b0ba-46a2-8e62-3df88dca7a14"
Feb 14 01:17:28.412: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-a238a2d7-b0ba-46a2-8e62-3df88dca7a14" in namespace "pods-1120" to be "terminated due to deadline exceeded"
Feb 14 01:17:28.418: INFO: Pod "pod-update-activedeadlineseconds-a238a2d7-b0ba-46a2-8e62-3df88dca7a14": Phase="Running", Reason="", readiness=true. Elapsed: 5.802673ms
Feb 14 01:17:30.426: INFO: Pod "pod-update-activedeadlineseconds-a238a2d7-b0ba-46a2-8e62-3df88dca7a14": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.014055052s
Feb 14 01:17:30.426: INFO: Pod "pod-update-activedeadlineseconds-a238a2d7-b0ba-46a2-8e62-3df88dca7a14" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:17:30.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1120" for this suite.

• [SLOW TEST:15.204 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":280,"completed":199,"skipped":3234,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:17:30.446: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-7804
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-7804
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-7804
Feb 14 01:17:30.648: INFO: Found 0 stateful pods, waiting for 1
Feb 14 01:17:40.658: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Feb 14 01:17:40.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7804 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 14 01:17:41.028: INFO: stderr: "I0214 01:17:40.815472    3435 log.go:172] (0xc00099ee70) (0xc000a86140) Create stream\nI0214 01:17:40.815714    3435 log.go:172] (0xc00099ee70) (0xc000a86140) Stream added, broadcasting: 1\nI0214 01:17:40.819306    3435 log.go:172] (0xc00099ee70) Reply frame received for 1\nI0214 01:17:40.819351    3435 log.go:172] (0xc00099ee70) (0xc000924000) Create stream\nI0214 01:17:40.819363    3435 log.go:172] (0xc00099ee70) (0xc000924000) Stream added, broadcasting: 3\nI0214 01:17:40.821502    3435 log.go:172] (0xc00099ee70) Reply frame received for 3\nI0214 01:17:40.821531    3435 log.go:172] (0xc00099ee70) (0xc000924280) Create stream\nI0214 01:17:40.821542    3435 log.go:172] (0xc00099ee70) (0xc000924280) Stream added, broadcasting: 5\nI0214 01:17:40.822652    3435 log.go:172] (0xc00099ee70) Reply frame received for 5\nI0214 01:17:40.922689    3435 log.go:172] (0xc00099ee70) Data frame received for 5\nI0214 01:17:40.922743    3435 log.go:172] (0xc000924280) (5) Data frame handling\nI0214 01:17:40.922767    3435 log.go:172] (0xc000924280) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0214 01:17:40.952239    3435 log.go:172] (0xc00099ee70) Data frame received for 3\nI0214 01:17:40.952282    3435 log.go:172] (0xc000924000) (3) Data frame handling\nI0214 01:17:40.952325    3435 log.go:172] (0xc000924000) (3) Data frame sent\nI0214 01:17:41.018570    3435 log.go:172] (0xc00099ee70) Data frame received for 1\nI0214 01:17:41.018626    3435 log.go:172] (0xc000a86140) (1) Data frame handling\nI0214 01:17:41.018682    3435 log.go:172] (0xc000a86140) (1) Data frame sent\nI0214 01:17:41.018715    3435 log.go:172] (0xc00099ee70) (0xc000a86140) Stream removed, broadcasting: 1\nI0214 01:17:41.018766    3435 log.go:172] (0xc00099ee70) (0xc000924280) Stream removed, broadcasting: 5\nI0214 01:17:41.018808    3435 log.go:172] (0xc00099ee70) (0xc000924000) Stream removed, broadcasting: 3\nI0214 01:17:41.018861    3435 log.go:172] (0xc00099ee70) Go away received\nI0214 01:17:41.019349    3435 log.go:172] (0xc00099ee70) (0xc000a86140) Stream removed, broadcasting: 1\nI0214 01:17:41.019365    3435 log.go:172] (0xc00099ee70) (0xc000924000) Stream removed, broadcasting: 3\nI0214 01:17:41.019371    3435 log.go:172] (0xc00099ee70) (0xc000924280) Stream removed, broadcasting: 5\n"
Feb 14 01:17:41.028: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 14 01:17:41.028: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 14 01:17:41.034: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Feb 14 01:17:51.041: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 01:17:51.041: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 01:17:51.068: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999389s
Feb 14 01:17:52.085: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.990197933s
Feb 14 01:17:53.095: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.972415526s
Feb 14 01:17:54.105: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.963272534s
Feb 14 01:17:55.122: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.952388241s
Feb 14 01:17:56.131: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.936378804s
Feb 14 01:17:57.144: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.92728121s
Feb 14 01:17:58.152: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.91360722s
Feb 14 01:17:59.160: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.906153716s
Feb 14 01:18:00.168: INFO: Verifying statefulset ss doesn't scale past 1 for another 898.004783ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them are running in namespace statefulset-7804
Feb 14 01:18:01.179: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7804 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 14 01:18:01.524: INFO: stderr: "I0214 01:18:01.345702    3452 log.go:172] (0xc000b6a000) (0xc000b60320) Create stream\nI0214 01:18:01.345906    3452 log.go:172] (0xc000b6a000) (0xc000b60320) Stream added, broadcasting: 1\nI0214 01:18:01.349164    3452 log.go:172] (0xc000b6a000) Reply frame received for 1\nI0214 01:18:01.349262    3452 log.go:172] (0xc000b6a000) (0xc000b603c0) Create stream\nI0214 01:18:01.349282    3452 log.go:172] (0xc000b6a000) (0xc000b603c0) Stream added, broadcasting: 3\nI0214 01:18:01.352236    3452 log.go:172] (0xc000b6a000) Reply frame received for 3\nI0214 01:18:01.352274    3452 log.go:172] (0xc000b6a000) (0xc000a86000) Create stream\nI0214 01:18:01.352289    3452 log.go:172] (0xc000b6a000) (0xc000a86000) Stream added, broadcasting: 5\nI0214 01:18:01.354074    3452 log.go:172] (0xc000b6a000) Reply frame received for 5\nI0214 01:18:01.443018    3452 log.go:172] (0xc000b6a000) Data frame received for 5\nI0214 01:18:01.443082    3452 log.go:172] (0xc000a86000) (5) Data frame handling\nI0214 01:18:01.443094    3452 log.go:172] (0xc000a86000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0214 01:18:01.443116    3452 log.go:172] (0xc000b6a000) Data frame received for 3\nI0214 01:18:01.443125    3452 log.go:172] (0xc000b603c0) (3) Data frame handling\nI0214 01:18:01.443136    3452 log.go:172] (0xc000b603c0) (3) Data frame sent\nI0214 01:18:01.512751    3452 log.go:172] (0xc000b6a000) Data frame received for 1\nI0214 01:18:01.512827    3452 log.go:172] (0xc000b60320) (1) Data frame handling\nI0214 01:18:01.512875    3452 log.go:172] (0xc000b60320) (1) Data frame sent\nI0214 01:18:01.512915    3452 log.go:172] (0xc000b6a000) (0xc000b60320) Stream removed, broadcasting: 1\nI0214 01:18:01.514226    3452 log.go:172] (0xc000b6a000) (0xc000b603c0) Stream removed, broadcasting: 3\nI0214 01:18:01.514294    3452 log.go:172] (0xc000b6a000) (0xc000a86000) Stream removed, broadcasting: 5\nI0214 01:18:01.514388    3452 log.go:172] (0xc000b6a000) (0xc000b60320) Stream removed, broadcasting: 1\nI0214 01:18:01.514408    3452 log.go:172] (0xc000b6a000) (0xc000b603c0) Stream removed, broadcasting: 3\nI0214 01:18:01.514436    3452 log.go:172] (0xc000b6a000) Go away received\nI0214 01:18:01.514510    3452 log.go:172] (0xc000b6a000) (0xc000a86000) Stream removed, broadcasting: 5\n"
Feb 14 01:18:01.524: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 14 01:18:01.524: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 14 01:18:01.553: INFO: Found 2 stateful pods, waiting for 3
Feb 14 01:18:11.566: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 01:18:11.566: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 01:18:11.566: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 14 01:18:21.561: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 01:18:21.561: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 01:18:21.561: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Feb 14 01:18:21.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7804 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 14 01:18:22.124: INFO: stderr: "I0214 01:18:21.887309    3476 log.go:172] (0xc000aeedc0) (0xc000b263c0) Create stream\nI0214 01:18:21.887399    3476 log.go:172] (0xc000aeedc0) (0xc000b263c0) Stream added, broadcasting: 1\nI0214 01:18:21.896485    3476 log.go:172] (0xc000aeedc0) Reply frame received for 1\nI0214 01:18:21.896567    3476 log.go:172] (0xc000aeedc0) (0xc000641b80) Create stream\nI0214 01:18:21.896584    3476 log.go:172] (0xc000aeedc0) (0xc000641b80) Stream added, broadcasting: 3\nI0214 01:18:21.898916    3476 log.go:172] (0xc000aeedc0) Reply frame received for 3\nI0214 01:18:21.898953    3476 log.go:172] (0xc000aeedc0) (0xc000606780) Create stream\nI0214 01:18:21.898963    3476 log.go:172] (0xc000aeedc0) (0xc000606780) Stream added, broadcasting: 5\nI0214 01:18:21.900754    3476 log.go:172] (0xc000aeedc0) Reply frame received for 5\nI0214 01:18:22.000482    3476 log.go:172] (0xc000aeedc0) Data frame received for 3\nI0214 01:18:22.001117    3476 log.go:172] (0xc000641b80) (3) Data frame handling\nI0214 01:18:22.001219    3476 log.go:172] (0xc000641b80) (3) Data frame sent\nI0214 01:18:22.001656    3476 log.go:172] (0xc000aeedc0) Data frame received for 5\nI0214 01:18:22.001675    3476 log.go:172] (0xc000606780) (5) Data frame handling\nI0214 01:18:22.001697    3476 log.go:172] (0xc000606780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0214 01:18:22.113442    3476 log.go:172] (0xc000aeedc0) Data frame received for 1\nI0214 01:18:22.113569    3476 log.go:172] (0xc000b263c0) (1) Data frame handling\nI0214 01:18:22.113599    3476 log.go:172] (0xc000b263c0) (1) Data frame sent\nI0214 01:18:22.113839    3476 log.go:172] (0xc000aeedc0) (0xc000b263c0) Stream removed, broadcasting: 1\nI0214 01:18:22.114320    3476 log.go:172] (0xc000aeedc0) (0xc000641b80) Stream removed, broadcasting: 3\nI0214 01:18:22.114630    3476 log.go:172] (0xc000aeedc0) (0xc000606780) Stream removed, broadcasting: 5\nI0214 01:18:22.114786    3476 log.go:172] (0xc000aeedc0) (0xc000b263c0) Stream removed, broadcasting: 1\nI0214 01:18:22.114801    3476 log.go:172] (0xc000aeedc0) (0xc000641b80) Stream removed, broadcasting: 3\nI0214 01:18:22.114808    3476 log.go:172] (0xc000aeedc0) (0xc000606780) Stream removed, broadcasting: 5\nI0214 01:18:22.114905    3476 log.go:172] (0xc000aeedc0) Go away received\n"
Feb 14 01:18:22.124: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 14 01:18:22.124: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 14 01:18:22.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7804 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 14 01:18:22.728: INFO: stderr: "I0214 01:18:22.422036    3500 log.go:172] (0xc000989340) (0xc000a3a320) Create stream\nI0214 01:18:22.422372    3500 log.go:172] (0xc000989340) (0xc000a3a320) Stream added, broadcasting: 1\nI0214 01:18:22.427394    3500 log.go:172] (0xc000989340) Reply frame received for 1\nI0214 01:18:22.427443    3500 log.go:172] (0xc000989340) (0xc000a3a3c0) Create stream\nI0214 01:18:22.427452    3500 log.go:172] (0xc000989340) (0xc000a3a3c0) Stream added, broadcasting: 3\nI0214 01:18:22.428251    3500 log.go:172] (0xc000989340) Reply frame received for 3\nI0214 01:18:22.428288    3500 log.go:172] (0xc000989340) (0xc000ae00a0) Create stream\nI0214 01:18:22.428300    3500 log.go:172] (0xc000989340) (0xc000ae00a0) Stream added, broadcasting: 5\nI0214 01:18:22.429327    3500 log.go:172] (0xc000989340) Reply frame received for 5\nI0214 01:18:22.545814    3500 log.go:172] (0xc000989340) Data frame received for 5\nI0214 01:18:22.546448    3500 log.go:172] (0xc000ae00a0) (5) Data frame handling\nI0214 01:18:22.546584    3500 log.go:172] (0xc000ae00a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0214 01:18:22.633546    3500 log.go:172] (0xc000989340) Data frame received for 3\nI0214 01:18:22.633599    3500 log.go:172] (0xc000a3a3c0) (3) Data frame handling\nI0214 01:18:22.633630    3500 log.go:172] (0xc000a3a3c0) (3) Data frame sent\nI0214 01:18:22.718431    3500 log.go:172] (0xc000989340) Data frame received for 1\nI0214 01:18:22.718540    3500 log.go:172] (0xc000989340) (0xc000a3a3c0) Stream removed, broadcasting: 3\nI0214 01:18:22.718670    3500 log.go:172] (0xc000a3a320) (1) Data frame handling\nI0214 01:18:22.718722    3500 log.go:172] (0xc000a3a320) (1) Data frame sent\nI0214 01:18:22.718751    3500 log.go:172] (0xc000989340) (0xc000a3a320) Stream removed, broadcasting: 1\nI0214 01:18:22.718971    3500 log.go:172] (0xc000989340) (0xc000ae00a0) Stream removed, broadcasting: 5\nI0214 01:18:22.719459    3500 log.go:172] (0xc000989340) Go away received\nI0214 01:18:22.719927    3500 log.go:172] (0xc000989340) (0xc000a3a320) Stream removed, broadcasting: 1\nI0214 01:18:22.719944    3500 log.go:172] (0xc000989340) (0xc000a3a3c0) Stream removed, broadcasting: 3\nI0214 01:18:22.719950    3500 log.go:172] (0xc000989340) (0xc000ae00a0) Stream removed, broadcasting: 5\n"
Feb 14 01:18:22.729: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 14 01:18:22.729: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 14 01:18:22.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7804 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Feb 14 01:18:25.473: INFO: stderr: "I0214 01:18:25.235169    3520 log.go:172] (0xc00094db80) (0xc000680780) Create stream\nI0214 01:18:25.235376    3520 log.go:172] (0xc00094db80) (0xc000680780) Stream added, broadcasting: 1\nI0214 01:18:25.248812    3520 log.go:172] (0xc00094db80) Reply frame received for 1\nI0214 01:18:25.248971    3520 log.go:172] (0xc00094db80) (0xc000763400) Create stream\nI0214 01:18:25.248986    3520 log.go:172] (0xc00094db80) (0xc000763400) Stream added, broadcasting: 3\nI0214 01:18:25.250483    3520 log.go:172] (0xc00094db80) Reply frame received for 3\nI0214 01:18:25.250527    3520 log.go:172] (0xc00094db80) (0xc0005de6e0) Create stream\nI0214 01:18:25.250541    3520 log.go:172] (0xc00094db80) (0xc0005de6e0) Stream added, broadcasting: 5\nI0214 01:18:25.253111    3520 log.go:172] (0xc00094db80) Reply frame received for 5\nI0214 01:18:25.345021    3520 log.go:172] (0xc00094db80) Data frame received for 5\nI0214 01:18:25.345151    3520 log.go:172] (0xc0005de6e0) (5) Data frame handling\nI0214 01:18:25.345229    3520 log.go:172] (0xc0005de6e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0214 01:18:25.368413    3520 log.go:172] (0xc00094db80) Data frame received for 3\nI0214 01:18:25.368487    3520 log.go:172] (0xc000763400) (3) Data frame handling\nI0214 01:18:25.368527    3520 log.go:172] (0xc000763400) (3) Data frame sent\nI0214 01:18:25.460354    3520 log.go:172] (0xc00094db80) (0xc000763400) Stream removed, broadcasting: 3\nI0214 01:18:25.460785    3520 log.go:172] (0xc00094db80) Data frame received for 1\nI0214 01:18:25.460873    3520 log.go:172] (0xc000680780) (1) Data frame handling\nI0214 01:18:25.460907    3520 log.go:172] (0xc000680780) (1) Data frame sent\nI0214 01:18:25.460948    3520 log.go:172] (0xc00094db80) (0xc000680780) Stream removed, broadcasting: 1\nI0214 01:18:25.461047    3520 log.go:172] (0xc00094db80) (0xc0005de6e0) Stream removed, broadcasting: 5\nI0214 01:18:25.461125    3520 log.go:172] (0xc00094db80) Go away received\nI0214 01:18:25.462178    3520 log.go:172] (0xc00094db80) (0xc000680780) Stream removed, broadcasting: 1\nI0214 01:18:25.462208    3520 log.go:172] (0xc00094db80) (0xc000763400) Stream removed, broadcasting: 3\nI0214 01:18:25.462222    3520 log.go:172] (0xc00094db80) (0xc0005de6e0) Stream removed, broadcasting: 5\n"
Feb 14 01:18:25.473: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Feb 14 01:18:25.473: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Feb 14 01:18:25.473: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 01:18:25.483: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 01:18:25.483: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 01:18:25.483: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Feb 14 01:18:25.509: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999623s
Feb 14 01:18:26.546: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.983121962s
Feb 14 01:18:27.558: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.94540796s
Feb 14 01:18:28.571: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.933411476s
Feb 14 01:18:29.584: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.919756485s
Feb 14 01:18:30.600: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.9079897s
Feb 14 01:18:31.611: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.891652639s
Feb 14 01:18:32.621: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.88062641s
Feb 14 01:18:33.648: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.870558297s
Feb 14 01:18:34.659: INFO: Verifying statefulset ss doesn't scale past 3 for another 843.783884ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-7804
Feb 14 01:18:35.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7804 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 14 01:18:36.169: INFO: stderr: "I0214 01:18:35.990481    3551 log.go:172] (0xc000c166e0) (0xc00063c8c0) Create stream\nI0214 01:18:35.990632    3551 log.go:172] (0xc000c166e0) (0xc00063c8c0) Stream added, broadcasting: 1\nI0214 01:18:36.001278    3551 log.go:172] (0xc000c166e0) Reply frame received for 1\nI0214 01:18:36.001451    3551 log.go:172] (0xc000c166e0) (0xc0006eb5e0) Create stream\nI0214 01:18:36.001466    3551 log.go:172] (0xc000c166e0) (0xc0006eb5e0) Stream added, broadcasting: 3\nI0214 01:18:36.004746    3551 log.go:172] (0xc000c166e0) Reply frame received for 3\nI0214 01:18:36.004896    3551 log.go:172] (0xc000c166e0) (0xc0002a8000) Create stream\nI0214 01:18:36.004920    3551 log.go:172] (0xc000c166e0) (0xc0002a8000) Stream added, broadcasting: 5\nI0214 01:18:36.007425    3551 log.go:172] (0xc000c166e0) Reply frame received for 5\nI0214 01:18:36.091864    3551 log.go:172] (0xc000c166e0) Data frame received for 5\nI0214 01:18:36.091936    3551 log.go:172] (0xc0002a8000) (5) Data frame handling\nI0214 01:18:36.091953    3551 log.go:172] (0xc0002a8000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0214 01:18:36.091990    3551 log.go:172] (0xc000c166e0) Data frame received for 3\nI0214 01:18:36.092001    3551 log.go:172] (0xc0006eb5e0) (3) Data frame handling\nI0214 01:18:36.092024    3551 log.go:172] (0xc0006eb5e0) (3) Data frame sent\nI0214 01:18:36.156869    3551 log.go:172] (0xc000c166e0) Data frame received for 1\nI0214 01:18:36.157328    3551 log.go:172] (0xc000c166e0) (0xc0006eb5e0) Stream removed, broadcasting: 3\nI0214 01:18:36.157408    3551 log.go:172] (0xc00063c8c0) (1) Data frame handling\nI0214 01:18:36.157438    3551 log.go:172] (0xc00063c8c0) (1) Data frame sent\nI0214 01:18:36.157464    3551 log.go:172] (0xc000c166e0) (0xc0002a8000) Stream removed, broadcasting: 5\nI0214 01:18:36.157588    3551 log.go:172] (0xc000c166e0) (0xc00063c8c0) Stream removed, broadcasting: 1\nI0214 01:18:36.157634    3551 log.go:172] (0xc000c166e0) Go away received\nI0214 01:18:36.158813    3551 log.go:172] (0xc000c166e0) (0xc00063c8c0) Stream removed, broadcasting: 1\nI0214 01:18:36.158841    3551 log.go:172] (0xc000c166e0) (0xc0006eb5e0) Stream removed, broadcasting: 3\nI0214 01:18:36.158857    3551 log.go:172] (0xc000c166e0) (0xc0002a8000) Stream removed, broadcasting: 5\n"
Feb 14 01:18:36.169: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 14 01:18:36.169: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 14 01:18:36.169: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7804 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 14 01:18:36.574: INFO: stderr: "I0214 01:18:36.330537    3573 log.go:172] (0xc000aa6000) (0xc000703d60) Create stream\nI0214 01:18:36.330718    3573 log.go:172] (0xc000aa6000) (0xc000703d60) Stream added, broadcasting: 1\nI0214 01:18:36.334589    3573 log.go:172] (0xc000aa6000) Reply frame received for 1\nI0214 01:18:36.334626    3573 log.go:172] (0xc000aa6000) (0xc000638960) Create stream\nI0214 01:18:36.334635    3573 log.go:172] (0xc000aa6000) (0xc000638960) Stream added, broadcasting: 3\nI0214 01:18:36.335804    3573 log.go:172] (0xc000aa6000) Reply frame received for 3\nI0214 01:18:36.335840    3573 log.go:172] (0xc000aa6000) (0xc00076d5e0) Create stream\nI0214 01:18:36.335850    3573 log.go:172] (0xc000aa6000) (0xc00076d5e0) Stream added, broadcasting: 5\nI0214 01:18:36.337916    3573 log.go:172] (0xc000aa6000) Reply frame received for 5\nI0214 01:18:36.426359    3573 log.go:172] (0xc000aa6000) Data frame received for 5\nI0214 01:18:36.426430    3573 log.go:172] (0xc00076d5e0) (5) Data frame handling\nI0214 01:18:36.426449    3573 log.go:172] (0xc00076d5e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0214 01:18:36.428035    3573 log.go:172] (0xc000aa6000) Data frame received for 3\nI0214 01:18:36.428067    3573 log.go:172] (0xc000638960) (3) Data frame handling\nI0214 01:18:36.428087    3573 log.go:172] (0xc000638960) (3) Data frame sent\nI0214 01:18:36.554371    3573 log.go:172] (0xc000aa6000) (0xc000638960) Stream removed, broadcasting: 3\nI0214 01:18:36.554640    3573 log.go:172] (0xc000aa6000) Data frame received for 1\nI0214 01:18:36.554684    3573 log.go:172] (0xc000703d60) (1) Data frame handling\nI0214 01:18:36.554724    3573 log.go:172] (0xc000703d60) (1) Data frame sent\nI0214 01:18:36.554897    3573 log.go:172] (0xc000aa6000) (0xc000703d60) Stream removed, broadcasting: 1\nI0214 01:18:36.555776    3573 log.go:172] (0xc000aa6000) (0xc00076d5e0) Stream removed, broadcasting: 5\nI0214 01:18:36.556372    3573 log.go:172] (0xc000aa6000) Go away received\nI0214 01:18:36.557001    3573 log.go:172] (0xc000aa6000) (0xc000703d60) Stream removed, broadcasting: 1\nI0214 01:18:36.557242    3573 log.go:172] (0xc000aa6000) (0xc000638960) Stream removed, broadcasting: 3\nI0214 01:18:36.557343    3573 log.go:172] (0xc000aa6000) (0xc00076d5e0) Stream removed, broadcasting: 5\n"
Feb 14 01:18:36.575: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 14 01:18:36.575: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 14 01:18:36.575: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-7804 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Feb 14 01:18:36.905: INFO: stderr: "I0214 01:18:36.735017    3595 log.go:172] (0xc0009a46e0) (0xc000bd6140) Create stream\nI0214 01:18:36.735187    3595 log.go:172] (0xc0009a46e0) (0xc000bd6140) Stream added, broadcasting: 1\nI0214 01:18:36.739818    3595 log.go:172] (0xc0009a46e0) Reply frame received for 1\nI0214 01:18:36.739952    3595 log.go:172] (0xc0009a46e0) (0xc0006bbcc0) Create stream\nI0214 01:18:36.740001    3595 log.go:172] (0xc0009a46e0) (0xc0006bbcc0) Stream added, broadcasting: 3\nI0214 01:18:36.744042    3595 log.go:172] (0xc0009a46e0) Reply frame received for 3\nI0214 01:18:36.744089    3595 log.go:172] (0xc0009a46e0) (0xc0006bbea0) Create stream\nI0214 01:18:36.744101    3595 log.go:172] (0xc0009a46e0) (0xc0006bbea0) Stream added, broadcasting: 5\nI0214 01:18:36.745229    3595 log.go:172] (0xc0009a46e0) Reply frame received for 5\nI0214 01:18:36.808805    3595 log.go:172] (0xc0009a46e0) Data frame received for 3\nI0214 01:18:36.808880    3595 log.go:172] (0xc0006bbcc0) (3) Data frame handling\nI0214 01:18:36.808908    3595 log.go:172] (0xc0006bbcc0) (3) Data frame sent\nI0214 01:18:36.808981    3595 log.go:172] (0xc0009a46e0) Data frame received for 5\nI0214 01:18:36.809013    3595 log.go:172] (0xc0006bbea0) (5) Data frame handling\nI0214 01:18:36.809062    3595 log.go:172] (0xc0006bbea0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0214 01:18:36.894827    3595 log.go:172] (0xc0009a46e0) Data frame received for 1\nI0214 01:18:36.895232    3595 log.go:172] (0xc0009a46e0) (0xc0006bbcc0) Stream removed, broadcasting: 3\nI0214 01:18:36.895337    3595 log.go:172] (0xc000bd6140) (1) Data frame handling\nI0214 01:18:36.895402    3595 log.go:172] (0xc000bd6140) (1) Data frame sent\nI0214 01:18:36.895432    3595 log.go:172] (0xc0009a46e0) (0xc0006bbea0) Stream removed, broadcasting: 5\nI0214 01:18:36.895459    3595 log.go:172] (0xc0009a46e0) (0xc000bd6140) Stream removed, broadcasting: 1\nI0214 01:18:36.895480    3595 log.go:172] (0xc0009a46e0) Go away received\nI0214 01:18:36.896303    3595 log.go:172] (0xc0009a46e0) (0xc000bd6140) Stream removed, broadcasting: 1\nI0214 01:18:36.896326    3595 log.go:172] (0xc0009a46e0) (0xc0006bbcc0) Stream removed, broadcasting: 3\nI0214 01:18:36.896335    3595 log.go:172] (0xc0009a46e0) (0xc0006bbea0) Stream removed, broadcasting: 5\n"
Feb 14 01:18:36.906: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Feb 14 01:18:36.906: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Feb 14 01:18:36.906: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 14 01:19:06.934: INFO: Deleting all statefulset in ns statefulset-7804
Feb 14 01:19:06.938: INFO: Scaling statefulset ss to 0
Feb 14 01:19:06.952: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 01:19:06.956: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:19:06.981: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7804" for this suite.

• [SLOW TEST:96.576 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":280,"completed":200,"skipped":3245,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:19:07.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-60f04220-bdf8-459c-8882-5a43dadcbd32
STEP: Creating a pod to test consume configMaps
Feb 14 01:19:07.174: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-90857db9-57dc-43e0-a742-a70211547c8f" in namespace "projected-441" to be "success or failure"
Feb 14 01:19:07.179: INFO: Pod "pod-projected-configmaps-90857db9-57dc-43e0-a742-a70211547c8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.722752ms
Feb 14 01:19:09.187: INFO: Pod "pod-projected-configmaps-90857db9-57dc-43e0-a742-a70211547c8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012083232s
Feb 14 01:19:11.197: INFO: Pod "pod-projected-configmaps-90857db9-57dc-43e0-a742-a70211547c8f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022121995s
Feb 14 01:19:13.204: INFO: Pod "pod-projected-configmaps-90857db9-57dc-43e0-a742-a70211547c8f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029711138s
Feb 14 01:19:15.214: INFO: Pod "pod-projected-configmaps-90857db9-57dc-43e0-a742-a70211547c8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.039553482s
STEP: Saw pod success
Feb 14 01:19:15.215: INFO: Pod "pod-projected-configmaps-90857db9-57dc-43e0-a742-a70211547c8f" satisfied condition "success or failure"
Feb 14 01:19:15.220: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-90857db9-57dc-43e0-a742-a70211547c8f container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 01:19:16.299: INFO: Waiting for pod pod-projected-configmaps-90857db9-57dc-43e0-a742-a70211547c8f to disappear
Feb 14 01:19:16.308: INFO: Pod pod-projected-configmaps-90857db9-57dc-43e0-a742-a70211547c8f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:19:16.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-441" for this suite.

• [SLOW TEST:9.305 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":280,"completed":201,"skipped":3256,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:19:16.332: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 14 01:19:16.531: INFO: Waiting up to 5m0s for pod "downwardapi-volume-08407274-f230-4d81-aaf0-904ea4142854" in namespace "downward-api-8375" to be "success or failure"
Feb 14 01:19:16.588: INFO: Pod "downwardapi-volume-08407274-f230-4d81-aaf0-904ea4142854": Phase="Pending", Reason="", readiness=false. Elapsed: 56.478969ms
Feb 14 01:19:18.618: INFO: Pod "downwardapi-volume-08407274-f230-4d81-aaf0-904ea4142854": Phase="Pending", Reason="", readiness=false. Elapsed: 2.086371756s
Feb 14 01:19:20.625: INFO: Pod "downwardapi-volume-08407274-f230-4d81-aaf0-904ea4142854": Phase="Pending", Reason="", readiness=false. Elapsed: 4.093367626s
Feb 14 01:19:22.638: INFO: Pod "downwardapi-volume-08407274-f230-4d81-aaf0-904ea4142854": Phase="Pending", Reason="", readiness=false. Elapsed: 6.106805763s
Feb 14 01:19:24.649: INFO: Pod "downwardapi-volume-08407274-f230-4d81-aaf0-904ea4142854": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.117758999s
STEP: Saw pod success
Feb 14 01:19:24.650: INFO: Pod "downwardapi-volume-08407274-f230-4d81-aaf0-904ea4142854" satisfied condition "success or failure"
Feb 14 01:19:24.657: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-08407274-f230-4d81-aaf0-904ea4142854 container client-container: 
STEP: delete the pod
Feb 14 01:19:24.855: INFO: Waiting for pod downwardapi-volume-08407274-f230-4d81-aaf0-904ea4142854 to disappear
Feb 14 01:19:24.914: INFO: Pod downwardapi-volume-08407274-f230-4d81-aaf0-904ea4142854 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:19:24.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8375" for this suite.

• [SLOW TEST:8.601 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":202,"skipped":3276,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:19:24.938: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 14 01:19:33.360: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:19:33.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-6164" for this suite.

• [SLOW TEST:8.487 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":280,"completed":203,"skipped":3303,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:19:33.427: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 14 01:19:33.560: INFO: Waiting up to 5m0s for pod "downwardapi-volume-50eca158-0236-455f-a8ec-1c19c899f67f" in namespace "projected-3668" to be "success or failure"
Feb 14 01:19:33.709: INFO: Pod "downwardapi-volume-50eca158-0236-455f-a8ec-1c19c899f67f": Phase="Pending", Reason="", readiness=false. Elapsed: 148.855078ms
Feb 14 01:19:35.725: INFO: Pod "downwardapi-volume-50eca158-0236-455f-a8ec-1c19c899f67f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.165246333s
Feb 14 01:19:37.741: INFO: Pod "downwardapi-volume-50eca158-0236-455f-a8ec-1c19c899f67f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.181604545s
Feb 14 01:19:39.752: INFO: Pod "downwardapi-volume-50eca158-0236-455f-a8ec-1c19c899f67f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.191738867s
Feb 14 01:19:41.762: INFO: Pod "downwardapi-volume-50eca158-0236-455f-a8ec-1c19c899f67f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.202643234s
STEP: Saw pod success
Feb 14 01:19:41.763: INFO: Pod "downwardapi-volume-50eca158-0236-455f-a8ec-1c19c899f67f" satisfied condition "success or failure"
Feb 14 01:19:41.767: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-50eca158-0236-455f-a8ec-1c19c899f67f container client-container: 
STEP: delete the pod
Feb 14 01:19:41.816: INFO: Waiting for pod downwardapi-volume-50eca158-0236-455f-a8ec-1c19c899f67f to disappear
Feb 14 01:19:41.887: INFO: Pod downwardapi-volume-50eca158-0236-455f-a8ec-1c19c899f67f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:19:41.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3668" for this suite.

• [SLOW TEST:8.476 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":280,"completed":204,"skipped":3315,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:19:41.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:19:42.091: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:19:50.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6136" for this suite.

• [SLOW TEST:8.279 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":280,"completed":205,"skipped":3318,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:19:50.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 14 01:19:51.426: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 14 01:19:53.499: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239991, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239991, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239991, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717239991, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:19:55.563, 01:19:57.512, 01:19:59.508: INFO: deployment status unchanged on re-poll: ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1; Available=False (MinimumReplicasUnavailable: "Deployment does not have minimum availability."), Progressing=True (ReplicaSetUpdated: ReplicaSet "sample-webhook-deployment-5f65f8c764" is progressing.)
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 14 01:20:02.556: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:20:02.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-1350-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:20:03.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8460" for this suite.
STEP: Destroying namespace "webhook-8460-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.881 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":280,"completed":206,"skipped":3320,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:20:04.069: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Feb 14 01:20:10.620: INFO: 0 pods remaining
Feb 14 01:20:10.620: INFO: 0 pods have nil DeletionTimestamp
Feb 14 01:20:10.620: INFO: 
STEP: Gathering metrics
W0214 01:20:11.614393       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 01:20:11.614: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:20:11.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5938" for this suite.

• [SLOW TEST:8.035 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":280,"completed":207,"skipped":3350,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:20:12.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-93512fff-bd76-4335-8a97-70fa649010ee
STEP: Creating a pod to test consume secrets
Feb 14 01:20:12.971: INFO: Waiting up to 5m0s for pod "pod-secrets-7cf2fde3-a164-404c-94be-5bed3319b536" in namespace "secrets-3706" to be "success or failure"
Feb 14 01:20:12.979: INFO: Pod "pod-secrets-7cf2fde3-a164-404c-94be-5bed3319b536": Phase="Pending", Reason="", readiness=false. Elapsed: 8.04667ms
Feb 14 01:20:15.671: INFO: Pod "pod-secrets-7cf2fde3-a164-404c-94be-5bed3319b536": Phase="Pending", Reason="", readiness=false. Elapsed: 2.699813645s
Feb 14 01:20:17.831: INFO: Pod "pod-secrets-7cf2fde3-a164-404c-94be-5bed3319b536": Phase="Pending", Reason="", readiness=false. Elapsed: 4.859669789s
Feb 14 01:20:20.373: INFO: Pod "pod-secrets-7cf2fde3-a164-404c-94be-5bed3319b536": Phase="Pending", Reason="", readiness=false. Elapsed: 7.402028855s
Feb 14 01:20:22.506: INFO: Pod "pod-secrets-7cf2fde3-a164-404c-94be-5bed3319b536": Phase="Pending", Reason="", readiness=false. Elapsed: 9.535007806s
Feb 14 01:20:24.515: INFO: Pod "pod-secrets-7cf2fde3-a164-404c-94be-5bed3319b536": Phase="Pending", Reason="", readiness=false. Elapsed: 11.544064329s
Feb 14 01:20:26.529: INFO: Pod "pod-secrets-7cf2fde3-a164-404c-94be-5bed3319b536": Phase="Pending", Reason="", readiness=false. Elapsed: 13.557442366s
Feb 14 01:20:28.539: INFO: Pod "pod-secrets-7cf2fde3-a164-404c-94be-5bed3319b536": Phase="Pending", Reason="", readiness=false. Elapsed: 15.567807431s
Feb 14 01:20:30.548: INFO: Pod "pod-secrets-7cf2fde3-a164-404c-94be-5bed3319b536": Phase="Succeeded", Reason="", readiness=false. Elapsed: 17.576524621s
STEP: Saw pod success
Feb 14 01:20:30.548: INFO: Pod "pod-secrets-7cf2fde3-a164-404c-94be-5bed3319b536" satisfied condition "success or failure"
Feb 14 01:20:30.552: INFO: Trying to get logs from node jerma-node pod pod-secrets-7cf2fde3-a164-404c-94be-5bed3319b536 container secret-volume-test: 
STEP: delete the pod
Feb 14 01:20:30.701: INFO: Waiting for pod pod-secrets-7cf2fde3-a164-404c-94be-5bed3319b536 to disappear
Feb 14 01:20:30.718: INFO: Pod pod-secrets-7cf2fde3-a164-404c-94be-5bed3319b536 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:20:30.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3706" for this suite.

• [SLOW TEST:18.652 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":208,"skipped":3354,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:20:30.758: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 14 01:20:30.924: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d7a7fa6d-b7f0-46d5-a255-b34dd56dbb35" in namespace "projected-4613" to be "success or failure"
Feb 14 01:20:31.004: INFO: Pod "downwardapi-volume-d7a7fa6d-b7f0-46d5-a255-b34dd56dbb35": Phase="Pending", Reason="", readiness=false. Elapsed: 79.330019ms
Feb 14 01:20:33.032: INFO: Pod "downwardapi-volume-d7a7fa6d-b7f0-46d5-a255-b34dd56dbb35": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107501022s
Feb 14 01:20:35.038: INFO: Pod "downwardapi-volume-d7a7fa6d-b7f0-46d5-a255-b34dd56dbb35": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113641815s
Feb 14 01:20:37.062: INFO: Pod "downwardapi-volume-d7a7fa6d-b7f0-46d5-a255-b34dd56dbb35": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137512194s
Feb 14 01:20:39.074: INFO: Pod "downwardapi-volume-d7a7fa6d-b7f0-46d5-a255-b34dd56dbb35": Phase="Pending", Reason="", readiness=false. Elapsed: 8.149472284s
Feb 14 01:20:41.086: INFO: Pod "downwardapi-volume-d7a7fa6d-b7f0-46d5-a255-b34dd56dbb35": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.161862596s
STEP: Saw pod success
Feb 14 01:20:41.087: INFO: Pod "downwardapi-volume-d7a7fa6d-b7f0-46d5-a255-b34dd56dbb35" satisfied condition "success or failure"
Feb 14 01:20:41.091: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d7a7fa6d-b7f0-46d5-a255-b34dd56dbb35 container client-container: 
STEP: delete the pod
Feb 14 01:20:41.346: INFO: Waiting for pod downwardapi-volume-d7a7fa6d-b7f0-46d5-a255-b34dd56dbb35 to disappear
Feb 14 01:20:41.359: INFO: Pod downwardapi-volume-d7a7fa6d-b7f0-46d5-a255-b34dd56dbb35 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:20:41.359: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4613" for this suite.

• [SLOW TEST:10.620 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":280,"completed":209,"skipped":3355,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:20:41.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-04c150e3-bd54-454e-9480-14b10b384c28
STEP: Creating a pod to test consume secrets
Feb 14 01:20:41.509: INFO: Waiting up to 5m0s for pod "pod-secrets-2d964633-a70b-439e-8c35-2d171d4c9ad3" in namespace "secrets-7605" to be "success or failure"
Feb 14 01:20:41.515: INFO: Pod "pod-secrets-2d964633-a70b-439e-8c35-2d171d4c9ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.504996ms
Feb 14 01:20:43.592: INFO: Pod "pod-secrets-2d964633-a70b-439e-8c35-2d171d4c9ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082716998s
Feb 14 01:20:45.601: INFO: Pod "pod-secrets-2d964633-a70b-439e-8c35-2d171d4c9ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092046445s
Feb 14 01:20:47.610: INFO: Pod "pod-secrets-2d964633-a70b-439e-8c35-2d171d4c9ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.10080083s
Feb 14 01:20:49.618: INFO: Pod "pod-secrets-2d964633-a70b-439e-8c35-2d171d4c9ad3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.108481896s
STEP: Saw pod success
Feb 14 01:20:49.618: INFO: Pod "pod-secrets-2d964633-a70b-439e-8c35-2d171d4c9ad3" satisfied condition "success or failure"
Feb 14 01:20:49.621: INFO: Trying to get logs from node jerma-node pod pod-secrets-2d964633-a70b-439e-8c35-2d171d4c9ad3 container secret-volume-test: 
STEP: delete the pod
Feb 14 01:20:49.660: INFO: Waiting for pod pod-secrets-2d964633-a70b-439e-8c35-2d171d4c9ad3 to disappear
Feb 14 01:20:49.665: INFO: Pod pod-secrets-2d964633-a70b-439e-8c35-2d171d4c9ad3 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:20:49.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7605" for this suite.

• [SLOW TEST:8.296 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":210,"skipped":3355,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:20:49.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:20:49.814: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-0182c4f3-35ee-44b0-bdeb-f3f095a3e116" in namespace "security-context-test-4585" to be "success or failure"
Feb 14 01:20:49.836: INFO: Pod "alpine-nnp-false-0182c4f3-35ee-44b0-bdeb-f3f095a3e116": Phase="Pending", Reason="", readiness=false. Elapsed: 22.608492ms
Feb 14 01:20:51.868: INFO: Pod "alpine-nnp-false-0182c4f3-35ee-44b0-bdeb-f3f095a3e116": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054109888s
Feb 14 01:20:53.936: INFO: Pod "alpine-nnp-false-0182c4f3-35ee-44b0-bdeb-f3f095a3e116": Phase="Pending", Reason="", readiness=false. Elapsed: 4.122162863s
Feb 14 01:20:55.988: INFO: Pod "alpine-nnp-false-0182c4f3-35ee-44b0-bdeb-f3f095a3e116": Phase="Pending", Reason="", readiness=false. Elapsed: 6.174591049s
Feb 14 01:20:58.329: INFO: Pod "alpine-nnp-false-0182c4f3-35ee-44b0-bdeb-f3f095a3e116": Phase="Pending", Reason="", readiness=false. Elapsed: 8.51528029s
Feb 14 01:21:00.336: INFO: Pod "alpine-nnp-false-0182c4f3-35ee-44b0-bdeb-f3f095a3e116": Phase="Pending", Reason="", readiness=false. Elapsed: 10.522495114s
Feb 14 01:21:02.345: INFO: Pod "alpine-nnp-false-0182c4f3-35ee-44b0-bdeb-f3f095a3e116": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.531052235s
Feb 14 01:21:02.345: INFO: Pod "alpine-nnp-false-0182c4f3-35ee-44b0-bdeb-f3f095a3e116" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:21:02.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-4585" for this suite.

• [SLOW TEST:12.698 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:291
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":211,"skipped":3359,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:21:02.376: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:21:02.504: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-cd9d27af-b436-4b6f-9a2e-b46a68d40f0a" in namespace "security-context-test-6427" to be "success or failure"
Feb 14 01:21:02.511: INFO: Pod "busybox-privileged-false-cd9d27af-b436-4b6f-9a2e-b46a68d40f0a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.434813ms
Feb 14 01:21:04.521: INFO: Pod "busybox-privileged-false-cd9d27af-b436-4b6f-9a2e-b46a68d40f0a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016943721s
Feb 14 01:21:06.534: INFO: Pod "busybox-privileged-false-cd9d27af-b436-4b6f-9a2e-b46a68d40f0a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029874515s
Feb 14 01:21:08.549: INFO: Pod "busybox-privileged-false-cd9d27af-b436-4b6f-9a2e-b46a68d40f0a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044649483s
Feb 14 01:21:10.561: INFO: Pod "busybox-privileged-false-cd9d27af-b436-4b6f-9a2e-b46a68d40f0a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.056465633s
Feb 14 01:21:12.581: INFO: Pod "busybox-privileged-false-cd9d27af-b436-4b6f-9a2e-b46a68d40f0a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.076548774s
Feb 14 01:21:12.581: INFO: Pod "busybox-privileged-false-cd9d27af-b436-4b6f-9a2e-b46a68d40f0a" satisfied condition "success or failure"
Feb 14 01:21:12.614: INFO: Got logs for pod "busybox-privileged-false-cd9d27af-b436-4b6f-9a2e-b46a68d40f0a": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:21:12.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-6427" for this suite.

• [SLOW TEST:10.258 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:227
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":212,"skipped":3370,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:21:12.636: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 14 01:21:12.835: INFO: Waiting up to 5m0s for pod "downwardapi-volume-792dc542-01b7-4f7e-b6ec-4df84f9d46d9" in namespace "projected-1128" to be "success or failure"
Feb 14 01:21:12.905: INFO: Pod "downwardapi-volume-792dc542-01b7-4f7e-b6ec-4df84f9d46d9": Phase="Pending", Reason="", readiness=false. Elapsed: 69.010226ms
Feb 14 01:21:14.914: INFO: Pod "downwardapi-volume-792dc542-01b7-4f7e-b6ec-4df84f9d46d9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078003816s
Feb 14 01:21:16.976: INFO: Pod "downwardapi-volume-792dc542-01b7-4f7e-b6ec-4df84f9d46d9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.140181648s
Feb 14 01:21:18.984: INFO: Pod "downwardapi-volume-792dc542-01b7-4f7e-b6ec-4df84f9d46d9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.14826013s
Feb 14 01:21:21.078: INFO: Pod "downwardapi-volume-792dc542-01b7-4f7e-b6ec-4df84f9d46d9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.241970608s
STEP: Saw pod success
Feb 14 01:21:21.078: INFO: Pod "downwardapi-volume-792dc542-01b7-4f7e-b6ec-4df84f9d46d9" satisfied condition "success or failure"
Feb 14 01:21:21.085: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-792dc542-01b7-4f7e-b6ec-4df84f9d46d9 container client-container: 
STEP: delete the pod
Feb 14 01:21:21.148: INFO: Waiting for pod downwardapi-volume-792dc542-01b7-4f7e-b6ec-4df84f9d46d9 to disappear
Feb 14 01:21:21.273: INFO: Pod downwardapi-volume-792dc542-01b7-4f7e-b6ec-4df84f9d46d9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:21:21.273: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1128" for this suite.

• [SLOW TEST:8.657 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":213,"skipped":3381,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:21:21.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:21:21.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Feb 14 01:21:21.523: INFO: stderr: ""
Feb 14 01:21:21.523: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18+\", GitVersion:\"v1.18.0-alpha.2.152+426b3538900329\", GitCommit:\"426b3538900329ed2ce5a0cb1cccf2f0ff32db60\", GitTreeState:\"clean\", BuildDate:\"2020-01-25T12:55:25Z\", GoVersion:\"go1.13.6\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:21:21.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1706" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":280,"completed":214,"skipped":3384,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:21:21.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 14 01:21:21.648: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d69f8717-f05e-4ea8-b802-9fd665877bd6" in namespace "downward-api-7595" to be "success or failure"
Feb 14 01:21:21.673: INFO: Pod "downwardapi-volume-d69f8717-f05e-4ea8-b802-9fd665877bd6": Phase="Pending", Reason="", readiness=false. Elapsed: 25.675712ms
Feb 14 01:21:23.694: INFO: Pod "downwardapi-volume-d69f8717-f05e-4ea8-b802-9fd665877bd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04629756s
Feb 14 01:21:25.703: INFO: Pod "downwardapi-volume-d69f8717-f05e-4ea8-b802-9fd665877bd6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054823428s
Feb 14 01:21:27.712: INFO: Pod "downwardapi-volume-d69f8717-f05e-4ea8-b802-9fd665877bd6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064218735s
Feb 14 01:21:30.236: INFO: Pod "downwardapi-volume-d69f8717-f05e-4ea8-b802-9fd665877bd6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.588326366s
Feb 14 01:21:32.248: INFO: Pod "downwardapi-volume-d69f8717-f05e-4ea8-b802-9fd665877bd6": Phase="Pending", Reason="", readiness=false. Elapsed: 10.600484575s
Feb 14 01:21:34.255: INFO: Pod "downwardapi-volume-d69f8717-f05e-4ea8-b802-9fd665877bd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.607566342s
STEP: Saw pod success
Feb 14 01:21:34.255: INFO: Pod "downwardapi-volume-d69f8717-f05e-4ea8-b802-9fd665877bd6" satisfied condition "success or failure"
Feb 14 01:21:34.260: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-d69f8717-f05e-4ea8-b802-9fd665877bd6 container client-container: 
STEP: delete the pod
Feb 14 01:21:34.421: INFO: Waiting for pod downwardapi-volume-d69f8717-f05e-4ea8-b802-9fd665877bd6 to disappear
Feb 14 01:21:34.442: INFO: Pod downwardapi-volume-d69f8717-f05e-4ea8-b802-9fd665877bd6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:21:34.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7595" for this suite.

• [SLOW TEST:12.918 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":280,"completed":215,"skipped":3446,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:21:34.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: set up a multi version CRD
Feb 14 01:21:34.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:21:57.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2932" for this suite.

• [SLOW TEST:22.741 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":280,"completed":216,"skipped":3449,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:21:57.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Feb 14 01:21:57.328: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2898'
Feb 14 01:21:57.774: INFO: stderr: ""
Feb 14 01:21:57.775: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 01:21:57.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2898'
Feb 14 01:21:57.967: INFO: stderr: ""
Feb 14 01:21:57.967: INFO: stdout: "update-demo-nautilus-4sz28 update-demo-nautilus-9qb96 "
Feb 14 01:21:57.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4sz28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2898'
Feb 14 01:21:58.071: INFO: stderr: ""
Feb 14 01:21:58.071: INFO: stdout: ""
Feb 14 01:21:58.071: INFO: update-demo-nautilus-4sz28 is created but not running
Feb 14 01:22:03.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2898'
Feb 14 01:22:03.165: INFO: stderr: ""
Feb 14 01:22:03.165: INFO: stdout: "update-demo-nautilus-4sz28 update-demo-nautilus-9qb96 "
Feb 14 01:22:03.165: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4sz28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2898'
Feb 14 01:22:04.890: INFO: stderr: ""
Feb 14 01:22:04.890: INFO: stdout: ""
Feb 14 01:22:04.891: INFO: update-demo-nautilus-4sz28 is created but not running
Feb 14 01:22:09.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2898'
Feb 14 01:22:10.098: INFO: stderr: ""
Feb 14 01:22:10.098: INFO: stdout: "update-demo-nautilus-4sz28 update-demo-nautilus-9qb96 "
Feb 14 01:22:10.099: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4sz28 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2898'
Feb 14 01:22:10.220: INFO: stderr: ""
Feb 14 01:22:10.220: INFO: stdout: "true"
Feb 14 01:22:10.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4sz28 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2898'
Feb 14 01:22:10.336: INFO: stderr: ""
Feb 14 01:22:10.336: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 01:22:10.336: INFO: validating pod update-demo-nautilus-4sz28
Feb 14 01:22:10.343: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 01:22:10.344: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 14 01:22:10.344: INFO: update-demo-nautilus-4sz28 is verified up and running
Feb 14 01:22:10.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9qb96 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2898'
Feb 14 01:22:10.483: INFO: stderr: ""
Feb 14 01:22:10.483: INFO: stdout: "true"
Feb 14 01:22:10.484: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9qb96 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2898'
Feb 14 01:22:10.581: INFO: stderr: ""
Feb 14 01:22:10.581: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 01:22:10.581: INFO: validating pod update-demo-nautilus-9qb96
Feb 14 01:22:10.588: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 01:22:10.588: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 14 01:22:10.588: INFO: update-demo-nautilus-9qb96 is verified up and running
STEP: using delete to clean up resources
Feb 14 01:22:10.589: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2898'
Feb 14 01:22:10.693: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 01:22:10.693: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Feb 14 01:22:10.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2898'
Feb 14 01:22:10.815: INFO: stderr: "No resources found in kubectl-2898 namespace.\n"
Feb 14 01:22:10.815: INFO: stdout: ""
Feb 14 01:22:10.815: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2898 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 14 01:22:10.897: INFO: stderr: ""
Feb 14 01:22:10.897: INFO: stdout: "update-demo-nautilus-4sz28\nupdate-demo-nautilus-9qb96\n"
Feb 14 01:22:11.398: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2898'
Feb 14 01:22:11.604: INFO: stderr: "No resources found in kubectl-2898 namespace.\n"
Feb 14 01:22:11.604: INFO: stdout: ""
Feb 14 01:22:11.605: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2898 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 14 01:22:11.779: INFO: stderr: ""
Feb 14 01:22:11.779: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:22:11.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2898" for this suite.

• [SLOW TEST:14.596 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":280,"completed":217,"skipped":3461,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:22:11.799: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:22:12.740: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-81767384-3765-4db2-a345-f39d56227c29" in namespace "security-context-test-8259" to be "success or failure"
Feb 14 01:22:12.780: INFO: Pod "busybox-readonly-false-81767384-3765-4db2-a345-f39d56227c29": Phase="Pending", Reason="", readiness=false. Elapsed: 39.189811ms
Feb 14 01:22:15.385: INFO: Pod "busybox-readonly-false-81767384-3765-4db2-a345-f39d56227c29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.644882541s
Feb 14 01:22:17.393: INFO: Pod "busybox-readonly-false-81767384-3765-4db2-a345-f39d56227c29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.652416431s
Feb 14 01:22:19.401: INFO: Pod "busybox-readonly-false-81767384-3765-4db2-a345-f39d56227c29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.660387963s
Feb 14 01:22:21.412: INFO: Pod "busybox-readonly-false-81767384-3765-4db2-a345-f39d56227c29": Phase="Pending", Reason="", readiness=false. Elapsed: 8.671854817s
Feb 14 01:22:23.422: INFO: Pod "busybox-readonly-false-81767384-3765-4db2-a345-f39d56227c29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.681643595s
Feb 14 01:22:23.422: INFO: Pod "busybox-readonly-false-81767384-3765-4db2-a345-f39d56227c29" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:22:23.423: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-8259" for this suite.

• [SLOW TEST:11.638 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:166
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":280,"completed":218,"skipped":3462,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:22:23.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:22:23.605: INFO: Creating ReplicaSet my-hostname-basic-cd5cb2a6-d622-4714-8e98-2106938d183a
Feb 14 01:22:23.629: INFO: Pod name my-hostname-basic-cd5cb2a6-d622-4714-8e98-2106938d183a: Found 0 pods out of 1
Feb 14 01:22:28.646: INFO: Pod name my-hostname-basic-cd5cb2a6-d622-4714-8e98-2106938d183a: Found 1 pods out of 1
Feb 14 01:22:28.646: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-cd5cb2a6-d622-4714-8e98-2106938d183a" is running
Feb 14 01:22:32.659: INFO: Pod "my-hostname-basic-cd5cb2a6-d622-4714-8e98-2106938d183a-hg8ts" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 01:22:24 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 01:22:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-cd5cb2a6-d622-4714-8e98-2106938d183a]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 01:22:24 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-cd5cb2a6-d622-4714-8e98-2106938d183a]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-02-14 01:22:23 +0000 UTC Reason: Message:}])
Feb 14 01:22:32.659: INFO: Trying to dial the pod
Feb 14 01:22:37.683: INFO: Controller my-hostname-basic-cd5cb2a6-d622-4714-8e98-2106938d183a: Got expected result from replica 1 [my-hostname-basic-cd5cb2a6-d622-4714-8e98-2106938d183a-hg8ts]: "my-hostname-basic-cd5cb2a6-d622-4714-8e98-2106938d183a-hg8ts", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:22:37.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-9165" for this suite.

• [SLOW TEST:14.258 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":280,"completed":219,"skipped":3463,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:22:37.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-projected-jl96
STEP: Creating a pod to test atomic-volume-subpath
Feb 14 01:22:38.024: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-jl96" in namespace "subpath-7344" to be "success or failure"
Feb 14 01:22:38.032: INFO: Pod "pod-subpath-test-projected-jl96": Phase="Pending", Reason="", readiness=false. Elapsed: 8.160591ms
Feb 14 01:22:40.830: INFO: Pod "pod-subpath-test-projected-jl96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.805397144s
Feb 14 01:22:42.845: INFO: Pod "pod-subpath-test-projected-jl96": Phase="Pending", Reason="", readiness=false. Elapsed: 4.82122146s
Feb 14 01:22:44.856: INFO: Pod "pod-subpath-test-projected-jl96": Phase="Pending", Reason="", readiness=false. Elapsed: 6.832225328s
Feb 14 01:22:46.869: INFO: Pod "pod-subpath-test-projected-jl96": Phase="Pending", Reason="", readiness=false. Elapsed: 8.844411678s
Feb 14 01:22:48.884: INFO: Pod "pod-subpath-test-projected-jl96": Phase="Pending", Reason="", readiness=false. Elapsed: 10.85941737s
Feb 14 01:22:50.891: INFO: Pod "pod-subpath-test-projected-jl96": Phase="Running", Reason="", readiness=true. Elapsed: 12.866740934s
Feb 14 01:22:52.903: INFO: Pod "pod-subpath-test-projected-jl96": Phase="Running", Reason="", readiness=true. Elapsed: 14.878778836s
Feb 14 01:22:54.910: INFO: Pod "pod-subpath-test-projected-jl96": Phase="Running", Reason="", readiness=true. Elapsed: 16.885466983s
Feb 14 01:22:56.919: INFO: Pod "pod-subpath-test-projected-jl96": Phase="Running", Reason="", readiness=true. Elapsed: 18.894686274s
Feb 14 01:22:58.928: INFO: Pod "pod-subpath-test-projected-jl96": Phase="Running", Reason="", readiness=true. Elapsed: 20.903823026s
Feb 14 01:23:00.942: INFO: Pod "pod-subpath-test-projected-jl96": Phase="Running", Reason="", readiness=true. Elapsed: 22.917780897s
Feb 14 01:23:02.954: INFO: Pod "pod-subpath-test-projected-jl96": Phase="Running", Reason="", readiness=true. Elapsed: 24.929912337s
Feb 14 01:23:04.964: INFO: Pod "pod-subpath-test-projected-jl96": Phase="Running", Reason="", readiness=true. Elapsed: 26.939583656s
Feb 14 01:23:06.973: INFO: Pod "pod-subpath-test-projected-jl96": Phase="Running", Reason="", readiness=true. Elapsed: 28.948709858s
Feb 14 01:23:08.986: INFO: Pod "pod-subpath-test-projected-jl96": Phase="Running", Reason="", readiness=true. Elapsed: 30.961792009s
Feb 14 01:23:10.994: INFO: Pod "pod-subpath-test-projected-jl96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 32.970101492s
STEP: Saw pod success
Feb 14 01:23:10.994: INFO: Pod "pod-subpath-test-projected-jl96" satisfied condition "success or failure"
Feb 14 01:23:10.998: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-jl96 container test-container-subpath-projected-jl96: 
STEP: delete the pod
Feb 14 01:23:11.126: INFO: Waiting for pod pod-subpath-test-projected-jl96 to disappear
Feb 14 01:23:11.134: INFO: Pod pod-subpath-test-projected-jl96 no longer exists
STEP: Deleting pod pod-subpath-test-projected-jl96
Feb 14 01:23:11.134: INFO: Deleting pod "pod-subpath-test-projected-jl96" in namespace "subpath-7344"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:23:11.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7344" for this suite.

• [SLOW TEST:33.455 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":280,"completed":220,"skipped":3467,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:23:11.153: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1863
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 14 01:23:11.332: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-3658'
Feb 14 01:23:11.531: INFO: stderr: ""
Feb 14 01:23:11.531: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1868
Feb 14 01:23:11.653: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3658'
Feb 14 01:23:15.486: INFO: stderr: ""
Feb 14 01:23:15.486: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:23:15.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3658" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":280,"completed":221,"skipped":3470,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:23:15.525: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on tmpfs
Feb 14 01:23:15.601: INFO: Waiting up to 5m0s for pod "pod-9920bb95-0a13-4102-9a1e-0d767a7bc506" in namespace "emptydir-7624" to be "success or failure"
Feb 14 01:23:15.607: INFO: Pod "pod-9920bb95-0a13-4102-9a1e-0d767a7bc506": Phase="Pending", Reason="", readiness=false. Elapsed: 6.363679ms
Feb 14 01:23:17.616: INFO: Pod "pod-9920bb95-0a13-4102-9a1e-0d767a7bc506": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014873662s
Feb 14 01:23:19.627: INFO: Pod "pod-9920bb95-0a13-4102-9a1e-0d767a7bc506": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025723243s
Feb 14 01:23:21.635: INFO: Pod "pod-9920bb95-0a13-4102-9a1e-0d767a7bc506": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034283002s
Feb 14 01:23:23.645: INFO: Pod "pod-9920bb95-0a13-4102-9a1e-0d767a7bc506": Phase="Running", Reason="", readiness=true. Elapsed: 8.044082454s
Feb 14 01:23:25.654: INFO: Pod "pod-9920bb95-0a13-4102-9a1e-0d767a7bc506": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.053039192s
STEP: Saw pod success
Feb 14 01:23:25.654: INFO: Pod "pod-9920bb95-0a13-4102-9a1e-0d767a7bc506" satisfied condition "success or failure"
Feb 14 01:23:25.659: INFO: Trying to get logs from node jerma-node pod pod-9920bb95-0a13-4102-9a1e-0d767a7bc506 container test-container: 
STEP: delete the pod
Feb 14 01:23:25.767: INFO: Waiting for pod pod-9920bb95-0a13-4102-9a1e-0d767a7bc506 to disappear
Feb 14 01:23:25.781: INFO: Pod pod-9920bb95-0a13-4102-9a1e-0d767a7bc506 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:23:25.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7624" for this suite.

• [SLOW TEST:10.271 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":222,"skipped":3476,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:23:25.801: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:23:25.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-880" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":280,"completed":223,"skipped":3510,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:23:25.950: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 14 01:23:26.991: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 14 01:23:29.018: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240206, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240206, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240207, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240206, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:23:31.068: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240206, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240206, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240207, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240206, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:23:33.026: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240206, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240206, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240207, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240206, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 14 01:23:36.073: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:23:36.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3310" for this suite.
STEP: Destroying namespace "webhook-3310-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.380 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":280,"completed":224,"skipped":3516,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:23:36.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-5e0c4a35-ece7-42eb-a5d3-cd764bb75d99
STEP: Creating a pod to test consume configMaps
Feb 14 01:23:36.497: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e5ff5f86-f562-4d4a-a5cb-20e36333b991" in namespace "projected-9369" to be "success or failure"
Feb 14 01:23:36.519: INFO: Pod "pod-projected-configmaps-e5ff5f86-f562-4d4a-a5cb-20e36333b991": Phase="Pending", Reason="", readiness=false. Elapsed: 21.963082ms
Feb 14 01:23:38.533: INFO: Pod "pod-projected-configmaps-e5ff5f86-f562-4d4a-a5cb-20e36333b991": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035848005s
Feb 14 01:23:40.546: INFO: Pod "pod-projected-configmaps-e5ff5f86-f562-4d4a-a5cb-20e36333b991": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048596211s
Feb 14 01:23:42.558: INFO: Pod "pod-projected-configmaps-e5ff5f86-f562-4d4a-a5cb-20e36333b991": Phase="Pending", Reason="", readiness=false. Elapsed: 6.060955984s
Feb 14 01:23:44.572: INFO: Pod "pod-projected-configmaps-e5ff5f86-f562-4d4a-a5cb-20e36333b991": Phase="Pending", Reason="", readiness=false. Elapsed: 8.074230012s
Feb 14 01:23:46.584: INFO: Pod "pod-projected-configmaps-e5ff5f86-f562-4d4a-a5cb-20e36333b991": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08622543s
STEP: Saw pod success
Feb 14 01:23:46.584: INFO: Pod "pod-projected-configmaps-e5ff5f86-f562-4d4a-a5cb-20e36333b991" satisfied condition "success or failure"
Feb 14 01:23:46.589: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-e5ff5f86-f562-4d4a-a5cb-20e36333b991 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 01:23:46.662: INFO: Waiting for pod pod-projected-configmaps-e5ff5f86-f562-4d4a-a5cb-20e36333b991 to disappear
Feb 14 01:23:46.708: INFO: Pod pod-projected-configmaps-e5ff5f86-f562-4d4a-a5cb-20e36333b991 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:23:46.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9369" for this suite.

• [SLOW TEST:10.391 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":280,"completed":225,"skipped":3522,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:23:46.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 14 01:23:47.545: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 14 01:23:49.564: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240227, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240227, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240227, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240227, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:23:51.654: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240227, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240227, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240227, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240227, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:23:53.579: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240227, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240227, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240227, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240227, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:23:55.571: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240227, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240227, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240227, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240227, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 14 01:23:58.722: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:23:58.794: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:24:02.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-254" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:15.850 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":280,"completed":226,"skipped":3554,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:24:02.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Feb 14 01:24:02.757: INFO: Waiting up to 5m0s for pod "pod-edd57073-b61d-40e1-8e76-35af3e49699f" in namespace "emptydir-9210" to be "success or failure"
Feb 14 01:24:02.886: INFO: Pod "pod-edd57073-b61d-40e1-8e76-35af3e49699f": Phase="Pending", Reason="", readiness=false. Elapsed: 129.046193ms
Feb 14 01:24:04.894: INFO: Pod "pod-edd57073-b61d-40e1-8e76-35af3e49699f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.137297349s
Feb 14 01:24:06.902: INFO: Pod "pod-edd57073-b61d-40e1-8e76-35af3e49699f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14534481s
Feb 14 01:24:08.922: INFO: Pod "pod-edd57073-b61d-40e1-8e76-35af3e49699f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.16449736s
Feb 14 01:24:10.930: INFO: Pod "pod-edd57073-b61d-40e1-8e76-35af3e49699f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.17253375s
Feb 14 01:24:12.938: INFO: Pod "pod-edd57073-b61d-40e1-8e76-35af3e49699f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.181371448s
STEP: Saw pod success
Feb 14 01:24:12.939: INFO: Pod "pod-edd57073-b61d-40e1-8e76-35af3e49699f" satisfied condition "success or failure"
Feb 14 01:24:12.948: INFO: Trying to get logs from node jerma-node pod pod-edd57073-b61d-40e1-8e76-35af3e49699f container test-container: 
STEP: delete the pod
Feb 14 01:24:13.004: INFO: Waiting for pod pod-edd57073-b61d-40e1-8e76-35af3e49699f to disappear
Feb 14 01:24:13.008: INFO: Pod pod-edd57073-b61d-40e1-8e76-35af3e49699f no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:24:13.009: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9210" for this suite.

• [SLOW TEST:10.448 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":227,"skipped":3558,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:24:13.027: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-f0ab7252-0dac-4f9b-9283-7b6e8566375d
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:24:23.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2956" for this suite.

• [SLOW TEST:10.219 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":228,"skipped":3581,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:24:23.248: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 14 01:24:23.342: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a2704af5-fbbe-49b9-88f5-f56315aaa5f4" in namespace "projected-9499" to be "success or failure"
Feb 14 01:24:23.348: INFO: Pod "downwardapi-volume-a2704af5-fbbe-49b9-88f5-f56315aaa5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 5.8727ms
Feb 14 01:24:25.356: INFO: Pod "downwardapi-volume-a2704af5-fbbe-49b9-88f5-f56315aaa5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01340233s
Feb 14 01:24:27.364: INFO: Pod "downwardapi-volume-a2704af5-fbbe-49b9-88f5-f56315aaa5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.021522447s
Feb 14 01:24:29.372: INFO: Pod "downwardapi-volume-a2704af5-fbbe-49b9-88f5-f56315aaa5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029741166s
Feb 14 01:24:31.381: INFO: Pod "downwardapi-volume-a2704af5-fbbe-49b9-88f5-f56315aaa5f4": Phase="Pending", Reason="", readiness=false. Elapsed: 8.03848668s
Feb 14 01:24:33.388: INFO: Pod "downwardapi-volume-a2704af5-fbbe-49b9-88f5-f56315aaa5f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.045856455s
STEP: Saw pod success
Feb 14 01:24:33.388: INFO: Pod "downwardapi-volume-a2704af5-fbbe-49b9-88f5-f56315aaa5f4" satisfied condition "success or failure"
Feb 14 01:24:33.394: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a2704af5-fbbe-49b9-88f5-f56315aaa5f4 container client-container: 
STEP: delete the pod
Feb 14 01:24:33.432: INFO: Waiting for pod downwardapi-volume-a2704af5-fbbe-49b9-88f5-f56315aaa5f4 to disappear
Feb 14 01:24:33.456: INFO: Pod downwardapi-volume-a2704af5-fbbe-49b9-88f5-f56315aaa5f4 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:24:33.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9499" for this suite.

• [SLOW TEST:10.223 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":229,"skipped":3594,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:24:33.473: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 14 01:24:34.300: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 14 01:24:36.320: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240274, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240274, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240274, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240274, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:24:38.327: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240274, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240274, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240274, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240274, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:24:40.329: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240274, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240274, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240274, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240274, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:24:42.328: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240274, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240274, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240274, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240274, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 14 01:24:45.363: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:24:55.690: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8095" for this suite.
STEP: Destroying namespace "webhook-8095-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:22.364 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":280,"completed":230,"skipped":3609,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:24:55.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 14 01:25:07.419: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:25:07.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5231" for this suite.

• [SLOW TEST:11.697 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":231,"skipped":3613,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:25:07.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting the proxy server
Feb 14 01:25:07.602: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:25:07.683: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3289" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":280,"completed":232,"skipped":3638,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:25:07.700: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:25:07.869: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:25:16.923: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3347" for this suite.

• [SLOW TEST:9.251 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":280,"completed":233,"skipped":3647,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:25:16.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 14 01:25:17.074: INFO: Waiting up to 5m0s for pod "downwardapi-volume-16a0b45d-7a97-42ad-9d53-0f0afec3b95b" in namespace "downward-api-4509" to be "success or failure"
Feb 14 01:25:17.079: INFO: Pod "downwardapi-volume-16a0b45d-7a97-42ad-9d53-0f0afec3b95b": Phase="Pending", Reason="", readiness=false. Elapsed: 5.319971ms
Feb 14 01:25:19.088: INFO: Pod "downwardapi-volume-16a0b45d-7a97-42ad-9d53-0f0afec3b95b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014329015s
Feb 14 01:25:21.096: INFO: Pod "downwardapi-volume-16a0b45d-7a97-42ad-9d53-0f0afec3b95b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022097377s
Feb 14 01:25:23.141: INFO: Pod "downwardapi-volume-16a0b45d-7a97-42ad-9d53-0f0afec3b95b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.066634506s
Feb 14 01:25:25.150: INFO: Pod "downwardapi-volume-16a0b45d-7a97-42ad-9d53-0f0afec3b95b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075883104s
STEP: Saw pod success
Feb 14 01:25:25.150: INFO: Pod "downwardapi-volume-16a0b45d-7a97-42ad-9d53-0f0afec3b95b" satisfied condition "success or failure"
Feb 14 01:25:25.155: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-16a0b45d-7a97-42ad-9d53-0f0afec3b95b container client-container: 
STEP: delete the pod
Feb 14 01:25:25.259: INFO: Waiting for pod downwardapi-volume-16a0b45d-7a97-42ad-9d53-0f0afec3b95b to disappear
Feb 14 01:25:25.271: INFO: Pod downwardapi-volume-16a0b45d-7a97-42ad-9d53-0f0afec3b95b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:25:25.271: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4509" for this suite.

• [SLOW TEST:8.341 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":234,"skipped":3663,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:25:25.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Feb 14 01:25:25.482: INFO: Waiting up to 5m0s for pod "downward-api-8b871fbe-3622-4563-a839-ef924ac07c51" in namespace "downward-api-4211" to be "success or failure"
Feb 14 01:25:25.491: INFO: Pod "downward-api-8b871fbe-3622-4563-a839-ef924ac07c51": Phase="Pending", Reason="", readiness=false. Elapsed: 8.511967ms
Feb 14 01:25:27.499: INFO: Pod "downward-api-8b871fbe-3622-4563-a839-ef924ac07c51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016617964s
Feb 14 01:25:29.551: INFO: Pod "downward-api-8b871fbe-3622-4563-a839-ef924ac07c51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069272727s
Feb 14 01:25:31.561: INFO: Pod "downward-api-8b871fbe-3622-4563-a839-ef924ac07c51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.078561185s
Feb 14 01:25:33.686: INFO: Pod "downward-api-8b871fbe-3622-4563-a839-ef924ac07c51": Phase="Pending", Reason="", readiness=false. Elapsed: 8.204251281s
Feb 14 01:25:35.694: INFO: Pod "downward-api-8b871fbe-3622-4563-a839-ef924ac07c51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.212295386s
STEP: Saw pod success
Feb 14 01:25:35.695: INFO: Pod "downward-api-8b871fbe-3622-4563-a839-ef924ac07c51" satisfied condition "success or failure"
Feb 14 01:25:35.699: INFO: Trying to get logs from node jerma-node pod downward-api-8b871fbe-3622-4563-a839-ef924ac07c51 container dapi-container: 
STEP: delete the pod
Feb 14 01:25:35.750: INFO: Waiting for pod downward-api-8b871fbe-3622-4563-a839-ef924ac07c51 to disappear
Feb 14 01:25:35.762: INFO: Pod downward-api-8b871fbe-3622-4563-a839-ef924ac07c51 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:25:35.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4211" for this suite.

• [SLOW TEST:10.498 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:34
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":280,"completed":235,"skipped":3679,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:25:35.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Feb 14 01:25:43.138: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:25:43.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9412" for this suite.

• [SLOW TEST:7.383 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":280,"completed":236,"skipped":3694,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:25:43.187: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:25:43.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4491" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":280,"completed":237,"skipped":3707,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}

------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:25:43.347: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:25:43.523: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:25:44.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4585" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":280,"completed":238,"skipped":3707,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:25:44.609: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0214 01:26:26.900278       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 01:26:26.900: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:26:26.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3925" for this suite.

• [SLOW TEST:42.311 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":280,"completed":239,"skipped":3716,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:26:26.921: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating pod
Feb 14 01:26:51.108: INFO: Pod pod-hostip-5891f7c4-d59c-421f-9691-b3f8b9ab4d26 has hostIP: 10.96.2.250
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:26:51.108: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7069" for this suite.

• [SLOW TEST:24.197 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":280,"completed":240,"skipped":3718,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:26:51.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-secret-n7q9
STEP: Creating a pod to test atomic-volume-subpath
Feb 14 01:26:51.356: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-n7q9" in namespace "subpath-1228" to be "success or failure"
Feb 14 01:26:51.363: INFO: Pod "pod-subpath-test-secret-n7q9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.957711ms
Feb 14 01:26:53.374: INFO: Pod "pod-subpath-test-secret-n7q9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017271575s
Feb 14 01:26:56.504: INFO: Pod "pod-subpath-test-secret-n7q9": Phase="Pending", Reason="", readiness=false. Elapsed: 5.147526192s
Feb 14 01:26:58.514: INFO: Pod "pod-subpath-test-secret-n7q9": Phase="Pending", Reason="", readiness=false. Elapsed: 7.157418502s
Feb 14 01:27:00.525: INFO: Pod "pod-subpath-test-secret-n7q9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.16854416s
Feb 14 01:27:02.540: INFO: Pod "pod-subpath-test-secret-n7q9": Phase="Pending", Reason="", readiness=false. Elapsed: 11.184035391s
Feb 14 01:27:04.555: INFO: Pod "pod-subpath-test-secret-n7q9": Phase="Running", Reason="", readiness=true. Elapsed: 13.198157823s
Feb 14 01:27:06.570: INFO: Pod "pod-subpath-test-secret-n7q9": Phase="Running", Reason="", readiness=true. Elapsed: 15.213279494s
Feb 14 01:27:08.584: INFO: Pod "pod-subpath-test-secret-n7q9": Phase="Running", Reason="", readiness=true. Elapsed: 17.227214232s
Feb 14 01:27:10.605: INFO: Pod "pod-subpath-test-secret-n7q9": Phase="Running", Reason="", readiness=true. Elapsed: 19.248141333s
Feb 14 01:27:12.627: INFO: Pod "pod-subpath-test-secret-n7q9": Phase="Running", Reason="", readiness=true. Elapsed: 21.270098102s
Feb 14 01:27:14.635: INFO: Pod "pod-subpath-test-secret-n7q9": Phase="Running", Reason="", readiness=true. Elapsed: 23.279013948s
Feb 14 01:27:16.647: INFO: Pod "pod-subpath-test-secret-n7q9": Phase="Running", Reason="", readiness=true. Elapsed: 25.290731263s
Feb 14 01:27:18.655: INFO: Pod "pod-subpath-test-secret-n7q9": Phase="Running", Reason="", readiness=true. Elapsed: 27.298576514s
Feb 14 01:27:20.665: INFO: Pod "pod-subpath-test-secret-n7q9": Phase="Running", Reason="", readiness=true. Elapsed: 29.309052595s
Feb 14 01:27:22.674: INFO: Pod "pod-subpath-test-secret-n7q9": Phase="Running", Reason="", readiness=true. Elapsed: 31.31773597s
Feb 14 01:27:24.685: INFO: Pod "pod-subpath-test-secret-n7q9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 33.328215394s
STEP: Saw pod success
Feb 14 01:27:24.685: INFO: Pod "pod-subpath-test-secret-n7q9" satisfied condition "success or failure"
Feb 14 01:27:24.691: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-n7q9 container test-container-subpath-secret-n7q9: 
STEP: delete the pod
Feb 14 01:27:24.800: INFO: Waiting for pod pod-subpath-test-secret-n7q9 to disappear
Feb 14 01:27:24.813: INFO: Pod pod-subpath-test-secret-n7q9 no longer exists
STEP: Deleting pod pod-subpath-test-secret-n7q9
Feb 14 01:27:24.814: INFO: Deleting pod "pod-subpath-test-secret-n7q9" in namespace "subpath-1228"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:27:24.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1228" for this suite.

• [SLOW TEST:33.711 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":280,"completed":241,"skipped":3744,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:27:24.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-6512a4d9-f19d-471a-879b-924ff7534de2
STEP: Creating a pod to test consume configMaps
Feb 14 01:27:24.947: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-077eb12e-acc7-436b-9595-62640fd0488e" in namespace "projected-3439" to be "success or failure"
Feb 14 01:27:24.969: INFO: Pod "pod-projected-configmaps-077eb12e-acc7-436b-9595-62640fd0488e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.884425ms
Feb 14 01:27:26.979: INFO: Pod "pod-projected-configmaps-077eb12e-acc7-436b-9595-62640fd0488e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031929928s
Feb 14 01:27:28.989: INFO: Pod "pod-projected-configmaps-077eb12e-acc7-436b-9595-62640fd0488e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.04188264s
Feb 14 01:27:30.996: INFO: Pod "pod-projected-configmaps-077eb12e-acc7-436b-9595-62640fd0488e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048944472s
Feb 14 01:27:33.003: INFO: Pod "pod-projected-configmaps-077eb12e-acc7-436b-9595-62640fd0488e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.056085311s
STEP: Saw pod success
Feb 14 01:27:33.004: INFO: Pod "pod-projected-configmaps-077eb12e-acc7-436b-9595-62640fd0488e" satisfied condition "success or failure"
Feb 14 01:27:33.008: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-077eb12e-acc7-436b-9595-62640fd0488e container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 01:27:33.081: INFO: Waiting for pod pod-projected-configmaps-077eb12e-acc7-436b-9595-62640fd0488e to disappear
Feb 14 01:27:33.088: INFO: Pod pod-projected-configmaps-077eb12e-acc7-436b-9595-62640fd0488e no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:27:33.088: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3439" for this suite.

• [SLOW TEST:8.270 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":242,"skipped":3749,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:27:33.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-5074
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-5074
STEP: creating replication controller externalsvc in namespace services-5074
I0214 01:27:36.822149       9 runners.go:189] Created replication controller with name: externalsvc, namespace: services-5074, replica count: 2
I0214 01:27:39.875755       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 01:27:42.876825       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 01:27:45.878111       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0214 01:27:48.878899       9 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Feb 14 01:27:48.924: INFO: Creating new exec pod
Feb 14 01:27:56.970: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5074 execpodrhzkj -- /bin/sh -x -c nslookup clusterip-service'
Feb 14 01:27:57.393: INFO: stderr: "I0214 01:27:57.171311    3988 log.go:172] (0xc000a3a0b0) (0xc0006f0140) Create stream\nI0214 01:27:57.171599    3988 log.go:172] (0xc000a3a0b0) (0xc0006f0140) Stream added, broadcasting: 1\nI0214 01:27:57.179718    3988 log.go:172] (0xc000a3a0b0) Reply frame received for 1\nI0214 01:27:57.179951    3988 log.go:172] (0xc000a3a0b0) (0xc0006f01e0) Create stream\nI0214 01:27:57.179977    3988 log.go:172] (0xc000a3a0b0) (0xc0006f01e0) Stream added, broadcasting: 3\nI0214 01:27:57.181587    3988 log.go:172] (0xc000a3a0b0) Reply frame received for 3\nI0214 01:27:57.181611    3988 log.go:172] (0xc000a3a0b0) (0xc0006f0280) Create stream\nI0214 01:27:57.181622    3988 log.go:172] (0xc000a3a0b0) (0xc0006f0280) Stream added, broadcasting: 5\nI0214 01:27:57.182758    3988 log.go:172] (0xc000a3a0b0) Reply frame received for 5\nI0214 01:27:57.290366    3988 log.go:172] (0xc000a3a0b0) Data frame received for 5\nI0214 01:27:57.290442    3988 log.go:172] (0xc0006f0280) (5) Data frame handling\nI0214 01:27:57.290467    3988 log.go:172] (0xc0006f0280) (5) Data frame sent\n+ nslookup clusterip-service\nI0214 01:27:57.313628    3988 log.go:172] (0xc000a3a0b0) Data frame received for 3\nI0214 01:27:57.313727    3988 log.go:172] (0xc0006f01e0) (3) Data frame handling\nI0214 01:27:57.313756    3988 log.go:172] (0xc0006f01e0) (3) Data frame sent\nI0214 01:27:57.316994    3988 log.go:172] (0xc000a3a0b0) Data frame received for 3\nI0214 01:27:57.317005    3988 log.go:172] (0xc0006f01e0) (3) Data frame handling\nI0214 01:27:57.317012    3988 log.go:172] (0xc0006f01e0) (3) Data frame sent\nI0214 01:27:57.386049    3988 log.go:172] (0xc000a3a0b0) Data frame received for 1\nI0214 01:27:57.386339    3988 log.go:172] (0xc000a3a0b0) (0xc0006f01e0) Stream removed, broadcasting: 3\nI0214 01:27:57.386390    3988 log.go:172] (0xc0006f0140) (1) Data frame handling\nI0214 01:27:57.386409    3988 log.go:172] (0xc0006f0140) (1) Data frame sent\nI0214 01:27:57.386463    3988 log.go:172] (0xc000a3a0b0) (0xc0006f0280) Stream removed, broadcasting: 5\nI0214 01:27:57.386490    3988 log.go:172] (0xc000a3a0b0) (0xc0006f0140) Stream removed, broadcasting: 1\nI0214 01:27:57.386501    3988 log.go:172] (0xc000a3a0b0) Go away received\nI0214 01:27:57.387707    3988 log.go:172] (0xc000a3a0b0) (0xc0006f0140) Stream removed, broadcasting: 1\nI0214 01:27:57.387719    3988 log.go:172] (0xc000a3a0b0) (0xc0006f01e0) Stream removed, broadcasting: 3\nI0214 01:27:57.387724    3988 log.go:172] (0xc000a3a0b0) (0xc0006f0280) Stream removed, broadcasting: 5\n"
Feb 14 01:27:57.393: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-5074.svc.cluster.local\tcanonical name = externalsvc.services-5074.svc.cluster.local.\nName:\texternalsvc.services-5074.svc.cluster.local\nAddress: 10.96.104.47\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-5074, will wait for the garbage collector to delete the pods
Feb 14 01:27:57.453: INFO: Deleting ReplicationController externalsvc took: 5.424392ms
Feb 14 01:27:57.854: INFO: Terminating ReplicationController externalsvc pods took: 400.548093ms
Feb 14 01:28:13.240: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:28:13.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5074" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:695

• [SLOW TEST:40.245 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":280,"completed":243,"skipped":3758,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:28:13.349: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating secret secrets-6373/secret-test-49fba48b-86ff-4a92-b150-8061691255dc
STEP: Creating a pod to test consume secrets
Feb 14 01:28:13.473: INFO: Waiting up to 5m0s for pod "pod-configmaps-5a7210e2-d88a-4efe-9e1d-55a1e5a5335c" in namespace "secrets-6373" to be "success or failure"
Feb 14 01:28:13.478: INFO: Pod "pod-configmaps-5a7210e2-d88a-4efe-9e1d-55a1e5a5335c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.454734ms
Feb 14 01:28:15.487: INFO: Pod "pod-configmaps-5a7210e2-d88a-4efe-9e1d-55a1e5a5335c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014222369s
Feb 14 01:28:17.495: INFO: Pod "pod-configmaps-5a7210e2-d88a-4efe-9e1d-55a1e5a5335c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02217761s
Feb 14 01:28:19.506: INFO: Pod "pod-configmaps-5a7210e2-d88a-4efe-9e1d-55a1e5a5335c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033126266s
Feb 14 01:28:21.517: INFO: Pod "pod-configmaps-5a7210e2-d88a-4efe-9e1d-55a1e5a5335c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.043867507s
Feb 14 01:28:23.525: INFO: Pod "pod-configmaps-5a7210e2-d88a-4efe-9e1d-55a1e5a5335c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.052288432s
STEP: Saw pod success
Feb 14 01:28:23.526: INFO: Pod "pod-configmaps-5a7210e2-d88a-4efe-9e1d-55a1e5a5335c" satisfied condition "success or failure"
Feb 14 01:28:23.533: INFO: Trying to get logs from node jerma-node pod pod-configmaps-5a7210e2-d88a-4efe-9e1d-55a1e5a5335c container env-test: 
STEP: delete the pod
Feb 14 01:28:23.654: INFO: Waiting for pod pod-configmaps-5a7210e2-d88a-4efe-9e1d-55a1e5a5335c to disappear
Feb 14 01:28:23.753: INFO: Pod pod-configmaps-5a7210e2-d88a-4efe-9e1d-55a1e5a5335c no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:28:23.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6373" for this suite.

• [SLOW TEST:10.456 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:34
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":280,"completed":244,"skipped":3778,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:28:23.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:28:41.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6300" for this suite.

• [SLOW TEST:18.052 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":280,"completed":245,"skipped":3804,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:28:41.869: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name projected-secret-test-b745f110-7bc1-4502-946e-0c79c6aeb694
STEP: Creating a pod to test consume secrets
Feb 14 01:28:42.058: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9d5e6812-09fc-49df-a826-30b6b50a5a4c" in namespace "projected-5663" to be "success or failure"
Feb 14 01:28:42.080: INFO: Pod "pod-projected-secrets-9d5e6812-09fc-49df-a826-30b6b50a5a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 22.256169ms
Feb 14 01:28:44.085: INFO: Pod "pod-projected-secrets-9d5e6812-09fc-49df-a826-30b6b50a5a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027169094s
Feb 14 01:28:46.091: INFO: Pod "pod-projected-secrets-9d5e6812-09fc-49df-a826-30b6b50a5a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033018062s
Feb 14 01:28:50.439: INFO: Pod "pod-projected-secrets-9d5e6812-09fc-49df-a826-30b6b50a5a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.381434515s
Feb 14 01:28:52.448: INFO: Pod "pod-projected-secrets-9d5e6812-09fc-49df-a826-30b6b50a5a4c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.390584115s
Feb 14 01:28:54.461: INFO: Pod "pod-projected-secrets-9d5e6812-09fc-49df-a826-30b6b50a5a4c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.403337156s
STEP: Saw pod success
Feb 14 01:28:54.462: INFO: Pod "pod-projected-secrets-9d5e6812-09fc-49df-a826-30b6b50a5a4c" satisfied condition "success or failure"
Feb 14 01:28:54.470: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-9d5e6812-09fc-49df-a826-30b6b50a5a4c container secret-volume-test: 
STEP: delete the pod
Feb 14 01:28:54.591: INFO: Waiting for pod pod-projected-secrets-9d5e6812-09fc-49df-a826-30b6b50a5a4c to disappear
Feb 14 01:28:54.605: INFO: Pod pod-projected-secrets-9d5e6812-09fc-49df-a826-30b6b50a5a4c no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:28:54.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5663" for this suite.

• [SLOW TEST:12.766 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":280,"completed":246,"skipped":3858,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl label 
  should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:28:54.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1384
STEP: creating the pod
Feb 14 01:28:54.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5564'
Feb 14 01:28:58.336: INFO: stderr: ""
Feb 14 01:28:58.336: INFO: stdout: "pod/pause created\n"
Feb 14 01:28:58.336: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Feb 14 01:28:58.337: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5564" to be "running and ready"
Feb 14 01:28:58.382: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 44.713516ms
Feb 14 01:29:00.392: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055584613s
Feb 14 01:29:02.405: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067899484s
Feb 14 01:29:04.414: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077020591s
Feb 14 01:29:06.424: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.086791156s
Feb 14 01:29:06.424: INFO: Pod "pause" satisfied condition "running and ready"
Feb 14 01:29:06.424: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: adding the label testing-label with value testing-label-value to a pod
Feb 14 01:29:06.424: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5564'
Feb 14 01:29:06.600: INFO: stderr: ""
Feb 14 01:29:06.600: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Feb 14 01:29:06.600: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5564'
Feb 14 01:29:06.719: INFO: stderr: ""
Feb 14 01:29:06.719: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    testing-label-value\n"
STEP: removing the label testing-label of a pod
Feb 14 01:29:06.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5564'
Feb 14 01:29:06.833: INFO: stderr: ""
Feb 14 01:29:06.833: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Feb 14 01:29:06.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5564'
Feb 14 01:29:06.946: INFO: stderr: ""
Feb 14 01:29:06.946: INFO: stdout: "NAME    READY   STATUS    RESTARTS   AGE   TESTING-LABEL\npause   1/1     Running   0          8s    \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1391
STEP: using delete to clean up resources
Feb 14 01:29:06.947: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5564'
Feb 14 01:29:07.069: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Feb 14 01:29:07.069: INFO: stdout: "pod \"pause\" force deleted\n"
Feb 14 01:29:07.069: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5564'
Feb 14 01:29:07.257: INFO: stderr: "No resources found in kubectl-5564 namespace.\n"
Feb 14 01:29:07.257: INFO: stdout: ""
Feb 14 01:29:07.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5564 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Feb 14 01:29:07.383: INFO: stderr: ""
Feb 14 01:29:07.383: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:29:07.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5564" for this suite.

• [SLOW TEST:12.759 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1381
    should update the label on a resource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource  [Conformance]","total":280,"completed":247,"skipped":3862,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:29:07.396: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:29:18.153: INFO: Waiting up to 5m0s for pod "client-envvars-f92e116b-c00f-4cd2-aa3e-4f6db9e753a4" in namespace "pods-585" to be "success or failure"
Feb 14 01:29:18.191: INFO: Pod "client-envvars-f92e116b-c00f-4cd2-aa3e-4f6db9e753a4": Phase="Pending", Reason="", readiness=false. Elapsed: 37.623468ms
Feb 14 01:29:20.198: INFO: Pod "client-envvars-f92e116b-c00f-4cd2-aa3e-4f6db9e753a4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044832841s
Feb 14 01:29:22.244: INFO: Pod "client-envvars-f92e116b-c00f-4cd2-aa3e-4f6db9e753a4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.090179475s
Feb 14 01:29:24.294: INFO: Pod "client-envvars-f92e116b-c00f-4cd2-aa3e-4f6db9e753a4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.140394074s
Feb 14 01:29:26.304: INFO: Pod "client-envvars-f92e116b-c00f-4cd2-aa3e-4f6db9e753a4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.150880721s
STEP: Saw pod success
Feb 14 01:29:26.305: INFO: Pod "client-envvars-f92e116b-c00f-4cd2-aa3e-4f6db9e753a4" satisfied condition "success or failure"
Feb 14 01:29:26.311: INFO: Trying to get logs from node jerma-node pod client-envvars-f92e116b-c00f-4cd2-aa3e-4f6db9e753a4 container env3cont: 
STEP: delete the pod
Feb 14 01:29:26.368: INFO: Waiting for pod client-envvars-f92e116b-c00f-4cd2-aa3e-4f6db9e753a4 to disappear
Feb 14 01:29:26.385: INFO: Pod client-envvars-f92e116b-c00f-4cd2-aa3e-4f6db9e753a4 no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:29:26.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-585" for this suite.

• [SLOW TEST:19.001 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":280,"completed":248,"skipped":3869,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:29:26.398: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Feb 14 01:29:26.746: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8324 /api/v1/namespaces/watch-8324/configmaps/e2e-watch-test-watch-closed 742aa1b2-3a82-4b66-8712-9e378bad450a 8287885 0 2020-02-14 01:29:26 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 01:29:26.746: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8324 /api/v1/namespaces/watch-8324/configmaps/e2e-watch-test-watch-closed 742aa1b2-3a82-4b66-8712-9e378bad450a 8287886 0 2020-02-14 01:29:26 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},Immutable:nil,}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Feb 14 01:29:26.771: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8324 /api/v1/namespaces/watch-8324/configmaps/e2e-watch-test-watch-closed 742aa1b2-3a82-4b66-8712-9e378bad450a 8287887 0 2020-02-14 01:29:26 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
Feb 14 01:29:26.771: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-8324 /api/v1/namespaces/watch-8324/configmaps/e2e-watch-test-watch-closed 742aa1b2-3a82-4b66-8712-9e378bad450a 8287888 0 2020-02-14 01:29:26 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},Immutable:nil,}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:29:26.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-8324" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":280,"completed":249,"skipped":3875,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:29:26.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-zt9t
STEP: Creating a pod to test atomic-volume-subpath
Feb 14 01:29:27.047: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-zt9t" in namespace "subpath-781" to be "success or failure"
Feb 14 01:29:27.064: INFO: Pod "pod-subpath-test-configmap-zt9t": Phase="Pending", Reason="", readiness=false. Elapsed: 17.026747ms
Feb 14 01:29:29.074: INFO: Pod "pod-subpath-test-configmap-zt9t": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026790835s
Feb 14 01:29:31.082: INFO: Pod "pod-subpath-test-configmap-zt9t": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034927046s
Feb 14 01:29:33.089: INFO: Pod "pod-subpath-test-configmap-zt9t": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042077009s
Feb 14 01:29:35.107: INFO: Pod "pod-subpath-test-configmap-zt9t": Phase="Pending", Reason="", readiness=false. Elapsed: 8.059474786s
Feb 14 01:29:37.113: INFO: Pod "pod-subpath-test-configmap-zt9t": Phase="Running", Reason="", readiness=true. Elapsed: 10.06610739s
Feb 14 01:29:39.119: INFO: Pod "pod-subpath-test-configmap-zt9t": Phase="Running", Reason="", readiness=true. Elapsed: 12.072029431s
Feb 14 01:29:41.129: INFO: Pod "pod-subpath-test-configmap-zt9t": Phase="Running", Reason="", readiness=true. Elapsed: 14.081317046s
Feb 14 01:29:43.135: INFO: Pod "pod-subpath-test-configmap-zt9t": Phase="Running", Reason="", readiness=true. Elapsed: 16.087984872s
Feb 14 01:29:45.141: INFO: Pod "pod-subpath-test-configmap-zt9t": Phase="Running", Reason="", readiness=true. Elapsed: 18.094049992s
Feb 14 01:29:47.223: INFO: Pod "pod-subpath-test-configmap-zt9t": Phase="Running", Reason="", readiness=true. Elapsed: 20.175437049s
Feb 14 01:29:49.233: INFO: Pod "pod-subpath-test-configmap-zt9t": Phase="Running", Reason="", readiness=true. Elapsed: 22.185790132s
Feb 14 01:29:51.240: INFO: Pod "pod-subpath-test-configmap-zt9t": Phase="Running", Reason="", readiness=true. Elapsed: 24.192265929s
Feb 14 01:29:53.244: INFO: Pod "pod-subpath-test-configmap-zt9t": Phase="Running", Reason="", readiness=true. Elapsed: 26.197096811s
Feb 14 01:29:55.462: INFO: Pod "pod-subpath-test-configmap-zt9t": Phase="Running", Reason="", readiness=true. Elapsed: 28.415240947s
Feb 14 01:29:57.480: INFO: Pod "pod-subpath-test-configmap-zt9t": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.432307209s
STEP: Saw pod success
Feb 14 01:29:57.480: INFO: Pod "pod-subpath-test-configmap-zt9t" satisfied condition "success or failure"
Feb 14 01:29:57.488: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-zt9t container test-container-subpath-configmap-zt9t: 
STEP: delete the pod
Feb 14 01:29:57.833: INFO: Waiting for pod pod-subpath-test-configmap-zt9t to disappear
Feb 14 01:29:57.842: INFO: Pod pod-subpath-test-configmap-zt9t no longer exists
STEP: Deleting pod pod-subpath-test-configmap-zt9t
Feb 14 01:29:57.843: INFO: Deleting pod "pod-subpath-test-configmap-zt9t" in namespace "subpath-781"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:29:57.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-781" for this suite.

• [SLOW TEST:31.079 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":280,"completed":250,"skipped":3942,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:29:57.882: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Feb 14 01:29:58.563: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Feb 14 01:30:00.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240598, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240598, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240598, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240598, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:30:02.614: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240598, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240598, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240598, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240598, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:30:04.620: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240598, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240598, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240598, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240598, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 14 01:30:07.665: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:30:07.672: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:30:09.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-3259" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:11.490 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":280,"completed":251,"skipped":3943,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:30:09.374: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-e018b026-15bb-48be-a14c-b44086b6e102
STEP: Creating a pod to test consume configMaps
Feb 14 01:30:09.493: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-07a82c6b-638b-4972-b76a-94d789419197" in namespace "projected-7219" to be "success or failure"
Feb 14 01:30:09.584: INFO: Pod "pod-projected-configmaps-07a82c6b-638b-4972-b76a-94d789419197": Phase="Pending", Reason="", readiness=false. Elapsed: 91.290936ms
Feb 14 01:30:11.593: INFO: Pod "pod-projected-configmaps-07a82c6b-638b-4972-b76a-94d789419197": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100202212s
Feb 14 01:30:13.644: INFO: Pod "pod-projected-configmaps-07a82c6b-638b-4972-b76a-94d789419197": Phase="Pending", Reason="", readiness=false. Elapsed: 4.150998338s
Feb 14 01:30:15.652: INFO: Pod "pod-projected-configmaps-07a82c6b-638b-4972-b76a-94d789419197": Phase="Pending", Reason="", readiness=false. Elapsed: 6.159088088s
Feb 14 01:30:17.664: INFO: Pod "pod-projected-configmaps-07a82c6b-638b-4972-b76a-94d789419197": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170649921s
Feb 14 01:30:19.676: INFO: Pod "pod-projected-configmaps-07a82c6b-638b-4972-b76a-94d789419197": Phase="Pending", Reason="", readiness=false. Elapsed: 10.182609528s
Feb 14 01:30:21.684: INFO: Pod "pod-projected-configmaps-07a82c6b-638b-4972-b76a-94d789419197": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.191390146s
STEP: Saw pod success
Feb 14 01:30:21.685: INFO: Pod "pod-projected-configmaps-07a82c6b-638b-4972-b76a-94d789419197" satisfied condition "success or failure"
Feb 14 01:30:21.689: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-07a82c6b-638b-4972-b76a-94d789419197 container projected-configmap-volume-test: 
STEP: delete the pod
Feb 14 01:30:21.730: INFO: Waiting for pod pod-projected-configmaps-07a82c6b-638b-4972-b76a-94d789419197 to disappear
Feb 14 01:30:21.736: INFO: Pod pod-projected-configmaps-07a82c6b-638b-4972-b76a-94d789419197 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:30:21.737: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7219" for this suite.

• [SLOW TEST:12.473 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":252,"skipped":4021,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:30:21.848: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating server pod server in namespace prestop-1228
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-1228
STEP: Deleting pre-stop pod
Feb 14 01:30:45.125: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:30:45.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-1228" for this suite.

• [SLOW TEST:23.360 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":280,"completed":253,"skipped":4027,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:30:45.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 14 01:30:46.226: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 14 01:30:48.248: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240646, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240646, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240646, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240646, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:30:50.254: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240646, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240646, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240646, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240646, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:30:52.258: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240646, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240646, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240646, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240646, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:30:54.298: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240646, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240646, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240646, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240646, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 14 01:30:57.360: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:30:57.375: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-3301-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:30:58.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4138" for this suite.
STEP: Destroying namespace "webhook-4138-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.713 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":280,"completed":254,"skipped":4028,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:30:58.923: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name s-test-opt-del-54e9a2cb-cc3e-4bbc-9320-6b083ef90c7a
STEP: Creating secret with name s-test-opt-upd-e42fdbab-a1e2-414d-b66c-c0bcf77cdeb6
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-54e9a2cb-cc3e-4bbc-9320-6b083ef90c7a
STEP: Updating secret s-test-opt-upd-e42fdbab-a1e2-414d-b66c-c0bcf77cdeb6
STEP: Creating secret with name s-test-opt-create-5d174b38-d987-4629-a24e-62d35530cb43
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:32:20.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-221" for this suite.

• [SLOW TEST:81.235 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":280,"completed":255,"skipped":4030,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:32:20.159: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:88
Feb 14 01:32:20.267: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Feb 14 01:32:20.287: INFO: Waiting for terminating namespaces to be deleted...
Feb 14 01:32:20.289: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Feb 14 01:32:20.295: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Feb 14 01:32:20.295: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 01:32:20.295: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Feb 14 01:32:20.295: INFO: 	Container weave ready: true, restart count 1
Feb 14 01:32:20.295: INFO: 	Container weave-npc ready: true, restart count 0
Feb 14 01:32:20.295: INFO: pod-projected-secrets-f10b5bba-c937-49ef-9358-4572af9903c5 from projected-221 started at 2020-02-14 01:30:59 +0000 UTC (3 container statuses recorded)
Feb 14 01:32:20.295: INFO: 	Container creates-volume-test ready: true, restart count 0
Feb 14 01:32:20.295: INFO: 	Container dels-volume-test ready: true, restart count 0
Feb 14 01:32:20.295: INFO: 	Container upds-volume-test ready: true, restart count 0
Feb 14 01:32:20.295: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Feb 14 01:32:20.317: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 14 01:32:20.317: INFO: 	Container kube-apiserver ready: true, restart count 1
Feb 14 01:32:20.317: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 14 01:32:20.317: INFO: 	Container etcd ready: true, restart count 1
Feb 14 01:32:20.317: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 14 01:32:20.317: INFO: 	Container coredns ready: true, restart count 0
Feb 14 01:32:20.317: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Feb 14 01:32:20.317: INFO: 	Container coredns ready: true, restart count 0
Feb 14 01:32:20.317: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Feb 14 01:32:20.317: INFO: 	Container kube-controller-manager ready: true, restart count 7
Feb 14 01:32:20.317: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Feb 14 01:32:20.318: INFO: 	Container kube-proxy ready: true, restart count 0
Feb 14 01:32:20.318: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Feb 14 01:32:20.318: INFO: 	Container weave ready: true, restart count 0
Feb 14 01:32:20.318: INFO: 	Container weave-npc ready: true, restart count 0
Feb 14 01:32:20.318: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Feb 14 01:32:20.318: INFO: 	Container kube-scheduler ready: true, restart count 11
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Feb 14 01:32:20.452: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 14 01:32:20.452: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 14 01:32:20.452: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Feb 14 01:32:20.452: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Feb 14 01:32:20.452: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Feb 14 01:32:20.452: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Feb 14 01:32:20.452: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Feb 14 01:32:20.452: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Feb 14 01:32:20.452: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Feb 14 01:32:20.452: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
Feb 14 01:32:20.452: INFO: Pod pod-projected-secrets-f10b5bba-c937-49ef-9358-4572af9903c5 requesting resource cpu=0m on Node jerma-node
STEP: Starting Pods to consume most of the cluster CPU.
Feb 14 01:32:20.452: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
Feb 14 01:32:20.468: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-53cef846-ef6f-49a3-a668-1ed963b117fd.15f320c8b5b7bb6b], Reason = [Scheduled], Message = [Successfully assigned sched-pred-866/filler-pod-53cef846-ef6f-49a3-a668-1ed963b117fd to jerma-server-mvvl6gufaqub]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-53cef846-ef6f-49a3-a668-1ed963b117fd.15f320caa1c04c45], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-53cef846-ef6f-49a3-a668-1ed963b117fd.15f320cc99d1d23e], Reason = [Created], Message = [Created container filler-pod-53cef846-ef6f-49a3-a668-1ed963b117fd]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-53cef846-ef6f-49a3-a668-1ed963b117fd.15f320ccba3dfac4], Reason = [Started], Message = [Started container filler-pod-53cef846-ef6f-49a3-a668-1ed963b117fd]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a8840af3-ea10-4f92-8ec0-09f281cbf1e1.15f320c8b4715d1a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-866/filler-pod-a8840af3-ea10-4f92-8ec0-09f281cbf1e1 to jerma-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a8840af3-ea10-4f92-8ec0-09f281cbf1e1.15f320ca3cae2749], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a8840af3-ea10-4f92-8ec0-09f281cbf1e1.15f320cca7f920fa], Reason = [Created], Message = [Created container filler-pod-a8840af3-ea10-4f92-8ec0-09f281cbf1e1]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-a8840af3-ea10-4f92-8ec0-09f281cbf1e1.15f320ccea5efb57], Reason = [Started], Message = [Started container filler-pod-a8840af3-ea10-4f92-8ec0-09f281cbf1e1]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15f320cd624a0be6], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:32:41.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-866" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:79

• [SLOW TEST:21.629 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":280,"completed":256,"skipped":4037,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Lease 
  lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:32:41.793: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename lease-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] lease API should be available [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Lease
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:32:42.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "lease-test-8988" for this suite.
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":280,"completed":257,"skipped":4070,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:32:42.077: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Feb 14 01:32:42.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-1625'
Feb 14 01:32:42.351: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Feb 14 01:32:42.351: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795
Feb 14 01:32:42.364: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-1625'
Feb 14 01:32:42.501: INFO: stderr: ""
Feb 14 01:32:42.502: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:32:42.502: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1625" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":280,"completed":258,"skipped":4099,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:32:42.554: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-1c976be6-c73b-430d-9b6e-d14d9b120ea9
STEP: Creating a pod to test consume configMaps
Feb 14 01:32:42.629: INFO: Waiting up to 5m0s for pod "pod-configmaps-b6d4ca51-bb66-4e2c-9ffc-55a88efff1b0" in namespace "configmap-2134" to be "success or failure"
Feb 14 01:32:42.647: INFO: Pod "pod-configmaps-b6d4ca51-bb66-4e2c-9ffc-55a88efff1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 17.626661ms
Feb 14 01:32:44.930: INFO: Pod "pod-configmaps-b6d4ca51-bb66-4e2c-9ffc-55a88efff1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300385471s
Feb 14 01:32:46.942: INFO: Pod "pod-configmaps-b6d4ca51-bb66-4e2c-9ffc-55a88efff1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.312666845s
Feb 14 01:32:49.058: INFO: Pod "pod-configmaps-b6d4ca51-bb66-4e2c-9ffc-55a88efff1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.42928614s
Feb 14 01:32:51.837: INFO: Pod "pod-configmaps-b6d4ca51-bb66-4e2c-9ffc-55a88efff1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 9.208101205s
Feb 14 01:32:54.133: INFO: Pod "pod-configmaps-b6d4ca51-bb66-4e2c-9ffc-55a88efff1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.503871012s
Feb 14 01:32:56.160: INFO: Pod "pod-configmaps-b6d4ca51-bb66-4e2c-9ffc-55a88efff1b0": Phase="Pending", Reason="", readiness=false. Elapsed: 13.530772167s
Feb 14 01:32:58.168: INFO: Pod "pod-configmaps-b6d4ca51-bb66-4e2c-9ffc-55a88efff1b0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.538679982s
STEP: Saw pod success
Feb 14 01:32:58.168: INFO: Pod "pod-configmaps-b6d4ca51-bb66-4e2c-9ffc-55a88efff1b0" satisfied condition "success or failure"
Feb 14 01:32:58.171: INFO: Trying to get logs from node jerma-server-mvvl6gufaqub pod pod-configmaps-b6d4ca51-bb66-4e2c-9ffc-55a88efff1b0 container configmap-volume-test: 
STEP: delete the pod
Feb 14 01:32:58.243: INFO: Waiting for pod pod-configmaps-b6d4ca51-bb66-4e2c-9ffc-55a88efff1b0 to disappear
Feb 14 01:32:58.247: INFO: Pod pod-configmaps-b6d4ca51-bb66-4e2c-9ffc-55a88efff1b0 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:32:58.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2134" for this suite.

• [SLOW TEST:15.707 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":259,"skipped":4106,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:32:58.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-7777dbef-75b5-4118-82ec-46c2e155f0f2
STEP: Creating a pod to test consume secrets
Feb 14 01:32:58.356: INFO: Waiting up to 5m0s for pod "pod-secrets-453a8b9a-0732-4ad7-9aba-00ecea876655" in namespace "secrets-8553" to be "success or failure"
Feb 14 01:32:58.376: INFO: Pod "pod-secrets-453a8b9a-0732-4ad7-9aba-00ecea876655": Phase="Pending", Reason="", readiness=false. Elapsed: 19.03832ms
Feb 14 01:33:00.391: INFO: Pod "pod-secrets-453a8b9a-0732-4ad7-9aba-00ecea876655": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034895996s
Feb 14 01:33:02.399: INFO: Pod "pod-secrets-453a8b9a-0732-4ad7-9aba-00ecea876655": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042854825s
Feb 14 01:33:04.409: INFO: Pod "pod-secrets-453a8b9a-0732-4ad7-9aba-00ecea876655": Phase="Pending", Reason="", readiness=false. Elapsed: 6.05250143s
Feb 14 01:33:06.440: INFO: Pod "pod-secrets-453a8b9a-0732-4ad7-9aba-00ecea876655": Phase="Pending", Reason="", readiness=false. Elapsed: 8.083043318s
Feb 14 01:33:08.478: INFO: Pod "pod-secrets-453a8b9a-0732-4ad7-9aba-00ecea876655": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.12138324s
STEP: Saw pod success
Feb 14 01:33:08.478: INFO: Pod "pod-secrets-453a8b9a-0732-4ad7-9aba-00ecea876655" satisfied condition "success or failure"
Feb 14 01:33:08.485: INFO: Trying to get logs from node jerma-node pod pod-secrets-453a8b9a-0732-4ad7-9aba-00ecea876655 container secret-volume-test: 
STEP: delete the pod
Feb 14 01:33:08.544: INFO: Waiting for pod pod-secrets-453a8b9a-0732-4ad7-9aba-00ecea876655 to disappear
Feb 14 01:33:08.614: INFO: Pod pod-secrets-453a8b9a-0732-4ad7-9aba-00ecea876655 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:33:08.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8553" for this suite.

• [SLOW TEST:10.373 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":260,"skipped":4113,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:33:08.637: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:332
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the initial replication controller
Feb 14 01:33:08.696: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7186'
Feb 14 01:33:09.137: INFO: stderr: ""
Feb 14 01:33:09.137: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 01:33:09.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7186'
Feb 14 01:33:09.320: INFO: stderr: ""
Feb 14 01:33:09.320: INFO: stdout: "update-demo-nautilus-2l8lq update-demo-nautilus-pmdns "
Feb 14 01:33:09.320: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l8lq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7186'
Feb 14 01:33:09.483: INFO: stderr: ""
Feb 14 01:33:09.484: INFO: stdout: ""
Feb 14 01:33:09.484: INFO: update-demo-nautilus-2l8lq is created but not running
Feb 14 01:33:14.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7186'
Feb 14 01:33:14.621: INFO: stderr: ""
Feb 14 01:33:14.621: INFO: stdout: "update-demo-nautilus-2l8lq update-demo-nautilus-pmdns "
Feb 14 01:33:14.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l8lq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7186'
Feb 14 01:33:17.670: INFO: stderr: ""
Feb 14 01:33:17.671: INFO: stdout: ""
Feb 14 01:33:17.671: INFO: update-demo-nautilus-2l8lq is created but not running
Feb 14 01:33:22.672: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7186'
Feb 14 01:33:22.784: INFO: stderr: ""
Feb 14 01:33:22.784: INFO: stdout: "update-demo-nautilus-2l8lq update-demo-nautilus-pmdns "
Feb 14 01:33:22.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l8lq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7186'
Feb 14 01:33:22.862: INFO: stderr: ""
Feb 14 01:33:22.862: INFO: stdout: "true"
Feb 14 01:33:22.863: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-2l8lq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7186'
Feb 14 01:33:22.972: INFO: stderr: ""
Feb 14 01:33:22.972: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 01:33:22.972: INFO: validating pod update-demo-nautilus-2l8lq
Feb 14 01:33:22.999: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 01:33:22.999: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 14 01:33:22.999: INFO: update-demo-nautilus-2l8lq is verified up and running
Feb 14 01:33:22.999: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pmdns -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7186'
Feb 14 01:33:23.112: INFO: stderr: ""
Feb 14 01:33:23.112: INFO: stdout: "true"
Feb 14 01:33:23.112: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-pmdns -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7186'
Feb 14 01:33:23.206: INFO: stderr: ""
Feb 14 01:33:23.206: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Feb 14 01:33:23.206: INFO: validating pod update-demo-nautilus-pmdns
Feb 14 01:33:23.215: INFO: got data: {
  "image": "nautilus.jpg"
}

Feb 14 01:33:23.215: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Feb 14 01:33:23.215: INFO: update-demo-nautilus-pmdns is verified up and running
STEP: rolling-update to new replication controller
Feb 14 01:33:23.218: INFO: scanned /root for discovery docs: 
Feb 14 01:33:23.218: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-7186'
Feb 14 01:33:54.337: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Feb 14 01:33:54.338: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Feb 14 01:33:54.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-7186'
Feb 14 01:33:54.463: INFO: stderr: ""
Feb 14 01:33:54.463: INFO: stdout: "update-demo-kitten-b4vs4 update-demo-kitten-tw548 "
Feb 14 01:33:54.463: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-b4vs4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7186'
Feb 14 01:33:54.642: INFO: stderr: ""
Feb 14 01:33:54.642: INFO: stdout: "true"
Feb 14 01:33:54.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-b4vs4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7186'
Feb 14 01:33:54.779: INFO: stderr: ""
Feb 14 01:33:54.780: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 14 01:33:54.780: INFO: validating pod update-demo-kitten-b4vs4
Feb 14 01:33:54.792: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 14 01:33:54.792: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb 14 01:33:54.792: INFO: update-demo-kitten-b4vs4 is verified up and running
Feb 14 01:33:54.792: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tw548 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7186'
Feb 14 01:33:54.903: INFO: stderr: ""
Feb 14 01:33:54.903: INFO: stdout: "true"
Feb 14 01:33:54.903: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-tw548 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7186'
Feb 14 01:33:54.997: INFO: stderr: ""
Feb 14 01:33:54.997: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Feb 14 01:33:54.998: INFO: validating pod update-demo-kitten-tw548
Feb 14 01:33:55.015: INFO: got data: {
  "image": "kitten.jpg"
}

Feb 14 01:33:55.015: INFO: Unmarshalled json jpg/img => {kitten.jpg}, expecting kitten.jpg.
Feb 14 01:33:55.015: INFO: update-demo-kitten-tw548 is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:33:55.015: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7186" for this suite.

• [SLOW TEST:46.397 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:330
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller  [Conformance]","total":280,"completed":261,"skipped":4130,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:33:55.035: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Feb 14 01:34:08.782: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:34:09.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-927" for this suite.

• [SLOW TEST:14.887 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":280,"completed":262,"skipped":4133,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
S
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:34:09.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-6a805f54-6469-40e4-970b-a3f6840fcf37
STEP: Creating a pod to test consume secrets
Feb 14 01:34:10.093: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4a1a32c3-4072-4182-9822-ff0fbeb8f03b" in namespace "projected-7555" to be "success or failure"
Feb 14 01:34:10.154: INFO: Pod "pod-projected-secrets-4a1a32c3-4072-4182-9822-ff0fbeb8f03b": Phase="Pending", Reason="", readiness=false. Elapsed: 60.699133ms
Feb 14 01:34:12.164: INFO: Pod "pod-projected-secrets-4a1a32c3-4072-4182-9822-ff0fbeb8f03b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.071351062s
Feb 14 01:34:14.175: INFO: Pod "pod-projected-secrets-4a1a32c3-4072-4182-9822-ff0fbeb8f03b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081902514s
Feb 14 01:34:16.223: INFO: Pod "pod-projected-secrets-4a1a32c3-4072-4182-9822-ff0fbeb8f03b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.130017872s
Feb 14 01:34:18.235: INFO: Pod "pod-projected-secrets-4a1a32c3-4072-4182-9822-ff0fbeb8f03b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.142078763s
Feb 14 01:34:20.247: INFO: Pod "pod-projected-secrets-4a1a32c3-4072-4182-9822-ff0fbeb8f03b": Phase="Pending", Reason="", readiness=false. Elapsed: 10.154473031s
Feb 14 01:34:22.264: INFO: Pod "pod-projected-secrets-4a1a32c3-4072-4182-9822-ff0fbeb8f03b": Phase="Pending", Reason="", readiness=false. Elapsed: 12.171379977s
Feb 14 01:34:24.271: INFO: Pod "pod-projected-secrets-4a1a32c3-4072-4182-9822-ff0fbeb8f03b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.177836932s
Feb 14 01:34:26.280: INFO: Pod "pod-projected-secrets-4a1a32c3-4072-4182-9822-ff0fbeb8f03b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.187230591s
STEP: Saw pod success
Feb 14 01:34:26.280: INFO: Pod "pod-projected-secrets-4a1a32c3-4072-4182-9822-ff0fbeb8f03b" satisfied condition "success or failure"
Feb 14 01:34:26.288: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-4a1a32c3-4072-4182-9822-ff0fbeb8f03b container projected-secret-volume-test: 
STEP: delete the pod
Feb 14 01:34:26.356: INFO: Waiting for pod pod-projected-secrets-4a1a32c3-4072-4182-9822-ff0fbeb8f03b to disappear
Feb 14 01:34:26.361: INFO: Pod pod-projected-secrets-4a1a32c3-4072-4182-9822-ff0fbeb8f03b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:34:26.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7555" for this suite.

• [SLOW TEST:16.526 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":280,"completed":263,"skipped":4134,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:34:26.453: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:34:36.725: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-6128" for this suite.

• [SLOW TEST:10.298 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":264,"skipped":4175,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:34:36.753: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Feb 14 01:34:45.484: INFO: Successfully updated pod "annotationupdated22a0dc2-4c81-4085-9cca-3410a4a70d8b"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:34:47.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2243" for this suite.

• [SLOW TEST:10.805 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":280,"completed":265,"skipped":4188,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:34:47.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7076.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-7076.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7076.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-7076.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-7076.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7076.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Feb 14 01:35:01.779: INFO: DNS probes using dns-7076/dns-test-c655203f-7294-469c-af26-53de13f33483 succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:35:01.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7076" for this suite.

• [SLOW TEST:14.394 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":280,"completed":266,"skipped":4275,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:35:01.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0214 01:35:05.243799       9 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Feb 14 01:35:05.243: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:35:05.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5895" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":280,"completed":267,"skipped":4310,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:35:05.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-downwardapi-cs2g
STEP: Creating a pod to test atomic-volume-subpath
Feb 14 01:35:06.665: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-cs2g" in namespace "subpath-5076" to be "success or failure"
Feb 14 01:35:06.934: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Pending", Reason="", readiness=false. Elapsed: 268.666187ms
Feb 14 01:35:08.944: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Pending", Reason="", readiness=false. Elapsed: 2.279013153s
Feb 14 01:35:11.836: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Pending", Reason="", readiness=false. Elapsed: 5.171247004s
Feb 14 01:35:13.846: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Pending", Reason="", readiness=false. Elapsed: 7.180701288s
Feb 14 01:35:15.856: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Pending", Reason="", readiness=false. Elapsed: 9.191042064s
Feb 14 01:35:17.871: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Pending", Reason="", readiness=false. Elapsed: 11.206536322s
Feb 14 01:35:19.880: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Pending", Reason="", readiness=false. Elapsed: 13.21511651s
Feb 14 01:35:21.888: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Running", Reason="", readiness=true. Elapsed: 15.222995569s
Feb 14 01:35:23.897: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Running", Reason="", readiness=true. Elapsed: 17.231998506s
Feb 14 01:35:25.904: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Running", Reason="", readiness=true. Elapsed: 19.239152165s
Feb 14 01:35:27.912: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Running", Reason="", readiness=true. Elapsed: 21.247631375s
Feb 14 01:35:29.921: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Running", Reason="", readiness=true. Elapsed: 23.256007787s
Feb 14 01:35:31.932: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Running", Reason="", readiness=true. Elapsed: 25.267543554s
Feb 14 01:35:33.945: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Running", Reason="", readiness=true. Elapsed: 27.280450669s
Feb 14 01:35:35.967: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Running", Reason="", readiness=true. Elapsed: 29.301770664s
Feb 14 01:35:37.974: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Running", Reason="", readiness=true. Elapsed: 31.309476049s
Feb 14 01:35:39.981: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Running", Reason="", readiness=true. Elapsed: 33.315892642s
Feb 14 01:35:41.989: INFO: Pod "pod-subpath-test-downwardapi-cs2g": Phase="Succeeded", Reason="", readiness=false. Elapsed: 35.323954174s
STEP: Saw pod success
Feb 14 01:35:41.989: INFO: Pod "pod-subpath-test-downwardapi-cs2g" satisfied condition "success or failure"
Feb 14 01:35:41.994: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-cs2g container test-container-subpath-downwardapi-cs2g: 
STEP: delete the pod
Feb 14 01:35:42.046: INFO: Waiting for pod pod-subpath-test-downwardapi-cs2g to disappear
Feb 14 01:35:42.834: INFO: Pod pod-subpath-test-downwardapi-cs2g no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-cs2g
Feb 14 01:35:42.835: INFO: Deleting pod "pod-subpath-test-downwardapi-cs2g" in namespace "subpath-5076"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:35:42.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5076" for this suite.

• [SLOW TEST:37.591 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":280,"completed":268,"skipped":4347,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:35:42.867: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on node default medium
Feb 14 01:35:42.983: INFO: Waiting up to 5m0s for pod "pod-9e3b9a5f-8546-4982-9189-d2b2b06b6fd9" in namespace "emptydir-7657" to be "success or failure"
Feb 14 01:35:42.992: INFO: Pod "pod-9e3b9a5f-8546-4982-9189-d2b2b06b6fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 9.128311ms
Feb 14 01:35:45.006: INFO: Pod "pod-9e3b9a5f-8546-4982-9189-d2b2b06b6fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02283216s
Feb 14 01:35:47.013: INFO: Pod "pod-9e3b9a5f-8546-4982-9189-d2b2b06b6fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029628299s
Feb 14 01:35:49.022: INFO: Pod "pod-9e3b9a5f-8546-4982-9189-d2b2b06b6fd9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038570124s
Feb 14 01:35:51.035: INFO: Pod "pod-9e3b9a5f-8546-4982-9189-d2b2b06b6fd9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051455443s
STEP: Saw pod success
Feb 14 01:35:51.035: INFO: Pod "pod-9e3b9a5f-8546-4982-9189-d2b2b06b6fd9" satisfied condition "success or failure"
Feb 14 01:35:51.040: INFO: Trying to get logs from node jerma-node pod pod-9e3b9a5f-8546-4982-9189-d2b2b06b6fd9 container test-container: 
STEP: delete the pod
Feb 14 01:35:51.076: INFO: Waiting for pod pod-9e3b9a5f-8546-4982-9189-d2b2b06b6fd9 to disappear
Feb 14 01:35:51.083: INFO: Pod pod-9e3b9a5f-8546-4982-9189-d2b2b06b6fd9 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:35:51.083: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7657" for this suite.

• [SLOW TEST:10.446 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":269,"skipped":4384,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:35:53.316: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Feb 14 01:35:54.523: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Feb 14 01:35:56.563: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240954, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240954, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240954, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240954, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:35:58.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240954, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240954, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240954, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240954, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:36:00.581: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240954, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240954, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240954, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240954, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Feb 14 01:36:03.079: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240954, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240954, loc:(*time.Location)(0x7e52ca0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240954, loc:(*time.Location)(0x7e52ca0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63717240954, loc:(*time.Location)(0x7e52ca0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Feb 14 01:36:05.631: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:36:05.850: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8257" for this suite.
STEP: Destroying namespace "webhook-8257-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.683 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":280,"completed":270,"skipped":4404,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:36:06.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:36:06.261: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Feb 14 01:36:08.723: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:36:08.823: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-4660" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":280,"completed":271,"skipped":4488,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:36:08.948: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:84
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:99
STEP: Creating service test in namespace statefulset-2254
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Feb 14 01:36:10.633: INFO: Found 0 stateful pods, waiting for 3
Feb 14 01:36:22.353: INFO: Found 1 stateful pods, waiting for 3
Feb 14 01:36:30.647: INFO: Found 2 stateful pods, waiting for 3
Feb 14 01:36:40.651: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 01:36:40.651: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 01:36:40.651: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 14 01:36:50.643: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 01:36:50.644: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 01:36:50.644: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Feb 14 01:36:50.690: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Feb 14 01:37:00.781: INFO: Updating stateful set ss2
Feb 14 01:37:00.801: INFO: Waiting for Pod statefulset-2254/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 14 01:37:10.815: INFO: Waiting for Pod statefulset-2254/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Feb 14 01:37:21.174: INFO: Found 2 stateful pods, waiting for 3
Feb 14 01:37:31.597: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 01:37:31.597: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 01:37:31.597: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Feb 14 01:37:41.183: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 01:37:41.183: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Feb 14 01:37:41.183: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Feb 14 01:37:41.212: INFO: Updating stateful set ss2
Feb 14 01:37:41.237: INFO: Waiting for Pod statefulset-2254/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 14 01:37:51.283: INFO: Updating stateful set ss2
Feb 14 01:37:51.546: INFO: Waiting for StatefulSet statefulset-2254/ss2 to complete update
Feb 14 01:37:51.546: INFO: Waiting for Pod statefulset-2254/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Feb 14 01:38:01.561: INFO: Waiting for StatefulSet statefulset-2254/ss2 to complete update
Feb 14 01:38:01.561: INFO: Waiting for Pod statefulset-2254/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:110
Feb 14 01:38:11.568: INFO: Deleting all statefulset in ns statefulset-2254
Feb 14 01:38:11.573: INFO: Scaling statefulset ss2 to 0
Feb 14 01:38:51.606: INFO: Waiting for statefulset status.replicas updated to 0
Feb 14 01:38:51.612: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:38:51.650: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2254" for this suite.

• [SLOW TEST:162.713 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":280,"completed":272,"skipped":4496,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:38:51.663: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:39:03.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7996" for this suite.

• [SLOW TEST:12.260 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":280,"completed":273,"skipped":4512,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:39:03.925: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:39:04.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3122" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":280,"completed":274,"skipped":4518,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:39:04.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:41
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Feb 14 01:39:04.415: INFO: Waiting up to 5m0s for pod "downwardapi-volume-878e621d-0abe-4376-a9d4-61b4fb98745b" in namespace "downward-api-170" to be "success or failure"
Feb 14 01:39:04.430: INFO: Pod "downwardapi-volume-878e621d-0abe-4376-a9d4-61b4fb98745b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.27494ms
Feb 14 01:39:06.440: INFO: Pod "downwardapi-volume-878e621d-0abe-4376-a9d4-61b4fb98745b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025288011s
Feb 14 01:39:08.450: INFO: Pod "downwardapi-volume-878e621d-0abe-4376-a9d4-61b4fb98745b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035170858s
Feb 14 01:39:10.626: INFO: Pod "downwardapi-volume-878e621d-0abe-4376-a9d4-61b4fb98745b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.211243554s
Feb 14 01:39:12.635: INFO: Pod "downwardapi-volume-878e621d-0abe-4376-a9d4-61b4fb98745b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.219967744s
STEP: Saw pod success
Feb 14 01:39:12.635: INFO: Pod "downwardapi-volume-878e621d-0abe-4376-a9d4-61b4fb98745b" satisfied condition "success or failure"
Feb 14 01:39:12.700: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-878e621d-0abe-4376-a9d4-61b4fb98745b container client-container: 
STEP: delete the pod
Feb 14 01:39:12.744: INFO: Waiting for pod downwardapi-volume-878e621d-0abe-4376-a9d4-61b4fb98745b to disappear
Feb 14 01:39:12.751: INFO: Pod downwardapi-volume-878e621d-0abe-4376-a9d4-61b4fb98745b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:39:12.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-170" for this suite.

• [SLOW TEST:8.559 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:36
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":280,"completed":275,"skipped":4520,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:39:12.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:41
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:39:12.940: INFO: Waiting up to 5m0s for pod "busybox-user-65534-5c94daf4-0996-4b9d-9f97-fcdded75ac9b" in namespace "security-context-test-742" to be "success or failure"
Feb 14 01:39:12.960: INFO: Pod "busybox-user-65534-5c94daf4-0996-4b9d-9f97-fcdded75ac9b": Phase="Pending", Reason="", readiness=false. Elapsed: 20.396778ms
Feb 14 01:39:14.968: INFO: Pod "busybox-user-65534-5c94daf4-0996-4b9d-9f97-fcdded75ac9b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028498667s
Feb 14 01:39:16.979: INFO: Pod "busybox-user-65534-5c94daf4-0996-4b9d-9f97-fcdded75ac9b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039439889s
Feb 14 01:39:18.990: INFO: Pod "busybox-user-65534-5c94daf4-0996-4b9d-9f97-fcdded75ac9b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050056711s
Feb 14 01:39:21.015: INFO: Pod "busybox-user-65534-5c94daf4-0996-4b9d-9f97-fcdded75ac9b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.075263051s
Feb 14 01:39:21.015: INFO: Pod "busybox-user-65534-5c94daf4-0996-4b9d-9f97-fcdded75ac9b" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:39:21.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-742" for this suite.

• [SLOW TEST:8.258 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:45
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":280,"completed":276,"skipped":4542,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:39:21.031: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Feb 14 01:39:21.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Feb 14 01:39:22.883: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-14T01:39:22Z generation:1 name:name1 resourceVersion:8290513 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:fadfd6db-7fc3-4174-abe3-d811ee81b309] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Feb 14 01:39:32.899: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-14T01:39:32Z generation:1 name:name2 resourceVersion:8290545 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8359e381-8563-41a4-9f98-6e89845d87ba] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Feb 14 01:39:42.924: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-14T01:39:22Z generation:2 name:name1 resourceVersion:8290569 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:fadfd6db-7fc3-4174-abe3-d811ee81b309] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Feb 14 01:39:52.937: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-14T01:39:32Z generation:2 name:name2 resourceVersion:8290597 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8359e381-8563-41a4-9f98-6e89845d87ba] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Feb 14 01:40:02.949: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-14T01:39:22Z generation:2 name:name1 resourceVersion:8290623 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:fadfd6db-7fc3-4174-abe3-d811ee81b309] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Feb 14 01:40:12.971: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-02-14T01:39:32Z generation:2 name:name2 resourceVersion:8290647 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:8359e381-8563-41a4-9f98-6e89845d87ba] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:40:23.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-5981" for this suite.

• [SLOW TEST:62.481 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":280,"completed":277,"skipped":4547,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl logs 
  should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:40:23.513: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:280
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1466
STEP: creating a pod
Feb 14 01:40:23.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-4382 -- logs-generator --log-lines-total 100 --run-duration 20s'
Feb 14 01:40:26.191: INFO: stderr: ""
Feb 14 01:40:26.192: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Waiting for log generator to start.
Feb 14 01:40:26.192: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Feb 14 01:40:26.192: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-4382" to be "running and ready, or succeeded"
Feb 14 01:40:26.242: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 50.139714ms
Feb 14 01:40:28.254: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061579951s
Feb 14 01:40:30.375: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.182504568s
Feb 14 01:40:32.381: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18909583s
Feb 14 01:40:34.390: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.19802425s
Feb 14 01:40:34.390: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Feb 14 01:40:34.390: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Feb 14 01:40:34.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4382'
Feb 14 01:40:34.676: INFO: stderr: ""
Feb 14 01:40:34.676: INFO: stdout: "I0214 01:40:33.091094       1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/gl2 262\nI0214 01:40:33.291344       1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/jmc 305\nI0214 01:40:33.491554       1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/sjj 315\nI0214 01:40:33.691402       1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/x7x 309\nI0214 01:40:33.891652       1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/tkf 355\nI0214 01:40:34.091383       1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/2f9 568\nI0214 01:40:34.291452       1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/pc2 452\nI0214 01:40:34.491659       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/4bfg 535\n"
STEP: limiting log lines
Feb 14 01:40:34.677: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4382 --tail=1'
Feb 14 01:40:34.811: INFO: stderr: ""
Feb 14 01:40:34.811: INFO: stdout: "I0214 01:40:34.691684       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/2hb 248\n"
Feb 14 01:40:34.812: INFO: got output "I0214 01:40:34.691684       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/2hb 248\n"
STEP: limiting log bytes
Feb 14 01:40:34.812: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4382 --limit-bytes=1'
Feb 14 01:40:34.956: INFO: stderr: ""
Feb 14 01:40:34.956: INFO: stdout: "I"
Feb 14 01:40:34.956: INFO: got output "I"
STEP: exposing timestamps
Feb 14 01:40:34.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4382 --tail=1 --timestamps'
Feb 14 01:40:35.101: INFO: stderr: ""
Feb 14 01:40:35.102: INFO: stdout: "2020-02-14T01:40:35.091533418Z I0214 01:40:35.091302       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/45vl 596\n"
Feb 14 01:40:35.102: INFO: got output "2020-02-14T01:40:35.091533418Z I0214 01:40:35.091302       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/45vl 596\n"
STEP: restricting to a time range
Feb 14 01:40:37.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4382 --since=1s'
Feb 14 01:40:37.821: INFO: stderr: ""
Feb 14 01:40:37.821: INFO: stdout: "I0214 01:40:36.891388       1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/d7d 215\nI0214 01:40:37.091526       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/gvkg 345\nI0214 01:40:37.291519       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/rt6h 438\nI0214 01:40:37.491510       1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/x6g 212\nI0214 01:40:37.691453       1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/l27 525\n"
Feb 14 01:40:37.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-4382 --since=24h'
Feb 14 01:40:37.966: INFO: stderr: ""
Feb 14 01:40:37.967: INFO: stdout: "I0214 01:40:33.091094       1 logs_generator.go:76] 0 POST /api/v1/namespaces/kube-system/pods/gl2 262\nI0214 01:40:33.291344       1 logs_generator.go:76] 1 POST /api/v1/namespaces/default/pods/jmc 305\nI0214 01:40:33.491554       1 logs_generator.go:76] 2 POST /api/v1/namespaces/default/pods/sjj 315\nI0214 01:40:33.691402       1 logs_generator.go:76] 3 GET /api/v1/namespaces/kube-system/pods/x7x 309\nI0214 01:40:33.891652       1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/tkf 355\nI0214 01:40:34.091383       1 logs_generator.go:76] 5 GET /api/v1/namespaces/ns/pods/2f9 568\nI0214 01:40:34.291452       1 logs_generator.go:76] 6 GET /api/v1/namespaces/ns/pods/pc2 452\nI0214 01:40:34.491659       1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/4bfg 535\nI0214 01:40:34.691684       1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/2hb 248\nI0214 01:40:34.891892       1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/7b5 585\nI0214 01:40:35.091302       1 logs_generator.go:76] 10 PUT /api/v1/namespaces/ns/pods/45vl 596\nI0214 01:40:35.291507       1 logs_generator.go:76] 11 GET /api/v1/namespaces/kube-system/pods/qr5n 368\nI0214 01:40:35.491484       1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/tmb 574\nI0214 01:40:35.691435       1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/vp5k 402\nI0214 01:40:35.891505       1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/d9pq 429\nI0214 01:40:36.091464       1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/57lb 253\nI0214 01:40:36.291429       1 logs_generator.go:76] 16 PUT /api/v1/namespaces/default/pods/x44 297\nI0214 01:40:36.491491       1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/f62 590\nI0214 01:40:36.691375       1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/7pdf 301\nI0214 01:40:36.891388       1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/d7d 215\nI0214 01:40:37.091526       1 logs_generator.go:76] 20 PUT /api/v1/namespaces/default/pods/gvkg 345\nI0214 01:40:37.291519       1 logs_generator.go:76] 21 PUT /api/v1/namespaces/default/pods/rt6h 438\nI0214 01:40:37.491510       1 logs_generator.go:76] 22 PUT /api/v1/namespaces/ns/pods/x6g 212\nI0214 01:40:37.691453       1 logs_generator.go:76] 23 POST /api/v1/namespaces/kube-system/pods/l27 525\nI0214 01:40:37.891413       1 logs_generator.go:76] 24 GET /api/v1/namespaces/ns/pods/vjdm 549\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1472
Feb 14 01:40:37.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-4382'
Feb 14 01:40:52.411: INFO: stderr: ""
Feb 14 01:40:52.411: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:40:52.412: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4382" for this suite.

• [SLOW TEST:28.922 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1462
    should be able to retrieve and filter logs  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs  [Conformance]","total":280,"completed":278,"skipped":4553,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Feb 14 01:40:52.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Feb 14 01:41:08.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7073" for this suite.

• [SLOW TEST:16.244 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":280,"completed":279,"skipped":4555,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}
SSSSSSSSSS
Feb 14 01:41:08.680: INFO: Running AfterSuite actions on all nodes
Feb 14 01:41:08.680: INFO: Running AfterSuite actions on node 1
Feb 14 01:41:08.680: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":280,"completed":279,"skipped":4565,"failed":1,"failures":["[sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]"]}


Summarizing 1 Failure:

[Fail] [sig-cli] Kubectl client Guestbook application [It] should create and stop a working application  [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:2339

Ran 280 of 4845 Specs in 7320.994 seconds
FAIL! -- 279 Passed | 1 Failed | 0 Pending | 4565 Skipped
--- FAIL: TestE2E (7321.19s)
FAIL
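
[Editor's note] The lone Guestbook failure can be replayed in isolation by focusing the suite on that one spec. Assuming the e2e.test binary built from the same tree, something like:

./e2e.test -kubeconfig=/root/.kube/config \
  -ginkgo.focus='Guestbook application should create and stop a working application'
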