I0120 23:39:04.254998 8 test_context.go:416] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0120 23:39:04.255626 8 e2e.go:109] Starting e2e run "df456181-a889-47f1-b12e-0e629b75e9bc" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1579563542 - Will randomize all specs
Will run 278 of 4841 specs

Jan 20 23:39:04.305: INFO: >>> kubeConfig: /root/.kube/config
Jan 20 23:39:04.311: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Jan 20 23:39:04.338: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Jan 20 23:39:04.376: INFO: 10 / 10 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Jan 20 23:39:04.376: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Jan 20 23:39:04.376: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Jan 20 23:39:04.401: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Jan 20 23:39:04.401: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Jan 20 23:39:04.401: INFO: e2e test version: v1.18.0-alpha.1.106+4f70231ce7736c
Jan 20 23:39:04.403: INFO: kube-apiserver version: v1.17.0
Jan 20 23:39:04.403: INFO: >>> kubeConfig: /root/.kube/config
Jan 20 23:39:04.410: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 20 23:39:04.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
Jan 20 23:39:04.582: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 20 23:39:11.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1156" for this suite.
• [SLOW TEST:7.214 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":1,"skipped":14,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 20 23:39:11.626: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 20 23:39:11.758: INFO: Waiting up to 5m0s for pod "downwardapi-volume-23d2a78d-828a-4112-a143-210130b8e866" in namespace "projected-2718" to be "success or failure"
Jan 20 23:39:11.782: INFO: Pod "downwardapi-volume-23d2a78d-828a-4112-a143-210130b8e866": Phase="Pending", Reason="", readiness=false. Elapsed: 23.804914ms
Jan 20 23:39:13.795: INFO: Pod "downwardapi-volume-23d2a78d-828a-4112-a143-210130b8e866": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03719796s
Jan 20 23:39:15.807: INFO: Pod "downwardapi-volume-23d2a78d-828a-4112-a143-210130b8e866": Phase="Pending", Reason="", readiness=false. Elapsed: 4.048858084s
Jan 20 23:39:17.815: INFO: Pod "downwardapi-volume-23d2a78d-828a-4112-a143-210130b8e866": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056857487s
Jan 20 23:39:19.832: INFO: Pod "downwardapi-volume-23d2a78d-828a-4112-a143-210130b8e866": Phase="Pending", Reason="", readiness=false. Elapsed: 8.073760538s
Jan 20 23:39:21.846: INFO: Pod "downwardapi-volume-23d2a78d-828a-4112-a143-210130b8e866": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.08780814s
STEP: Saw pod success
Jan 20 23:39:21.846: INFO: Pod "downwardapi-volume-23d2a78d-828a-4112-a143-210130b8e866" satisfied condition "success or failure"
Jan 20 23:39:21.852: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-23d2a78d-828a-4112-a143-210130b8e866 container client-container:
STEP: delete the pod
Jan 20 23:39:21.964: INFO: Waiting for pod downwardapi-volume-23d2a78d-828a-4112-a143-210130b8e866 to disappear
Jan 20 23:39:21.972: INFO: Pod downwardapi-volume-23d2a78d-828a-4112-a143-210130b8e866 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 20 23:39:21.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2718" for this suite.
• [SLOW TEST:10.393 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":2,"skipped":70,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl label
  should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 20 23:39:22.020: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1383
STEP: creating the pod
Jan 20 23:39:22.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2461'
Jan 20 23:39:24.608: INFO: stderr: ""
Jan 20 23:39:24.608: INFO: stdout: "pod/pause created\n"
Jan 20 23:39:24.608: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Jan 20 23:39:24.608: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2461" to be "running and ready"
Jan 20 23:39:24.700: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 92.50387ms
Jan 20 23:39:26.710: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.101726123s
Jan 20 23:39:28.714: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.10661416s
Jan 20 23:39:30.724: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11646439s
Jan 20 23:39:32.742: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.133718817s
Jan 20 23:39:32.742: INFO: Pod "pause" satisfied condition "running and ready"
Jan 20 23:39:32.742: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: adding the label testing-label with value testing-label-value to a pod
Jan 20 23:39:32.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2461'
Jan 20 23:39:32.924: INFO: stderr: ""
Jan 20 23:39:32.924: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Jan 20 23:39:32.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2461'
Jan 20 23:39:33.082: INFO: stderr: ""
Jan 20 23:39:33.082: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s testing-label-value\n"
STEP: removing the label testing-label of a pod
Jan 20 23:39:33.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2461'
Jan 20 23:39:33.227: INFO: stderr: ""
Jan 20 23:39:33.227: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Jan 20 23:39:33.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2461'
Jan 20 23:39:33.354: INFO: stderr: ""
Jan 20 23:39:33.354: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1390
STEP: using delete to clean up resources
Jan 20 23:39:33.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2461'
Jan 20 23:39:33.533: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 20 23:39:33.534: INFO: stdout: "pod \"pause\" force deleted\n"
Jan 20 23:39:33.534: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2461'
Jan 20 23:39:33.750: INFO: stderr: "No resources found in kubectl-2461 namespace.\n"
Jan 20 23:39:33.750: INFO: stdout: ""
Jan 20 23:39:33.750: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2461 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 20 23:39:33.865: INFO: stderr: ""
Jan 20 23:39:33.865: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 20 23:39:33.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2461" for this suite.
• [SLOW TEST:11.860 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1380
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":3,"skipped":71,"failed":0}
S
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 20 23:39:33.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 20 23:39:33.991: INFO: Waiting up to 5m0s for pod "pod-cb7d7ab5-d515-42a3-a8ae-1c4849d05eef" in namespace "emptydir-7738" to be "success or failure"
Jan 20 23:39:33.996: INFO: Pod "pod-cb7d7ab5-d515-42a3-a8ae-1c4849d05eef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.767481ms
Jan 20 23:39:36.000: INFO: Pod "pod-cb7d7ab5-d515-42a3-a8ae-1c4849d05eef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008841812s
Jan 20 23:39:38.005: INFO: Pod "pod-cb7d7ab5-d515-42a3-a8ae-1c4849d05eef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014072304s
Jan 20 23:39:40.013: INFO: Pod "pod-cb7d7ab5-d515-42a3-a8ae-1c4849d05eef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.021181917s
Jan 20 23:39:42.019: INFO: Pod "pod-cb7d7ab5-d515-42a3-a8ae-1c4849d05eef": Phase="Pending", Reason="", readiness=false. Elapsed: 8.027722113s
Jan 20 23:39:44.029: INFO: Pod "pod-cb7d7ab5-d515-42a3-a8ae-1c4849d05eef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.038066519s
STEP: Saw pod success
Jan 20 23:39:44.030: INFO: Pod "pod-cb7d7ab5-d515-42a3-a8ae-1c4849d05eef" satisfied condition "success or failure"
Jan 20 23:39:44.032: INFO: Trying to get logs from node jerma-node pod pod-cb7d7ab5-d515-42a3-a8ae-1c4849d05eef container test-container:
STEP: delete the pod
Jan 20 23:39:44.573: INFO: Waiting for pod pod-cb7d7ab5-d515-42a3-a8ae-1c4849d05eef to disappear
Jan 20 23:39:44.583: INFO: Pod pod-cb7d7ab5-d515-42a3-a8ae-1c4849d05eef no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 20 23:39:44.583: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7738" for this suite.
• [SLOW TEST:10.722 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":72,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 20 23:39:44.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 20 23:40:01.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8458" for this suite.
• [SLOW TEST:16.612 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":5,"skipped":88,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 20 23:40:01.217: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 20 23:40:02.153: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 20 23:40:04.168: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160402, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160402, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160402, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160402, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 23:40:06.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160402, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160402, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160402, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160402, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 23:40:08.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160402, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160402, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160402, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160402, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 23:40:10.230: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160402, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160402, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160402, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160402, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 20 23:40:13.215: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the webhook via the AdmissionRegistration API
Jan 20 23:40:13.332: INFO: Waiting for webhook configuration to be ready...
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Jan 20 23:40:21.490: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-1867 to-be-attached-pod -i -c=container1'
Jan 20 23:40:21.724: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 20 23:40:21.732: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1867" for this suite.
STEP: Destroying namespace "webhook-1867-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:20.667 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":6,"skipped":108,"failed":0}
SSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 20 23:40:21.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-311
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Jan 20 23:40:22.183: INFO: Found 0 stateful pods, waiting for 3
Jan 20 23:40:32.193: INFO: Found 1 stateful pods, waiting for 3
Jan 20 23:40:42.200: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 23:40:42.200: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 23:40:42.200: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 20 23:40:52.201: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 23:40:52.201: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 23:40:52.201: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Jan 20 23:40:52.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-311 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 20 23:40:52.670: INFO: stderr: "I0120 23:40:52.417277 221 log.go:172] (0xc000aa4bb0) (0xc00073e6e0) Create stream\nI0120 23:40:52.417494 221 log.go:172] (0xc000aa4bb0) (0xc00073e6e0) Stream added, broadcasting: 1\nI0120 23:40:52.420266 221 log.go:172] (0xc000aa4bb0) Reply frame received for 1\nI0120 23:40:52.420319 221 log.go:172] (0xc000aa4bb0) (0xc00081a000) Create stream\nI0120 23:40:52.420330 221 log.go:172] (0xc000aa4bb0) (0xc00081a000) Stream added, broadcasting: 3\nI0120 23:40:52.421242 221 log.go:172] (0xc000aa4bb0) Reply frame received for 3\nI0120 23:40:52.421264 221 log.go:172] (0xc000aa4bb0) (0xc00073e780) Create stream\nI0120 23:40:52.421270 221 log.go:172] (0xc000aa4bb0) (0xc00073e780) Stream added, broadcasting: 5\nI0120 23:40:52.422819 221 log.go:172] (0xc000aa4bb0) Reply frame received for 5\nI0120 23:40:52.491036 221 log.go:172] (0xc000aa4bb0) Data frame received for 5\nI0120 23:40:52.491297 221 log.go:172] (0xc00073e780) (5) Data frame handling\nI0120 23:40:52.491369 221 log.go:172] (0xc00073e780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0120 23:40:52.512999 221 log.go:172] (0xc000aa4bb0) Data frame received for 3\nI0120 23:40:52.513131 221 log.go:172] (0xc00081a000) (3) Data frame handling\nI0120 23:40:52.513160 221 log.go:172] (0xc00081a000) (3) Data frame sent\nI0120 23:40:52.654368 221 log.go:172] (0xc000aa4bb0) Data frame received for 1\nI0120 23:40:52.654629 221 log.go:172] (0xc000aa4bb0) (0xc00081a000) Stream removed, broadcasting: 3\nI0120 23:40:52.654759 221 log.go:172] (0xc00073e6e0) (1) Data frame handling\nI0120 23:40:52.654781 221 log.go:172] (0xc000aa4bb0) (0xc00073e780) Stream removed, broadcasting: 5\nI0120 23:40:52.654793 221 log.go:172] (0xc00073e6e0) (1) Data frame sent\nI0120 23:40:52.654809 221 log.go:172] (0xc000aa4bb0) (0xc00073e6e0) Stream removed, broadcasting: 1\nI0120 23:40:52.654842 221 log.go:172] (0xc000aa4bb0) Go away received\nI0120 23:40:52.657077 221 log.go:172] (0xc000aa4bb0) (0xc00073e6e0) Stream removed, broadcasting: 1\nI0120 23:40:52.657095 221 log.go:172] (0xc000aa4bb0) (0xc00081a000) Stream removed, broadcasting: 3\nI0120 23:40:52.657099 221 log.go:172] (0xc000aa4bb0) (0xc00073e780) Stream removed, broadcasting: 5\n"
Jan 20 23:40:52.670: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 20 23:40:52.670: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 20 23:41:02.713: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Jan 20 23:41:12.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-311 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 20 23:41:13.158: INFO: stderr: "I0120 23:41:13.004455 242 log.go:172] (0xc0000fa2c0) (0xc000934000) Create stream\nI0120 23:41:13.004693 242 log.go:172] (0xc0000fa2c0) (0xc000934000) Stream added, broadcasting: 1\nI0120 23:41:13.008604 242 log.go:172] (0xc0000fa2c0) Reply frame received for 1\nI0120 23:41:13.008669 242 log.go:172] (0xc0000fa2c0) (0xc000505720) Create stream\nI0120 23:41:13.008679 242 log.go:172] (0xc0000fa2c0) (0xc000505720) Stream added, broadcasting: 3\nI0120 23:41:13.010349 242 log.go:172] (0xc0000fa2c0) Reply frame received for 3\nI0120 23:41:13.010413 242 log.go:172] (0xc0000fa2c0) (0xc0009340a0) Create stream\nI0120 23:41:13.010437 242 log.go:172] (0xc0000fa2c0) (0xc0009340a0) Stream added, broadcasting: 5\nI0120 23:41:13.012868 242 log.go:172] (0xc0000fa2c0) Reply frame received for 5\nI0120 23:41:13.077784 242 log.go:172] (0xc0000fa2c0) Data frame received for 5\nI0120 23:41:13.077931 242 log.go:172] (0xc0009340a0) (5) Data frame handling\nI0120 23:41:13.077974 242 log.go:172] (0xc0009340a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0120 23:41:13.078068 242 log.go:172] (0xc0000fa2c0) Data frame received for 3\nI0120 23:41:13.078078 242 log.go:172] (0xc000505720) (3) Data frame handling\nI0120 23:41:13.078090 242 log.go:172] (0xc000505720) (3) Data frame sent\nI0120 23:41:13.144254 242 log.go:172] (0xc0000fa2c0) Data frame received for 1\nI0120 23:41:13.144534 242 log.go:172] (0xc0000fa2c0) (0xc000505720) Stream removed, broadcasting: 3\nI0120 23:41:13.144671 242 log.go:172] (0xc000934000) (1) Data frame handling\nI0120 23:41:13.144698 242 log.go:172] (0xc000934000) (1) Data frame sent\nI0120 23:41:13.144718 242 log.go:172] (0xc0000fa2c0) (0xc000934000) Stream removed, broadcasting: 1\nI0120 23:41:13.144745 242 log.go:172] (0xc0000fa2c0) (0xc0009340a0) Stream removed, broadcasting: 5\nI0120 23:41:13.144820 242 log.go:172] (0xc0000fa2c0) Go away received\nI0120 23:41:13.146462 242 log.go:172] (0xc0000fa2c0) (0xc000934000) Stream removed, broadcasting: 1\nI0120 23:41:13.146494 242 log.go:172] (0xc0000fa2c0) (0xc000505720) Stream removed, broadcasting: 3\nI0120 23:41:13.146506 242 log.go:172] (0xc0000fa2c0) (0xc0009340a0) Stream removed, broadcasting: 5\n"
Jan 20 23:41:13.158: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 20 23:41:13.158: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 20 23:41:23.186: INFO: Waiting for StatefulSet statefulset-311/ss2 to complete update
Jan 20 23:41:23.186: INFO: Waiting for Pod statefulset-311/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 20 23:41:23.186: INFO: Waiting for Pod statefulset-311/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 20 23:41:33.202: INFO: Waiting for StatefulSet statefulset-311/ss2 to complete update
Jan 20 23:41:33.202: INFO: Waiting for Pod statefulset-311/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 20 23:41:33.202: INFO: Waiting for Pod statefulset-311/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 20 23:41:43.209: INFO: Waiting for StatefulSet statefulset-311/ss2 to complete update
Jan 20 23:41:43.209: INFO: Waiting for Pod statefulset-311/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 20 23:41:53.213: INFO: Waiting for StatefulSet statefulset-311/ss2 to complete update
Jan 20 23:41:53.213: INFO: Waiting for Pod statefulset-311/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Rolling back to a previous revision
Jan 20 23:42:03.202: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-311 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 20 23:42:03.686: INFO: stderr: "I0120 23:42:03.443500 264 log.go:172] (0xc0009311e0) (0xc0008be3c0) Create stream\nI0120 23:42:03.443798 264 log.go:172] (0xc0009311e0) (0xc0008be3c0) Stream added, broadcasting: 1\nI0120 23:42:03.457828 264 log.go:172] (0xc0009311e0) Reply frame received for 1\nI0120 23:42:03.457974 264 log.go:172] (0xc0009311e0) (0xc00039a6e0) Create stream\nI0120 23:42:03.457999 264 log.go:172] (0xc0009311e0) (0xc00039a6e0) Stream added, broadcasting: 3\nI0120 23:42:03.460144 264 log.go:172] (0xc0009311e0) Reply frame received for 3\nI0120 23:42:03.460210 264 log.go:172] (0xc0009311e0) (0xc000461860) Create stream\nI0120 23:42:03.460223 264 log.go:172] (0xc0009311e0) (0xc000461860) Stream added, broadcasting: 5\nI0120 23:42:03.461546 264 log.go:172] (0xc0009311e0) Reply frame received for 5\nI0120 23:42:03.543605 264 log.go:172] (0xc0009311e0) Data frame received for 5\nI0120 23:42:03.543729 264 log.go:172] (0xc000461860) (5) Data frame handling\nI0120 23:42:03.543790 264 log.go:172] (0xc000461860) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0120 23:42:03.580751 264 log.go:172] (0xc0009311e0) Data frame received for 3\nI0120 23:42:03.580815 264 log.go:172] (0xc00039a6e0) (3) Data frame handling\nI0120 23:42:03.580862 264 log.go:172] (0xc00039a6e0) (3) Data frame sent\nI0120 23:42:03.674660 264 log.go:172] (0xc0009311e0) Data frame received for 1\nI0120 23:42:03.674773 264 log.go:172] (0xc0009311e0) (0xc000461860) Stream removed, broadcasting: 5\nI0120 23:42:03.674889 264 log.go:172] (0xc0008be3c0) (1) Data frame handling\nI0120 23:42:03.674918 264 log.go:172] (0xc0009311e0) (0xc00039a6e0) Stream removed, broadcasting: 3\nI0120 23:42:03.674967 264 log.go:172] (0xc0008be3c0) (1) Data frame sent\nI0120 23:42:03.674981 264 log.go:172] (0xc0009311e0) (0xc0008be3c0) Stream removed, broadcasting: 1\nI0120 23:42:03.674997 264 log.go:172] (0xc0009311e0) Go away received\nI0120 23:42:03.676543 264 log.go:172] (0xc0009311e0) (0xc0008be3c0) Stream removed, broadcasting: 1\nI0120 23:42:03.676839 264 log.go:172] (0xc0009311e0) (0xc00039a6e0) Stream removed, broadcasting: 3\nI0120 23:42:03.676857 264 log.go:172] (0xc0009311e0) (0xc000461860) Stream removed, broadcasting: 5\n"
Jan 20 23:42:03.687: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 20 23:42:03.687: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
Jan 20 23:42:13.732: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Jan 20 23:42:23.844: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-311 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 20 23:42:24.273: INFO: stderr: "I0120 23:42:24.091761 284 log.go:172] (0xc000106420) (0xc000548000) Create stream\nI0120 23:42:24.092114 284 log.go:172] (0xc000106420) (0xc000548000) Stream added, broadcasting: 1\nI0120 23:42:24.096835 284 log.go:172] (0xc000106420) Reply frame received for 1\nI0120 23:42:24.096903 284 log.go:172] (0xc000106420) (0xc0005480a0) Create stream\nI0120 23:42:24.096912 284 log.go:172] (0xc000106420) (0xc0005480a0) Stream added, broadcasting: 3\nI0120 23:42:24.097896 284 log.go:172] (0xc000106420) Reply frame received for 3\nI0120 23:42:24.097927 284 log.go:172] (0xc000106420) (0xc00078a1e0) Create stream\nI0120 23:42:24.097938 284 log.go:172] (0xc000106420) (0xc00078a1e0) Stream added, broadcasting: 5\nI0120 23:42:24.099697 284 log.go:172] (0xc000106420) Reply frame received for 5\nI0120 23:42:24.187635 284 log.go:172] (0xc000106420) Data frame received for 5\nI0120 23:42:24.187890 284 log.go:172] (0xc00078a1e0) (5) Data frame handling\nI0120 23:42:24.187923 284 log.go:172] (0xc00078a1e0) (5) Data frame sent\nI0120 23:42:24.187935 284 log.go:172] (0xc000106420) Data frame received for 5\nI0120 23:42:24.187952 284 log.go:172] (0xc00078a1e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0120 23:42:24.188228 284 log.go:172] (0xc000106420) Data frame received for 3\nI0120 23:42:24.188548 284 log.go:172] (0xc0005480a0) (3) Data frame handling\nI0120 23:42:24.188586 284 log.go:172] (0xc0005480a0) (3) Data frame sent\nI0120 23:42:24.188706 284 log.go:172] (0xc00078a1e0) (5) Data frame sent\nI0120 23:42:24.261442 284 log.go:172] (0xc000106420) Data frame received for 1\nI0120 23:42:24.261501 284 log.go:172] (0xc000548000) (1) Data frame handling\nI0120 23:42:24.261519 284 log.go:172] (0xc000548000) (1) Data frame sent\nI0120 23:42:24.261881 284 log.go:172] (0xc000106420) (0xc000548000) Stream removed, broadcasting: 1\nI0120 23:42:24.263313 284 log.go:172] (0xc000106420) (0xc0005480a0) Stream removed, broadcasting: 3\nI0120 23:42:24.263418 284 log.go:172] (0xc000106420) (0xc00078a1e0) Stream removed, broadcasting: 5\nI0120 23:42:24.263440 284 log.go:172] (0xc000106420) Go away received\nI0120 23:42:24.263626 284 log.go:172] (0xc000106420) (0xc000548000) Stream removed, broadcasting: 1\nI0120 23:42:24.263652 284 log.go:172] (0xc000106420) (0xc0005480a0) Stream removed, broadcasting: 3\nI0120 23:42:24.263659 284 log.go:172] (0xc000106420) (0xc00078a1e0) Stream removed, broadcasting: 5\n"
Jan 20 23:42:24.273: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 20 23:42:24.274: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
Jan 20 23:42:34.307: INFO: Waiting for StatefulSet statefulset-311/ss2 to complete update
Jan 20 23:42:34.307: INFO: Waiting for Pod statefulset-311/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 20 23:42:34.307: INFO: Waiting for Pod statefulset-311/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 20 23:42:44.323: INFO: Waiting for StatefulSet statefulset-311/ss2 to complete update
Jan 20 23:42:44.323: INFO: Waiting for Pod statefulset-311/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 20 23:42:44.323: INFO: Waiting for Pod statefulset-311/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 20 23:42:54.324: INFO: Waiting for StatefulSet statefulset-311/ss2 to complete update
Jan 20 23:42:54.324: INFO: Waiting for Pod statefulset-311/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 20 23:43:04.322: INFO: Waiting for StatefulSet statefulset-311/ss2 to complete update
Jan 20 23:43:04.322: INFO: Waiting for Pod statefulset-311/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Jan 20 23:43:14.328: INFO: Waiting for StatefulSet statefulset-311/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 20 23:43:24.325: INFO: Deleting all statefulset in ns statefulset-311
Jan 20 23:43:24.331: INFO: Scaling statefulset ss2 to 0
Jan 20 23:43:44.400: INFO: Waiting for statefulset status.replicas updated to 0
Jan 20 23:43:44.404: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 20 23:43:44.482: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-311" for this suite.
• [SLOW TEST:202.613 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":7,"skipped":112,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 20 23:43:44.498: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 20 23:43:44.642: INFO: Waiting up to 5m0s for pod "pod-e70f5be1-ab77-4a65-a385-e266bc92edcf" in namespace "emptydir-5385" to be "success or failure"
Jan 20 23:43:44.663: INFO: Pod "pod-e70f5be1-ab77-4a65-a385-e266bc92edcf": Phase="Pending", Reason="", readiness=false. Elapsed: 20.559774ms
Jan 20 23:43:46.669: INFO: Pod "pod-e70f5be1-ab77-4a65-a385-e266bc92edcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026975462s
Jan 20 23:43:48.674: INFO: Pod "pod-e70f5be1-ab77-4a65-a385-e266bc92edcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032009294s
Jan 20 23:43:50.698: INFO: Pod "pod-e70f5be1-ab77-4a65-a385-e266bc92edcf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055204183s
Jan 20 23:43:52.747: INFO: Pod "pod-e70f5be1-ab77-4a65-a385-e266bc92edcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10426836s
STEP: Saw pod success
Jan 20 23:43:52.747: INFO: Pod "pod-e70f5be1-ab77-4a65-a385-e266bc92edcf" satisfied condition "success or failure"
Jan 20 23:43:52.755: INFO: Trying to get logs from node jerma-node pod pod-e70f5be1-ab77-4a65-a385-e266bc92edcf container test-container:
STEP: delete the pod
Jan 20 23:43:52.839: INFO: Waiting for pod pod-e70f5be1-ab77-4a65-a385-e266bc92edcf to disappear
Jan 20 23:43:52.883: INFO: Pod pod-e70f5be1-ab77-4a65-a385-e266bc92edcf no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 20 23:43:52.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5385" for this suite.
• [SLOW TEST:8.395 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":8,"skipped":125,"failed":0}
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 20 23:43:52.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 20 23:43:53.650: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:0, UpdatedReplicas:0, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160633, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160633, loc:(*time.Location)(0x7d7cf00)}}, Reason:"NewReplicaSetCreated", Message:"Created new replica set \"sample-webhook-deployment-5f65f8c764\""}, v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160633, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160633, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}}, CollisionCount:(*int32)(nil)}
Jan 20 23:43:55.661: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160633, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160633, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160633, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160633, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 23:43:57.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160633, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160633, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160633, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160633, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 20 23:43:59.661: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160633, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160633, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160633, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160633, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 20 23:44:02.705: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook
STEP: create a namespace that bypass the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 20 23:44:13.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-749" for this suite.
STEP: Destroying namespace "webhook-749-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101
• [SLOW TEST:20.460 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":9,"skipped":127,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 20 23:44:13.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on tmpfs
Jan 20 23:44:13.575: INFO: Waiting up to 5m0s for pod "pod-2b74ba85-b36f-4813-af8b-da618d9df86e" in namespace "emptydir-7815" to be "success or failure"
Jan 20 23:44:13.622: INFO: Pod "pod-2b74ba85-b36f-4813-af8b-da618d9df86e": Phase="Pending", Reason="", readiness=false. Elapsed: 46.310574ms
Jan 20 23:44:15.628: INFO: Pod "pod-2b74ba85-b36f-4813-af8b-da618d9df86e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052319155s
Jan 20 23:44:17.636: INFO: Pod "pod-2b74ba85-b36f-4813-af8b-da618d9df86e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060904033s
Jan 20 23:44:19.646: INFO: Pod "pod-2b74ba85-b36f-4813-af8b-da618d9df86e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070139353s
Jan 20 23:44:21.658: INFO: Pod "pod-2b74ba85-b36f-4813-af8b-da618d9df86e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082531186s
Jan 20 23:44:23.666: INFO: Pod "pod-2b74ba85-b36f-4813-af8b-da618d9df86e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.09031015s
STEP: Saw pod success
Jan 20 23:44:23.666: INFO: Pod "pod-2b74ba85-b36f-4813-af8b-da618d9df86e" satisfied condition "success or failure"
Jan 20 23:44:23.672: INFO: Trying to get logs from node jerma-node pod pod-2b74ba85-b36f-4813-af8b-da618d9df86e container test-container:
STEP: delete the pod
Jan 20 23:44:23.754: INFO: Waiting for pod pod-2b74ba85-b36f-4813-af8b-da618d9df86e to disappear
Jan 20 23:44:23.765: INFO: Pod pod-2b74ba85-b36f-4813-af8b-da618d9df86e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 20 23:44:23.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7815" for this suite.
• [SLOW TEST:10.478 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":155,"failed":0} SSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:44:23.833: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating a service nodeport-service with the type=NodePort in namespace services-4022 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4022 STEP: creating replication controller externalsvc in namespace services-4022 I0120 23:44:24.297381 8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4022, replica count: 2 I0120 23:44:27.348406 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0120 23:44:30.348927 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0120 23:44:33.349392 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0120 23:44:36.349866 8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName Jan 20 23:44:36.434: INFO: Creating new exec pod Jan 20 23:44:44.474: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4022 execpodjfjt8 -- /bin/sh -x -c nslookup nodeport-service' Jan 20 23:44:44.899: INFO: stderr: "I0120 23:44:44.717043 304 log.go:172] (0xc00091c000) (0xc00047d4a0) Create stream\nI0120 23:44:44.717196 304 log.go:172] (0xc00091c000) (0xc00047d4a0) Stream added, broadcasting: 1\nI0120 23:44:44.721572 304 log.go:172] (0xc00091c000) Reply frame received for 1\nI0120 23:44:44.721606 304 log.go:172] (0xc00091c000) (0xc0008b8000) Create stream\nI0120 23:44:44.721616 304 log.go:172] (0xc00091c000) (0xc0008b8000) Stream added, broadcasting: 3\nI0120 23:44:44.723152 304 log.go:172] (0xc00091c000) Reply frame received for 3\nI0120 23:44:44.723181 304 log.go:172] (0xc00091c000) (0xc0006a5b80) Create stream\nI0120 
23:44:44.723190 304 log.go:172] (0xc00091c000) (0xc0006a5b80) Stream added, broadcasting: 5\nI0120 23:44:44.724396 304 log.go:172] (0xc00091c000) Reply frame received for 5\nI0120 23:44:44.787584 304 log.go:172] (0xc00091c000) Data frame received for 5\nI0120 23:44:44.787636 304 log.go:172] (0xc0006a5b80) (5) Data frame handling\nI0120 23:44:44.787663 304 log.go:172] (0xc0006a5b80) (5) Data frame sent\n+ nslookup nodeport-service\nI0120 23:44:44.802097 304 log.go:172] (0xc00091c000) Data frame received for 3\nI0120 23:44:44.802169 304 log.go:172] (0xc0008b8000) (3) Data frame handling\nI0120 23:44:44.802200 304 log.go:172] (0xc0008b8000) (3) Data frame sent\nI0120 23:44:44.805058 304 log.go:172] (0xc00091c000) Data frame received for 3\nI0120 23:44:44.805074 304 log.go:172] (0xc0008b8000) (3) Data frame handling\nI0120 23:44:44.805090 304 log.go:172] (0xc0008b8000) (3) Data frame sent\nI0120 23:44:44.889981 304 log.go:172] (0xc00091c000) Data frame received for 1\nI0120 23:44:44.890108 304 log.go:172] (0xc00091c000) (0xc0008b8000) Stream removed, broadcasting: 3\nI0120 23:44:44.890188 304 log.go:172] (0xc00047d4a0) (1) Data frame handling\nI0120 23:44:44.890225 304 log.go:172] (0xc00047d4a0) (1) Data frame sent\nI0120 23:44:44.890230 304 log.go:172] (0xc00091c000) (0xc00047d4a0) Stream removed, broadcasting: 1\nI0120 23:44:44.890292 304 log.go:172] (0xc00091c000) (0xc0006a5b80) Stream removed, broadcasting: 5\nI0120 23:44:44.890347 304 log.go:172] (0xc00091c000) Go away received\nI0120 23:44:44.891122 304 log.go:172] (0xc00091c000) (0xc00047d4a0) Stream removed, broadcasting: 1\nI0120 23:44:44.891131 304 log.go:172] (0xc00091c000) (0xc0008b8000) Stream removed, broadcasting: 3\nI0120 23:44:44.891135 304 log.go:172] (0xc00091c000) (0xc0006a5b80) Stream removed, broadcasting: 5\n" Jan 20 23:44:44.899: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4022.svc.cluster.local\tcanonical name = externalsvc.services-4022.svc.cluster.local.\nName:\texternalsvc.services-4022.svc.cluster.local\nAddress: 10.96.241.222\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4022, will wait for the garbage collector to delete the pods Jan 20 23:44:44.962: INFO: Deleting ReplicationController externalsvc took: 8.716939ms Jan 20 23:44:45.363: INFO: Terminating ReplicationController externalsvc pods took: 400.952562ms Jan 20 23:45:03.203: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:45:03.238: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4022" for this suite. 
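The Services test above flips an existing NodePort service to type ExternalName pointing at another in-cluster service's FQDN, then confirms via nslookup from an exec pod that the old name now resolves as a CNAME to externalsvc. A sketch of that type change with client-go — the signatures match the v1.17-era client in use here (newer client-go adds a context.Context argument), the service and namespace names are taken from the log, and clearing ClusterIP/Ports reflects that those fields do not apply to ExternalName services:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)

	ns := "services-4022"
	svc, err := c.CoreV1().Services(ns).Get("nodeport-service", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Switch the type and point at the other service's cluster FQDN.
	svc.Spec.Type = corev1.ServiceTypeExternalName
	svc.Spec.ExternalName = "externalsvc." + ns + ".svc.cluster.local"
	svc.Spec.ClusterIP = ""
	svc.Spec.Ports = nil
	if _, err := c.CoreV1().Services(ns).Update(svc); err != nil {
		panic(err)
	}
	fmt.Println("nodeport-service now resolves as a CNAME to externalsvc")
}

The nslookup output captured above ("canonical name = externalsvc.services-4022.svc.cluster.local.") is precisely what a successful conversion looks like from inside the cluster.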
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691 • [SLOW TEST:39.418 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":11,"skipped":158,"failed":0} SSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:45:03.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:150 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:45:03.425: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7212" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":12,"skipped":166,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:45:03.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-6999.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6999.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 20 23:45:21.701: INFO: DNS probes using dns-6999/dns-test-94ec333b-00d4-4e21-bf02-48a6384823cf succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:45:21.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-6999" for this suite. • [SLOW TEST:18.385 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":13,"skipped":180,"failed":0} SSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:45:21.838: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name configmap-test-upd-156ddd0d-fd0c-4875-bc87-1b8d852de502 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-156ddd0d-fd0c-4875-bc87-1b8d852de502 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:46:42.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5975" for this suite. 
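The ConfigMap run above mounts a ConfigMap as a volume, updates the ConfigMap in place, and then waits for the kubelet to project the new data into the running pod — the long wall time (23:45:21 to 23:46:42) reflects that this propagation is eventual, paced by the kubelet's sync loop and its ConfigMap cache. A sketch of the update step with the same era's client-go; names come from the log, while the data key and value are illustrative assumptions:

package main

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	c := kubernetes.NewForConfigOrDie(cfg)

	ns, name := "configmap-5975", "configmap-test-upd-156ddd0d-fd0c-4875-bc87-1b8d852de502"
	cm, err := c.CoreV1().ConfigMaps(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Change a key in place; pods mounting this ConfigMap as a volume see the
	// new contents only after the kubelet re-syncs, not instantly.
	cm.Data["data-1"] = "value-2" // key and value are assumptions, not from the log
	if _, err := c.CoreV1().ConfigMaps(ns).Update(cm); err != nil {
		panic(err)
	}
}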
• [SLOW TEST:81.095 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":183,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:46:42.933: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 20 23:46:43.055: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dd9e246f-f66c-4fe3-8f41-2b25766bb5c6" in namespace "projected-8715" to be "success or failure" Jan 20 23:46:43.118: INFO: Pod "downwardapi-volume-dd9e246f-f66c-4fe3-8f41-2b25766bb5c6": Phase="Pending", Reason="", readiness=false. Elapsed: 62.895176ms Jan 20 23:46:45.123: INFO: Pod "downwardapi-volume-dd9e246f-f66c-4fe3-8f41-2b25766bb5c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06807511s Jan 20 23:46:47.131: INFO: Pod "downwardapi-volume-dd9e246f-f66c-4fe3-8f41-2b25766bb5c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075811435s Jan 20 23:46:49.139: INFO: Pod "downwardapi-volume-dd9e246f-f66c-4fe3-8f41-2b25766bb5c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083484924s Jan 20 23:46:51.148: INFO: Pod "downwardapi-volume-dd9e246f-f66c-4fe3-8f41-2b25766bb5c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.092768344s Jan 20 23:46:53.154: INFO: Pod "downwardapi-volume-dd9e246f-f66c-4fe3-8f41-2b25766bb5c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.099047945s STEP: Saw pod success Jan 20 23:46:53.155: INFO: Pod "downwardapi-volume-dd9e246f-f66c-4fe3-8f41-2b25766bb5c6" satisfied condition "success or failure" Jan 20 23:46:53.158: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-dd9e246f-f66c-4fe3-8f41-2b25766bb5c6 container client-container: STEP: delete the pod Jan 20 23:46:53.267: INFO: Waiting for pod downwardapi-volume-dd9e246f-f66c-4fe3-8f41-2b25766bb5c6 to disappear Jan 20 23:46:53.273: INFO: Pod downwardapi-volume-dd9e246f-f66c-4fe3-8f41-2b25766bb5c6 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:46:53.274: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8715" for this suite. 
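The projected downwardAPI case above exposes the container's own memory request as a file in a projected volume; the container prints it and the test reads the container log. The relevant volume wiring, sketched with the Go API types — container name follows the log's "client-container", while the image, file path, and request size are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "memory_request", // assumed file name
									// resourceFieldRef projects the container's own
									// requests.memory into the mounted file.
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.memory",
									},
								}},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // assumption
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}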
• [SLOW TEST:10.356 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":190,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:46:53.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3243.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-3243.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3243.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-3243.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-3243.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3243.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers Jan 20 23:47:05.612: INFO: DNS probes using dns-3243/dns-test-7a651929-0919-410b-ae97-45090c398083 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:47:05.674: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3243" for this suite. 
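The dig/getent loops above probe two things: that /etc/hosts carries entries for the pod's own hostname, and that the pod's dashed A record resolves over both UDP and TCP. The awk one-liner in the probe simply rewrites the pod IP into that record's first label; the same transformation in Go, assuming namespace dns-3243 and the default cluster.local domain:

package main

import (
	"fmt"
	"strings"
)

// podARecord turns a pod IP into the cluster DNS A-record name the probe
// queries, e.g. 10.44.0.1 in namespace dns-3243 becomes
// 10-44-0-1.dns-3243.pod.cluster.local.
func podARecord(podIP, namespace string) string {
	return fmt.Sprintf("%s.%s.pod.cluster.local",
		strings.ReplaceAll(podIP, ".", "-"), namespace)
}

func main() {
	fmt.Println(podARecord("10.44.0.1", "dns-3243"))
}

Each successful probe writes an OK marker file into /results, which is what "looking for the results for each expected name from probers" checks.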
• [SLOW TEST:12.449 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":16,"skipped":198,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:47:05.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating secret secrets-8953/secret-test-279dc350-6b73-4af7-9afd-fad0b4bc14af STEP: Creating a pod to test consume secrets Jan 20 23:47:06.000: INFO: Waiting up to 5m0s for pod "pod-configmaps-a00faf04-1d4d-4760-8a3f-ea6787561ead" in namespace "secrets-8953" to be "success or failure" Jan 20 23:47:06.009: INFO: Pod "pod-configmaps-a00faf04-1d4d-4760-8a3f-ea6787561ead": Phase="Pending", Reason="", readiness=false. Elapsed: 9.399677ms Jan 20 23:47:08.014: INFO: Pod "pod-configmaps-a00faf04-1d4d-4760-8a3f-ea6787561ead": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014485903s Jan 20 23:47:10.037: INFO: Pod "pod-configmaps-a00faf04-1d4d-4760-8a3f-ea6787561ead": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036899795s Jan 20 23:47:12.047: INFO: Pod "pod-configmaps-a00faf04-1d4d-4760-8a3f-ea6787561ead": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047387959s Jan 20 23:47:14.052: INFO: Pod "pod-configmaps-a00faf04-1d4d-4760-8a3f-ea6787561ead": Phase="Pending", Reason="", readiness=false. Elapsed: 8.052339723s Jan 20 23:47:16.060: INFO: Pod "pod-configmaps-a00faf04-1d4d-4760-8a3f-ea6787561ead": Phase="Pending", Reason="", readiness=false. Elapsed: 10.06014429s Jan 20 23:47:18.069: INFO: Pod "pod-configmaps-a00faf04-1d4d-4760-8a3f-ea6787561ead": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.068842204s STEP: Saw pod success Jan 20 23:47:18.069: INFO: Pod "pod-configmaps-a00faf04-1d4d-4760-8a3f-ea6787561ead" satisfied condition "success or failure" Jan 20 23:47:18.073: INFO: Trying to get logs from node jerma-node pod pod-configmaps-a00faf04-1d4d-4760-8a3f-ea6787561ead container env-test: STEP: delete the pod Jan 20 23:47:18.174: INFO: Waiting for pod pod-configmaps-a00faf04-1d4d-4760-8a3f-ea6787561ead to disappear Jan 20 23:47:18.196: INFO: Pod pod-configmaps-a00faf04-1d4d-4760-8a3f-ea6787561ead no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:47:18.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8953" for this suite. 
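The Secrets case above injects a secret key into a container's environment and verifies the container's output; unlike volume-mounted secrets, environment variables are fixed at container start and do not update if the Secret changes. A sketch of the env wiring with the Go API types — the secret name is from the log, while the image, key, and variable name are assumptions:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-env-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "env-test",
				Image:   "busybox", // assumption
				Command: []string{"sh", "-c", "echo $SECRET_DATA"},
				Env: []corev1.EnvVar{{
					Name: "SECRET_DATA", // assumed variable name
					ValueFrom: &corev1.EnvVarSource{
						// secretKeyRef pulls one key out of the named Secret.
						SecretKeyRef: &corev1.SecretKeySelector{
							LocalObjectReference: corev1.LocalObjectReference{
								Name: "secret-test-279dc350-6b73-4af7-9afd-fad0b4bc14af",
							},
							Key: "data-1", // assumed key
						},
					},
				}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}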
• [SLOW TEST:12.472 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":212,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:47:18.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:47:29.512: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4614" for this suite. • [SLOW TEST:11.316 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance]","total":278,"completed":18,"skipped":230,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:47:29.531: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:331 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the initial replication controller Jan 20 23:47:29.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4800' Jan 20 23:47:30.940: INFO: stderr: "" Jan 20 23:47:30.940: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 20 23:47:30.941: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4800' Jan 20 23:47:31.124: INFO: stderr: "" Jan 20 23:47:31.125: INFO: stdout: "update-demo-nautilus-5j7qs update-demo-nautilus-jsk49 " Jan 20 23:47:31.125: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5j7qs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4800' Jan 20 23:47:31.260: INFO: stderr: "" Jan 20 23:47:31.260: INFO: stdout: "" Jan 20 23:47:31.260: INFO: update-demo-nautilus-5j7qs is created but not running Jan 20 23:47:36.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4800' Jan 20 23:47:36.923: INFO: stderr: "" Jan 20 23:47:36.923: INFO: stdout: "update-demo-nautilus-5j7qs update-demo-nautilus-jsk49 " Jan 20 23:47:36.924: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5j7qs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4800' Jan 20 23:47:37.728: INFO: stderr: "" Jan 20 23:47:37.729: INFO: stdout: "" Jan 20 23:47:37.729: INFO: update-demo-nautilus-5j7qs is created but not running Jan 20 23:47:42.731: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4800' Jan 20 23:47:42.883: INFO: stderr: "" Jan 20 23:47:42.883: INFO: stdout: "update-demo-nautilus-5j7qs update-demo-nautilus-jsk49 " Jan 20 23:47:42.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5j7qs -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4800' Jan 20 23:47:43.014: INFO: stderr: "" Jan 20 23:47:43.014: INFO: stdout: "true" Jan 20 23:47:43.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5j7qs -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4800' Jan 20 23:47:43.101: INFO: stderr: "" Jan 20 23:47:43.101: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 20 23:47:43.101: INFO: validating pod update-demo-nautilus-5j7qs Jan 20 23:47:43.107: INFO: got data: { "image": "nautilus.jpg" } Jan 20 23:47:43.107: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . Jan 20 23:47:43.107: INFO: update-demo-nautilus-5j7qs is verified up and running Jan 20 23:47:43.107: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsk49 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4800' Jan 20 23:47:43.206: INFO: stderr: "" Jan 20 23:47:43.206: INFO: stdout: "true" Jan 20 23:47:43.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jsk49 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4800' Jan 20 23:47:43.296: INFO: stderr: "" Jan 20 23:47:43.296: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" Jan 20 23:47:43.296: INFO: validating pod update-demo-nautilus-jsk49 Jan 20 23:47:43.300: INFO: got data: { "image": "nautilus.jpg" } Jan 20 23:47:43.300: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . 
Jan 20 23:47:43.300: INFO: update-demo-nautilus-jsk49 is verified up and running STEP: rolling-update to new replication controller Jan 20 23:47:43.304: INFO: scanned /root for discovery docs: Jan 20 23:47:43.304: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4800' Jan 20 23:48:12.725: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" Jan 20 23:48:12.725: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. Jan 20 23:48:12.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4800' Jan 20 23:48:12.943: INFO: stderr: "" Jan 20 23:48:12.943: INFO: stdout: "update-demo-kitten-b7nff update-demo-kitten-mt9hd update-demo-nautilus-jsk49 " STEP: Replicas for name=update-demo: expected=2 actual=3 Jan 20 23:48:17.944: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4800' Jan 20 23:48:18.083: INFO: stderr: "" Jan 20 23:48:18.083: INFO: stdout: "update-demo-kitten-b7nff update-demo-kitten-mt9hd " Jan 20 23:48:18.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-b7nff -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4800' Jan 20 23:48:18.258: INFO: stderr: "" Jan 20 23:48:18.258: INFO: stdout: "true" Jan 20 23:48:18.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-b7nff -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4800' Jan 20 23:48:18.355: INFO: stderr: "" Jan 20 23:48:18.355: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 20 23:48:18.355: INFO: validating pod update-demo-kitten-b7nff Jan 20 23:48:18.363: INFO: got data: { "image": "kitten.jpg" } Jan 20 23:48:18.363: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 20 23:48:18.363: INFO: update-demo-kitten-b7nff is verified up and running Jan 20 23:48:18.363: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mt9hd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4800' Jan 20 23:48:18.481: INFO: stderr: "" Jan 20 23:48:18.481: INFO: stdout: "true" Jan 20 23:48:18.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-mt9hd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4800' Jan 20 23:48:18.596: INFO: stderr: "" Jan 20 23:48:18.596: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" Jan 20 23:48:18.596: INFO: validating pod update-demo-kitten-mt9hd Jan 20 23:48:18.602: INFO: got data: { "image": "kitten.jpg" } Jan 20 23:48:18.602: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . Jan 20 23:48:18.602: INFO: update-demo-kitten-mt9hd is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:48:18.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4800" for this suite. • [SLOW TEST:49.079 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":19,"skipped":330,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:48:18.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:43 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice Jan 20 23:48:33.968: INFO: start=2020-01-20 23:48:28.941345759 +0000 UTC m=+565.977641025, now=2020-01-20 23:48:33.968799391 +0000 UTC m=+571.005094658, kubelet pod: 
{"metadata":{"name":"pod-submit-remove-0be9dd38-c2a6-41c5-a86d-b7903904bc31","namespace":"pods-9039","selfLink":"/api/v1/namespaces/pods-9039/pods/pod-submit-remove-0be9dd38-c2a6-41c5-a86d-b7903904bc31","uid":"e2d68579-9257-4ed1-92fa-58d15575eab3","resourceVersion":"3285928","creationTimestamp":"2020-01-20T23:48:18Z","deletionTimestamp":"2020-01-20T23:48:58Z","deletionGracePeriodSeconds":30,"labels":{"name":"foo","time":"703150308"},"annotations":{"kubernetes.io/config.seen":"2020-01-20T23:48:18.734424468Z","kubernetes.io/config.source":"api"}},"spec":{"volumes":[{"name":"default-token-c7xmp","secret":{"secretName":"default-token-c7xmp","defaultMode":420}}],"containers":[{"name":"agnhost","image":"gcr.io/kubernetes-e2e-test-images/agnhost:2.8","args":["pause"],"resources":{},"volumeMounts":[{"name":"default-token-c7xmp","readOnly":true,"mountPath":"/var/run/secrets/kubernetes.io/serviceaccount"}],"terminationMessagePath":"/dev/termination-log","terminationMessagePolicy":"File","imagePullPolicy":"IfNotPresent"}],"restartPolicy":"Always","terminationGracePeriodSeconds":30,"dnsPolicy":"ClusterFirst","serviceAccountName":"default","serviceAccount":"default","nodeName":"jerma-node","securityContext":{},"schedulerName":"default-scheduler","tolerations":[{"key":"node.kubernetes.io/not-ready","operator":"Exists","effect":"NoExecute","tolerationSeconds":300},{"key":"node.kubernetes.io/unreachable","operator":"Exists","effect":"NoExecute","tolerationSeconds":300}],"priority":0,"enableServiceLinks":true},"status":{"phase":"Running","conditions":[{"type":"Initialized","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-01-20T23:48:18Z"},{"type":"Ready","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-01-20T23:48:28Z"},{"type":"ContainersReady","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-01-20T23:48:28Z"},{"type":"PodScheduled","status":"True","lastProbeTime":null,"lastTransitionTime":"2020-01-20T23:48:18Z"}],"hostIP":"10.96.2.250","podIP":"10.44.0.1","podIPs":[{"ip":"10.44.0.1"}],"startTime":"2020-01-20T23:48:18Z","containerStatuses":[{"name":"agnhost","state":{"running":{"startedAt":"2020-01-20T23:48:25Z"}},"lastState":{},"ready":true,"restartCount":0,"image":"gcr.io/kubernetes-e2e-test-images/agnhost:2.8","imageID":"docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5","containerID":"docker://0a798e6db3ae3095bcde4e6be502e1e7ac6af57eb2253152f3dbb78713f7000a","started":true}],"qosClass":"BestEffort"}} Jan 20 23:48:38.961: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:48:38.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-9039" for this suite. 
• [SLOW TEST:20.370 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":20,"skipped":359,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:48:38.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 20 23:48:39.091: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties Jan 20 23:48:42.393: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5267 create -f -' Jan 20 23:48:45.410: INFO: stderr: "" Jan 20 23:48:45.410: INFO: stdout: "e2e-test-crd-publish-openapi-8525-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 20 23:48:45.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5267 delete e2e-test-crd-publish-openapi-8525-crds test-cr' Jan 20 23:48:45.619: INFO: stderr: "" Jan 20 23:48:45.619: INFO: stdout: "e2e-test-crd-publish-openapi-8525-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" Jan 20 23:48:45.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5267 apply -f -' Jan 20 23:48:46.072: INFO: stderr: "" Jan 20 23:48:46.072: INFO: stdout: "e2e-test-crd-publish-openapi-8525-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" Jan 20 23:48:46.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5267 delete e2e-test-crd-publish-openapi-8525-crds test-cr' Jan 20 23:48:46.199: INFO: stderr: "" Jan 20 23:48:46.199: INFO: stdout: "e2e-test-crd-publish-openapi-8525-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema Jan 20 23:48:46.199: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8525-crds' Jan 20 23:48:46.714: INFO: stderr: "" Jan 20 23:48:46.714: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8525-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:48:50.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5267" for this suite. • [SLOW TEST:11.263 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":21,"skipped":383,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:48:50.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:48:50.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-2835" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":22,"skipped":393,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:48:50.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test override all Jan 20 23:48:50.673: INFO: Waiting up to 5m0s for pod "client-containers-403c6543-6de3-431a-abbb-5d4102030993" in namespace "containers-7679" to be "success or failure" Jan 20 23:48:50.680: INFO: Pod "client-containers-403c6543-6de3-431a-abbb-5d4102030993": Phase="Pending", Reason="", readiness=false. Elapsed: 7.450774ms Jan 20 23:48:52.685: INFO: Pod "client-containers-403c6543-6de3-431a-abbb-5d4102030993": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012089631s Jan 20 23:48:54.693: INFO: Pod "client-containers-403c6543-6de3-431a-abbb-5d4102030993": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.020523109s Jan 20 23:48:56.703: INFO: Pod "client-containers-403c6543-6de3-431a-abbb-5d4102030993": Phase="Pending", Reason="", readiness=false. Elapsed: 6.030598204s Jan 20 23:48:58.710: INFO: Pod "client-containers-403c6543-6de3-431a-abbb-5d4102030993": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.037299648s STEP: Saw pod success Jan 20 23:48:58.710: INFO: Pod "client-containers-403c6543-6de3-431a-abbb-5d4102030993" satisfied condition "success or failure" Jan 20 23:48:58.714: INFO: Trying to get logs from node jerma-node pod client-containers-403c6543-6de3-431a-abbb-5d4102030993 container test-container: STEP: delete the pod Jan 20 23:48:58.766: INFO: Waiting for pod client-containers-403c6543-6de3-431a-abbb-5d4102030993 to disappear Jan 20 23:48:58.799: INFO: Pod client-containers-403c6543-6de3-431a-abbb-5d4102030993 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:48:58.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-7679" for this suite. • [SLOW TEST:8.299 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":433,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:48:58.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0777 on node default medium Jan 20 23:48:58.990: INFO: Waiting up to 5m0s for pod "pod-fd10b5eb-48a5-425d-b6e3-8da532395e88" in namespace "emptydir-6219" to be "success or failure" Jan 20 23:48:58.996: INFO: Pod "pod-fd10b5eb-48a5-425d-b6e3-8da532395e88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.251113ms Jan 20 23:49:01.004: INFO: Pod "pod-fd10b5eb-48a5-425d-b6e3-8da532395e88": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014221931s Jan 20 23:49:03.013: INFO: Pod "pod-fd10b5eb-48a5-425d-b6e3-8da532395e88": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022753378s Jan 20 23:49:05.021: INFO: Pod "pod-fd10b5eb-48a5-425d-b6e3-8da532395e88": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031275132s Jan 20 23:49:07.030: INFO: Pod "pod-fd10b5eb-48a5-425d-b6e3-8da532395e88": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 8.040522464s STEP: Saw pod success Jan 20 23:49:07.031: INFO: Pod "pod-fd10b5eb-48a5-425d-b6e3-8da532395e88" satisfied condition "success or failure" Jan 20 23:49:07.071: INFO: Trying to get logs from node jerma-node pod pod-fd10b5eb-48a5-425d-b6e3-8da532395e88 container test-container: STEP: delete the pod Jan 20 23:49:07.140: INFO: Waiting for pod pod-fd10b5eb-48a5-425d-b6e3-8da532395e88 to disappear Jan 20 23:49:07.157: INFO: Pod pod-fd10b5eb-48a5-425d-b6e3-8da532395e88 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:49:07.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6219" for this suite. • [SLOW TEST:8.382 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":465,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:49:07.207: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-9317 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 20 23:49:07.364: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 20 23:49:41.479: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-9317 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 23:49:41.479: INFO: >>> kubeConfig: /root/.kube/config I0120 23:49:41.535573 8 log.go:172] (0xc005270580) (0xc001733d60) Create stream I0120 23:49:41.535722 8 log.go:172] (0xc005270580) (0xc001733d60) Stream added, broadcasting: 1 I0120 23:49:41.541444 8 log.go:172] (0xc005270580) Reply frame received for 1 I0120 23:49:41.541552 8 log.go:172] (0xc005270580) (0xc001733e00) Create stream I0120 23:49:41.541585 8 log.go:172] (0xc005270580) (0xc001733e00) Stream added, broadcasting: 3 I0120 23:49:41.545268 8 log.go:172] (0xc005270580) Reply frame received for 3 I0120 23:49:41.545348 8 log.go:172] (0xc005270580) (0xc0016d0000) Create stream I0120 23:49:41.545377 8 log.go:172] (0xc005270580) (0xc0016d0000) Stream added, broadcasting: 5 I0120 23:49:41.549018 8 log.go:172] (0xc005270580) Reply frame received for 5 
I0120 23:49:41.654025 8 log.go:172] (0xc005270580) Data frame received for 3 I0120 23:49:41.654106 8 log.go:172] (0xc001733e00) (3) Data frame handling I0120 23:49:41.654141 8 log.go:172] (0xc001733e00) (3) Data frame sent I0120 23:49:41.750342 8 log.go:172] (0xc005270580) (0xc001733e00) Stream removed, broadcasting: 3 I0120 23:49:41.750803 8 log.go:172] (0xc005270580) Data frame received for 1 I0120 23:49:41.750893 8 log.go:172] (0xc001733d60) (1) Data frame handling I0120 23:49:41.750948 8 log.go:172] (0xc001733d60) (1) Data frame sent I0120 23:49:41.751020 8 log.go:172] (0xc005270580) (0xc001733d60) Stream removed, broadcasting: 1 I0120 23:49:41.751102 8 log.go:172] (0xc005270580) (0xc0016d0000) Stream removed, broadcasting: 5 I0120 23:49:41.751184 8 log.go:172] (0xc005270580) Go away received I0120 23:49:41.752330 8 log.go:172] (0xc005270580) (0xc001733d60) Stream removed, broadcasting: 1 I0120 23:49:41.752350 8 log.go:172] (0xc005270580) (0xc001733e00) Stream removed, broadcasting: 3 I0120 23:49:41.752366 8 log.go:172] (0xc005270580) (0xc0016d0000) Stream removed, broadcasting: 5 Jan 20 23:49:41.752: INFO: Waiting for responses: map[] Jan 20 23:49:41.757: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostname&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-9317 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 23:49:41.757: INFO: >>> kubeConfig: /root/.kube/config I0120 23:49:41.803262 8 log.go:172] (0xc005270b00) (0xc00297c0a0) Create stream I0120 23:49:41.803598 8 log.go:172] (0xc005270b00) (0xc00297c0a0) Stream added, broadcasting: 1 I0120 23:49:41.808624 8 log.go:172] (0xc005270b00) Reply frame received for 1 I0120 23:49:41.808766 8 log.go:172] (0xc005270b00) (0xc0016d0320) Create stream I0120 23:49:41.808789 8 log.go:172] (0xc005270b00) (0xc0016d0320) Stream added, broadcasting: 3 I0120 23:49:41.810676 8 log.go:172] (0xc005270b00) Reply frame received for 3 I0120 23:49:41.810771 8 log.go:172] (0xc005270b00) (0xc00297c1e0) Create stream I0120 23:49:41.810786 8 log.go:172] (0xc005270b00) (0xc00297c1e0) Stream added, broadcasting: 5 I0120 23:49:41.812843 8 log.go:172] (0xc005270b00) Reply frame received for 5 I0120 23:49:41.896146 8 log.go:172] (0xc005270b00) Data frame received for 3 I0120 23:49:41.896270 8 log.go:172] (0xc0016d0320) (3) Data frame handling I0120 23:49:41.896308 8 log.go:172] (0xc0016d0320) (3) Data frame sent I0120 23:49:41.992164 8 log.go:172] (0xc005270b00) Data frame received for 1 I0120 23:49:41.992760 8 log.go:172] (0xc005270b00) (0xc0016d0320) Stream removed, broadcasting: 3 I0120 23:49:41.992975 8 log.go:172] (0xc00297c0a0) (1) Data frame handling I0120 23:49:41.993034 8 log.go:172] (0xc00297c0a0) (1) Data frame sent I0120 23:49:41.993077 8 log.go:172] (0xc005270b00) (0xc00297c1e0) Stream removed, broadcasting: 5 I0120 23:49:41.993110 8 log.go:172] (0xc005270b00) (0xc00297c0a0) Stream removed, broadcasting: 1 I0120 23:49:41.993143 8 log.go:172] (0xc005270b00) Go away received I0120 23:49:41.993555 8 log.go:172] (0xc005270b00) (0xc00297c0a0) Stream removed, broadcasting: 1 I0120 23:49:41.993596 8 log.go:172] (0xc005270b00) (0xc0016d0320) Stream removed, broadcasting: 3 I0120 23:49:41.993660 8 log.go:172] (0xc005270b00) (0xc00297c1e0) Stream removed, broadcasting: 5 Jan 20 23:49:41.994: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:49:41.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-9317" for this suite. • [SLOW TEST:34.835 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [NodeConformance] [Conformance]","total":278,"completed":25,"skipped":466,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:49:42.043: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready Jan 20 23:49:43.436: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set Jan 20 23:49:45.455: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 23:49:48.673: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not 
have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 23:49:49.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 23:49:51.529: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} Jan 20 23:49:53.465: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715160983, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint Jan 20 23:49:56.583: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing 
validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:49:57.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4833" for this suite. STEP: Destroying namespace "webhook-4833-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101 • [SLOW TEST:15.273 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":26,"skipped":470,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:49:57.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: executing a command with run --rm and attach with stdin Jan 20 23:49:57.405: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9986 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' Jan 20 23:50:07.437: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0120 23:50:06.430383 778 log.go:172] (0xc0008ec6e0) (0xc000689b80) Create stream\nI0120 23:50:06.430790 778 log.go:172] (0xc0008ec6e0) (0xc000689b80) Stream added, broadcasting: 1\nI0120 23:50:06.436364 778 log.go:172] (0xc0008ec6e0) Reply frame received for 1\nI0120 23:50:06.436592 778 log.go:172] (0xc0008ec6e0) (0xc000689c20) Create stream\nI0120 23:50:06.436605 778 log.go:172] (0xc0008ec6e0) (0xc000689c20) Stream added, broadcasting: 3\nI0120 23:50:06.439047 778 log.go:172] (0xc0008ec6e0) Reply frame received for 3\nI0120 23:50:06.439183 778 log.go:172] (0xc0008ec6e0) (0xc0008da000) Create stream\nI0120 23:50:06.439219 778 log.go:172] (0xc0008ec6e0) (0xc0008da000) Stream added, broadcasting: 5\nI0120 23:50:06.442086 778 log.go:172] (0xc0008ec6e0) Reply frame received for 5\nI0120 23:50:06.442189 778 log.go:172] (0xc0008ec6e0) (0xc000a3a140) Create stream\nI0120 23:50:06.442207 778 log.go:172] (0xc0008ec6e0) (0xc000a3a140) Stream added, broadcasting: 7\nI0120 23:50:06.445119 778 log.go:172] (0xc0008ec6e0) Reply frame received for 7\nI0120 23:50:06.445677 778 log.go:172] (0xc000689c20) (3) Writing data frame\nI0120 23:50:06.445918 778 log.go:172] (0xc000689c20) (3) Writing data frame\nI0120 23:50:06.453659 778 log.go:172] (0xc0008ec6e0) Data frame received for 5\nI0120 23:50:06.453699 778 log.go:172] (0xc0008da000) (5) Data frame handling\nI0120 23:50:06.453744 778 log.go:172] (0xc0008da000) (5) Data frame sent\nI0120 23:50:06.454708 778 log.go:172] (0xc0008ec6e0) Data frame received for 5\nI0120 23:50:06.454732 778 log.go:172] (0xc0008da000) (5) Data frame handling\nI0120 23:50:06.454750 778 log.go:172] (0xc0008da000) (5) Data frame sent\nI0120 23:50:07.402256 778 log.go:172] (0xc0008ec6e0) Data frame received for 1\nI0120 23:50:07.402383 778 log.go:172] (0xc0008ec6e0) (0xc000a3a140) Stream removed, broadcasting: 7\nI0120 23:50:07.402448 778 log.go:172] (0xc000689b80) (1) Data frame handling\nI0120 23:50:07.402476 778 log.go:172] (0xc000689b80) (1) Data frame sent\nI0120 23:50:07.402529 778 log.go:172] (0xc0008ec6e0) (0xc000689c20) Stream removed, broadcasting: 3\nI0120 23:50:07.402579 778 log.go:172] (0xc0008ec6e0) (0xc000689b80) Stream removed, broadcasting: 1\nI0120 23:50:07.402609 778 log.go:172] (0xc0008ec6e0) (0xc0008da000) Stream removed, broadcasting: 5\nI0120 23:50:07.402637 778 log.go:172] (0xc0008ec6e0) Go away received\nI0120 23:50:07.403684 778 log.go:172] (0xc0008ec6e0) (0xc000689b80) Stream removed, broadcasting: 1\nI0120 23:50:07.403696 778 log.go:172] (0xc0008ec6e0) (0xc000689c20) Stream removed, broadcasting: 3\nI0120 23:50:07.403702 778 log.go:172] (0xc0008ec6e0) (0xc0008da000) Stream removed, broadcasting: 5\nI0120 23:50:07.403723 778 log.go:172] (0xc0008ec6e0) (0xc000a3a140) Stream removed, broadcasting: 7\n" Jan 20 23:50:07.438: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:50:09.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9986" for this suite. 
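[Note] The run above still uses the deprecated job generator that the captured stderr warns about. The same stdin round-trip can be reproduced with the non-deprecated pod form; a minimal sketch, assuming kubectl access to the same cluster (the pod name is illustrative):

    # attach stdin, run once, and auto-delete the pod afterwards
    echo 'abcd1234' | kubectl run e2e-test-rm-busybox -i --rm --restart=Never \
        --image=docker.io/library/busybox:1.29 \
        -- sh -c 'cat && echo stdin closed'
    # expected output: abcd1234 followed by "stdin closed", matching the
    # stdout recorded in the test log above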
• [SLOW TEST:12.149 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1945 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":27,"skipped":492,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:50:09.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 Jan 20 23:50:09.612: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR Jan 20 23:50:10.442: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-20T23:50:10Z generation:1 name:name1 resourceVersion:3286495 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9de1d2ab-7661-425d-adf7-93a0d72cc910] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR Jan 20 23:50:20.454: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-20T23:50:20Z generation:1 name:name2 resourceVersion:3286529 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ecc503df-105c-48e0-99ad-1c195cd615da] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR Jan 20 23:50:30.470: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-20T23:50:10Z generation:2 name:name1 resourceVersion:3286553 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9de1d2ab-7661-425d-adf7-93a0d72cc910] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR Jan 20 23:50:40.482: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-20T23:50:20Z generation:2 name:name2 resourceVersion:3286577 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ecc503df-105c-48e0-99ad-1c195cd615da] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR Jan 20 23:50:50.500: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-20T23:50:10Z generation:2 name:name1 resourceVersion:3286603 
selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:9de1d2ab-7661-425d-adf7-93a0d72cc910] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR Jan 20 23:51:00.520: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-01-20T23:50:20Z generation:2 name:name2 resourceVersion:3286627 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:ecc503df-105c-48e0-99ad-1c195cd615da] num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:51:11.044: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-9803" for this suite. • [SLOW TEST:61.597 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":28,"skipped":544,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:51:11.066: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:52:11.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2919" for this suite. 
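[Note] The probe check above asserts two things: a failing readiness probe keeps the pod out of Ready, and readiness failures (unlike liveness failures) never restart the container. A minimal sketch for reproducing it by hand, assuming kubectl access (the pod name is illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: never-ready
    spec:
      containers:
      - name: busybox
        image: docker.io/library/busybox:1.29
        command: ["sleep", "3600"]
        readinessProbe:
          exec:
            command: ["/bin/false"]   # always fails
          initialDelaySeconds: 5
          periodSeconds: 5
    EOF
    # READY should stay 0/1 and RESTARTS should stay 0 indefinitely
    kubectl get pod never-ready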
• [SLOW TEST:60.179 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":571,"failed":0} SSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:52:11.245: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Performing setup for networking test in namespace pod-network-test-6406 STEP: creating a selector STEP: Creating the service pods in kubernetes Jan 20 23:52:11.313: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods Jan 20 23:52:43.477: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.44.0.2&port=8080&tries=1'] Namespace:pod-network-test-6406 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 23:52:43.477: INFO: >>> kubeConfig: /root/.kube/config I0120 23:52:43.535996 8 log.go:172] (0xc0026ebc30) (0xc002c2c280) Create stream I0120 23:52:43.536122 8 log.go:172] (0xc0026ebc30) (0xc002c2c280) Stream added, broadcasting: 1 I0120 23:52:43.547545 8 log.go:172] (0xc0026ebc30) Reply frame received for 1 I0120 23:52:43.547611 8 log.go:172] (0xc0026ebc30) (0xc002c2c3c0) Create stream I0120 23:52:43.547623 8 log.go:172] (0xc0026ebc30) (0xc002c2c3c0) Stream added, broadcasting: 3 I0120 23:52:43.555328 8 log.go:172] (0xc0026ebc30) Reply frame received for 3 I0120 23:52:43.555412 8 log.go:172] (0xc0026ebc30) (0xc00185b7c0) Create stream I0120 23:52:43.555428 8 log.go:172] (0xc0026ebc30) (0xc00185b7c0) Stream added, broadcasting: 5 I0120 23:52:43.562507 8 log.go:172] (0xc0026ebc30) Reply frame received for 5 I0120 23:52:43.675283 8 log.go:172] (0xc0026ebc30) Data frame received for 3 I0120 23:52:43.675515 8 log.go:172] (0xc002c2c3c0) (3) Data frame handling I0120 23:52:43.675554 8 log.go:172] (0xc002c2c3c0) (3) Data frame sent I0120 23:52:43.761639 8 log.go:172] (0xc0026ebc30) Data frame received for 1 I0120 23:52:43.761776 8 log.go:172] (0xc0026ebc30) (0xc002c2c3c0) Stream removed, broadcasting: 3 I0120 23:52:43.761836 8 log.go:172] (0xc002c2c280) (1) Data frame handling I0120 23:52:43.761883 8 log.go:172] (0xc0026ebc30) (0xc00185b7c0) Stream removed, broadcasting: 5 I0120 23:52:43.761955 8 log.go:172] (0xc002c2c280) (1) Data frame sent I0120 23:52:43.761970 8 log.go:172] (0xc0026ebc30) (0xc002c2c280) Stream removed, 
broadcasting: 1 I0120 23:52:43.762001 8 log.go:172] (0xc0026ebc30) Go away received I0120 23:52:43.762928 8 log.go:172] (0xc0026ebc30) (0xc002c2c280) Stream removed, broadcasting: 1 I0120 23:52:43.763125 8 log.go:172] (0xc0026ebc30) (0xc002c2c3c0) Stream removed, broadcasting: 3 I0120 23:52:43.763181 8 log.go:172] (0xc0026ebc30) (0xc00185b7c0) Stream removed, broadcasting: 5 Jan 20 23:52:43.763: INFO: Waiting for responses: map[] Jan 20 23:52:43.771: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-6406 PodName:test-container-pod ContainerName:webserver Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} Jan 20 23:52:43.771: INFO: >>> kubeConfig: /root/.kube/config I0120 23:52:43.832104 8 log.go:172] (0xc002754dc0) (0xc0017d8640) Create stream I0120 23:52:43.832348 8 log.go:172] (0xc002754dc0) (0xc0017d8640) Stream added, broadcasting: 1 I0120 23:52:43.836051 8 log.go:172] (0xc002754dc0) Reply frame received for 1 I0120 23:52:43.836122 8 log.go:172] (0xc002754dc0) (0xc00185b860) Create stream I0120 23:52:43.836138 8 log.go:172] (0xc002754dc0) (0xc00185b860) Stream added, broadcasting: 3 I0120 23:52:43.839319 8 log.go:172] (0xc002754dc0) Reply frame received for 3 I0120 23:52:43.839535 8 log.go:172] (0xc002754dc0) (0xc001733040) Create stream I0120 23:52:43.839574 8 log.go:172] (0xc002754dc0) (0xc001733040) Stream added, broadcasting: 5 I0120 23:52:43.841879 8 log.go:172] (0xc002754dc0) Reply frame received for 5 I0120 23:52:43.966575 8 log.go:172] (0xc002754dc0) Data frame received for 3 I0120 23:52:43.966802 8 log.go:172] (0xc00185b860) (3) Data frame handling I0120 23:52:43.966879 8 log.go:172] (0xc00185b860) (3) Data frame sent I0120 23:52:44.048732 8 log.go:172] (0xc002754dc0) Data frame received for 1 I0120 23:52:44.048917 8 log.go:172] (0xc002754dc0) (0xc001733040) Stream removed, broadcasting: 5 I0120 23:52:44.048968 8 log.go:172] (0xc0017d8640) (1) Data frame handling I0120 23:52:44.049005 8 log.go:172] (0xc0017d8640) (1) Data frame sent I0120 23:52:44.049113 8 log.go:172] (0xc002754dc0) (0xc00185b860) Stream removed, broadcasting: 3 I0120 23:52:44.049162 8 log.go:172] (0xc002754dc0) (0xc0017d8640) Stream removed, broadcasting: 1 I0120 23:52:44.049216 8 log.go:172] (0xc002754dc0) Go away received I0120 23:52:44.050045 8 log.go:172] (0xc002754dc0) (0xc0017d8640) Stream removed, broadcasting: 1 I0120 23:52:44.050082 8 log.go:172] (0xc002754dc0) (0xc00185b860) Stream removed, broadcasting: 3 I0120 23:52:44.050111 8 log.go:172] (0xc002754dc0) (0xc001733040) Stream removed, broadcasting: 5 Jan 20 23:52:44.050: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:52:44.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6406" for this suite. 
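[Note] Both dial checks above drive the same mechanism: the framework execs curl inside test-container-pod and asks its webserver's /dial endpoint to contact the target pod and report which hostname answered. While the namespace still exists, the check can be replayed from a shell; a sketch assuming kubectl access (IPs and names taken from the log above):

    kubectl exec -n pod-network-test-6406 test-container-pod -- \
        curl -g -q -s 'http://10.44.0.1:8080/dial?request=hostname&protocol=http&host=10.32.0.4&port=8080&tries=1'
    # the webserver is expected to answer with a small JSON document listing
    # the hostname(s) that responded, e.g. {"responses":["netserver-1"]}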
• [SLOW TEST:32.818 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":574,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:52:44.065: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:52:57.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9918" for this suite. • [SLOW TEST:13.852 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":31,"skipped":629,"failed":0} [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:52:57.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87 Jan 20 23:52:58.121: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready Jan 20 23:52:58.137: INFO: Waiting for terminating namespaces to be deleted... Jan 20 23:52:58.142: INFO: Logging pods the kubelet thinks is on node jerma-node before test Jan 20 23:52:58.172: INFO: test-container-pod from pod-network-test-6406 started at 2020-01-20 23:52:35 +0000 UTC (1 container statuses recorded) Jan 20 23:52:58.173: INFO: Container webserver ready: false, restart count 0 Jan 20 23:52:58.173: INFO: netserver-0 from pod-network-test-6406 started at 2020-01-20 23:52:11 +0000 UTC (1 container statuses recorded) Jan 20 23:52:58.173: INFO: Container webserver ready: false, restart count 0 Jan 20 23:52:58.173: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded) Jan 20 23:52:58.173: INFO: Container kube-proxy ready: true, restart count 0 Jan 20 23:52:58.173: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded) Jan 20 23:52:58.173: INFO: Container weave ready: true, restart count 1 Jan 20 23:52:58.173: INFO: Container weave-npc ready: true, restart count 0 Jan 20 23:52:58.173: INFO: Logging pods the kubelet thinks is on node jerma-server-mvvl6gufaqub before test Jan 20 23:52:58.196: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded) Jan 20 23:52:58.196: INFO: Container kube-proxy ready: true, restart count 0 Jan 20 23:52:58.196: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded) Jan 20 23:52:58.197: INFO: Container weave ready: true, restart count 0 Jan 20 23:52:58.197: INFO: Container weave-npc ready: true, restart count 0 Jan 20 23:52:58.197: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 20 23:52:58.197: INFO: Container kube-controller-manager ready: true, restart count 3 Jan 20 23:52:58.197: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 20 23:52:58.197: INFO: Container kube-scheduler ready: true, restart count 3 Jan 20 23:52:58.197: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded) Jan 20 23:52:58.197: INFO: Container etcd ready: true, restart count 1 Jan 20 23:52:58.197: INFO: netserver-1 from pod-network-test-6406 started at 2020-01-20 23:52:11 +0000 UTC (1 container statuses recorded) Jan 20 23:52:58.197: INFO: 
Container webserver ready: false, restart count 0 Jan 20 23:52:58.197: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded) Jan 20 23:52:58.197: INFO: Container kube-apiserver ready: true, restart count 1 Jan 20 23:52:58.197: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 20 23:52:58.197: INFO: Container coredns ready: true, restart count 0 Jan 20 23:52:58.197: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded) Jan 20 23:52:58.197: INFO: Container coredns ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-e653fd87-552d-47a3-81e6-64ed95baaf1e 95 STEP: Trying to create a pod(pod4) with hostport 54322 and hostIP 0.0.0.0(empty string here) and expect scheduled STEP: Trying to create another pod(pod5) with hostport 54322 but hostIP 127.0.0.1 on the node which pod4 resides and expect not scheduled STEP: removing the label kubernetes.io/e2e-e653fd87-552d-47a3-81e6-64ed95baaf1e off the node jerma-node STEP: verifying the node doesn't have the label kubernetes.io/e2e-e653fd87-552d-47a3-81e6-64ed95baaf1e [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:58:14.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2561" for this suite. 
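[Note] The scheduling conflict exercised above can be reproduced with two bare pods; a minimal sketch, assuming kubectl access and, on a multi-node cluster, a nodeSelector to force both pods onto the same node (names mirror the test's pod4/pod5):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod4
    spec:
      nodeSelector:
        kubernetes.io/hostname: jerma-node   # pin to one node
      containers:
      - name: c
        image: docker.io/library/busybox:1.29
        command: ["sleep", "3600"]
        ports:
        - containerPort: 8080
          hostPort: 54322
          hostIP: 0.0.0.0
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod5
    spec:
      nodeSelector:
        kubernetes.io/hostname: jerma-node
      containers:
      - name: c
        image: docker.io/library/busybox:1.29
        command: ["sleep", "3600"]
        ports:
        - containerPort: 8080
          hostPort: 54322
          hostIP: 127.0.0.1
    EOF
    # pod4 should schedule and run; pod5 should stay Pending, because the
    # scheduler treats hostIP 0.0.0.0 as claiming the port on every interface
    kubectl get pods pod4 pod5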
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78 • [SLOW TEST:316.656 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":32,"skipped":629,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:58:14.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward API volume plugin Jan 20 23:58:14.717: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6500a205-00ac-48e9-af45-9312464b6410" in namespace "downward-api-3313" to be "success or failure" Jan 20 23:58:14.727: INFO: Pod "downwardapi-volume-6500a205-00ac-48e9-af45-9312464b6410": Phase="Pending", Reason="", readiness=false. Elapsed: 9.654891ms Jan 20 23:58:16.734: INFO: Pod "downwardapi-volume-6500a205-00ac-48e9-af45-9312464b6410": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016784162s Jan 20 23:58:18.739: INFO: Pod "downwardapi-volume-6500a205-00ac-48e9-af45-9312464b6410": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022533242s Jan 20 23:58:20.773: INFO: Pod "downwardapi-volume-6500a205-00ac-48e9-af45-9312464b6410": Phase="Pending", Reason="", readiness=false. Elapsed: 6.055834743s Jan 20 23:58:22.781: INFO: Pod "downwardapi-volume-6500a205-00ac-48e9-af45-9312464b6410": Phase="Pending", Reason="", readiness=false. Elapsed: 8.063959041s Jan 20 23:58:24.787: INFO: Pod "downwardapi-volume-6500a205-00ac-48e9-af45-9312464b6410": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.069734674s STEP: Saw pod success Jan 20 23:58:24.787: INFO: Pod "downwardapi-volume-6500a205-00ac-48e9-af45-9312464b6410" satisfied condition "success or failure" Jan 20 23:58:24.790: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6500a205-00ac-48e9-af45-9312464b6410 container client-container: STEP: delete the pod Jan 20 23:58:25.398: INFO: Waiting for pod downwardapi-volume-6500a205-00ac-48e9-af45-9312464b6410 to disappear Jan 20 23:58:25.406: INFO: Pod downwardapi-volume-6500a205-00ac-48e9-af45-9312464b6410 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:58:25.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3313" for this suite. • [SLOW TEST:10.900 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":33,"skipped":642,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:58:25.474: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-e3a87f57-f3e3-43f8-a6c3-0bcfaeaa577a STEP: Creating a pod to test consume secrets Jan 20 23:58:25.723: INFO: Waiting up to 5m0s for pod "pod-secrets-5b6effc8-62ed-4e50-8824-7e9026c4cf8a" in namespace "secrets-5441" to be "success or failure" Jan 20 23:58:25.766: INFO: Pod "pod-secrets-5b6effc8-62ed-4e50-8824-7e9026c4cf8a": Phase="Pending", Reason="", readiness=false. Elapsed: 42.447043ms Jan 20 23:58:27.774: INFO: Pod "pod-secrets-5b6effc8-62ed-4e50-8824-7e9026c4cf8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049992418s Jan 20 23:58:29.781: INFO: Pod "pod-secrets-5b6effc8-62ed-4e50-8824-7e9026c4cf8a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057577731s Jan 20 23:58:31.793: INFO: Pod "pod-secrets-5b6effc8-62ed-4e50-8824-7e9026c4cf8a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.069093417s Jan 20 23:58:33.802: INFO: Pod "pod-secrets-5b6effc8-62ed-4e50-8824-7e9026c4cf8a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.078403029s Jan 20 23:58:35.808: INFO: Pod "pod-secrets-5b6effc8-62ed-4e50-8824-7e9026c4cf8a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 10.084935666s STEP: Saw pod success Jan 20 23:58:35.809: INFO: Pod "pod-secrets-5b6effc8-62ed-4e50-8824-7e9026c4cf8a" satisfied condition "success or failure" Jan 20 23:58:35.812: INFO: Trying to get logs from node jerma-node pod pod-secrets-5b6effc8-62ed-4e50-8824-7e9026c4cf8a container secret-volume-test: STEP: delete the pod Jan 20 23:58:35.869: INFO: Waiting for pod pod-secrets-5b6effc8-62ed-4e50-8824-7e9026c4cf8a to disappear Jan 20 23:58:35.873: INFO: Pod pod-secrets-5b6effc8-62ed-4e50-8824-7e9026c4cf8a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:58:35.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5441" for this suite. • [SLOW TEST:10.415 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":645,"failed":0} SSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:58:35.890: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating configMap with name cm-test-opt-del-35640441-87d3-4488-a118-fd6de4dce12e STEP: Creating configMap with name cm-test-opt-upd-54a1978e-5d38-443c-a041-f6fac5aedc1a STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-35640441-87d3-4488-a118-fd6de4dce12e STEP: Updating configmap cm-test-opt-upd-54a1978e-5d38-443c-a041-f6fac5aedc1a STEP: Creating configMap with name cm-test-opt-create-1b511ce2-4120-49e5-aa93-60f6a844ba28 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 20 23:58:50.196: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1109" for this suite. 
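[Note] The projected-volume behavior exercised above relies on configMap sources marked optional, so a missing map never blocks the pod and later creates, updates, and deletes are reflected in the mounted files. A minimal sketch, assuming kubectl access (names illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-demo
    spec:
      containers:
      - name: c
        image: docker.io/library/busybox:1.29
        command: ["sleep", "3600"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/cfg
      volumes:
      - name: cfg
        projected:
          sources:
          - configMap:
              name: cm-opt        # does not need to exist yet
              optional: true
    EOF
    kubectl create configmap cm-opt --from-literal=key=value
    # after the kubelet's next sync the key shows up inside the running pod
    kubectl exec projected-demo -- cat /etc/cfg/key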
• [SLOW TEST:14.322 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":648,"failed":0} S ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 20 23:58:50.213: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8597 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating stateful set ss in namespace statefulset-8597 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-8597 Jan 20 23:58:50.374: INFO: Found 0 stateful pods, waiting for 1 Jan 20 23:59:00.411: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false Jan 20 23:59:10.382: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod Jan 20 23:59:10.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 20 23:59:13.007: INFO: stderr: "I0120 23:59:12.682395 801 log.go:172] (0xc0003b54a0) (0xc0008320a0) Create stream\nI0120 23:59:12.682992 801 log.go:172] (0xc0003b54a0) (0xc0008320a0) Stream added, broadcasting: 1\nI0120 23:59:12.696596 801 log.go:172] (0xc0003b54a0) Reply frame received for 1\nI0120 23:59:12.697024 801 log.go:172] (0xc0003b54a0) (0xc0004ec000) Create stream\nI0120 23:59:12.697087 801 log.go:172] (0xc0003b54a0) (0xc0004ec000) Stream added, broadcasting: 3\nI0120 23:59:12.701478 801 log.go:172] (0xc0003b54a0) Reply frame received for 3\nI0120 23:59:12.702096 801 log.go:172] (0xc0003b54a0) (0xc00068de00) Create stream\nI0120 23:59:12.702233 801 log.go:172] (0xc0003b54a0) (0xc00068de00) Stream added, broadcasting: 5\nI0120 23:59:12.711395 801 log.go:172] (0xc0003b54a0) Reply frame received for 5\nI0120 23:59:12.842965 801 log.go:172] (0xc0003b54a0) Data frame received for 5\nI0120 23:59:12.843113 801 log.go:172] (0xc00068de00) (5) Data frame handling\nI0120 23:59:12.843171 801 log.go:172] (0xc00068de00) (5) Data frame 
sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0120 23:59:12.905509 801 log.go:172] (0xc0003b54a0) Data frame received for 3\nI0120 23:59:12.905580 801 log.go:172] (0xc0004ec000) (3) Data frame handling\nI0120 23:59:12.905598 801 log.go:172] (0xc0004ec000) (3) Data frame sent\nI0120 23:59:12.996572 801 log.go:172] (0xc0003b54a0) Data frame received for 1\nI0120 23:59:12.996748 801 log.go:172] (0xc0003b54a0) (0xc0004ec000) Stream removed, broadcasting: 3\nI0120 23:59:12.996856 801 log.go:172] (0xc0008320a0) (1) Data frame handling\nI0120 23:59:12.996886 801 log.go:172] (0xc0008320a0) (1) Data frame sent\nI0120 23:59:12.996894 801 log.go:172] (0xc0003b54a0) (0xc0008320a0) Stream removed, broadcasting: 1\nI0120 23:59:12.997268 801 log.go:172] (0xc0003b54a0) (0xc00068de00) Stream removed, broadcasting: 5\nI0120 23:59:12.997716 801 log.go:172] (0xc0003b54a0) Go away received\nI0120 23:59:12.998435 801 log.go:172] (0xc0003b54a0) (0xc0008320a0) Stream removed, broadcasting: 1\nI0120 23:59:12.998448 801 log.go:172] (0xc0003b54a0) (0xc0004ec000) Stream removed, broadcasting: 3\nI0120 23:59:12.998457 801 log.go:172] (0xc0003b54a0) (0xc00068de00) Stream removed, broadcasting: 5\n" Jan 20 23:59:13.007: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 20 23:59:13.007: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 20 23:59:13.011: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 20 23:59:13.011: INFO: Waiting for statefulset status.replicas updated to 0 Jan 20 23:59:13.027: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 23:59:13.027: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:12 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC }] Jan 20 23:59:13.027: INFO: Jan 20 23:59:13.027: INFO: StatefulSet ss has not reached scale 3, at 1 Jan 20 23:59:14.480: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993048324s Jan 20 23:59:15.715: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.539706973s Jan 20 23:59:16.723: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.305191395s Jan 20 23:59:17.731: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.296792658s Jan 20 23:59:19.608: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.288745383s Jan 20 23:59:20.927: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.411773294s Jan 20 23:59:22.071: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.092974099s STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-8597 Jan 20 23:59:23.078: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 23:59:23.504: INFO: stderr: "I0120 23:59:23.349213 832 log.go:172] (0xc0000f4370) (0xc00050f4a0) Create stream\nI0120 23:59:23.349495 832 log.go:172] (0xc0000f4370) (0xc00050f4a0) Stream added, broadcasting: 1\nI0120 
23:59:23.353968 832 log.go:172] (0xc0000f4370) Reply frame received for 1\nI0120 23:59:23.354001 832 log.go:172] (0xc0000f4370) (0xc0006a5b80) Create stream\nI0120 23:59:23.354011 832 log.go:172] (0xc0000f4370) (0xc0006a5b80) Stream added, broadcasting: 3\nI0120 23:59:23.355214 832 log.go:172] (0xc0000f4370) Reply frame received for 3\nI0120 23:59:23.355233 832 log.go:172] (0xc0000f4370) (0xc0006a5d60) Create stream\nI0120 23:59:23.355239 832 log.go:172] (0xc0000f4370) (0xc0006a5d60) Stream added, broadcasting: 5\nI0120 23:59:23.356967 832 log.go:172] (0xc0000f4370) Reply frame received for 5\nI0120 23:59:23.415235 832 log.go:172] (0xc0000f4370) Data frame received for 5\nI0120 23:59:23.415330 832 log.go:172] (0xc0006a5d60) (5) Data frame handling\nI0120 23:59:23.415370 832 log.go:172] (0xc0000f4370) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0120 23:59:23.415393 832 log.go:172] (0xc0006a5b80) (3) Data frame handling\nI0120 23:59:23.415447 832 log.go:172] (0xc0006a5b80) (3) Data frame sent\nI0120 23:59:23.415511 832 log.go:172] (0xc0006a5d60) (5) Data frame sent\nI0120 23:59:23.494696 832 log.go:172] (0xc0000f4370) (0xc0006a5d60) Stream removed, broadcasting: 5\nI0120 23:59:23.494764 832 log.go:172] (0xc0000f4370) Data frame received for 1\nI0120 23:59:23.494782 832 log.go:172] (0xc0000f4370) (0xc0006a5b80) Stream removed, broadcasting: 3\nI0120 23:59:23.494831 832 log.go:172] (0xc00050f4a0) (1) Data frame handling\nI0120 23:59:23.494855 832 log.go:172] (0xc00050f4a0) (1) Data frame sent\nI0120 23:59:23.494867 832 log.go:172] (0xc0000f4370) (0xc00050f4a0) Stream removed, broadcasting: 1\nI0120 23:59:23.494876 832 log.go:172] (0xc0000f4370) Go away received\nI0120 23:59:23.496330 832 log.go:172] (0xc0000f4370) (0xc00050f4a0) Stream removed, broadcasting: 1\nI0120 23:59:23.496357 832 log.go:172] (0xc0000f4370) (0xc0006a5b80) Stream removed, broadcasting: 3\nI0120 23:59:23.496361 832 log.go:172] (0xc0000f4370) (0xc0006a5d60) Stream removed, broadcasting: 5\n" Jan 20 23:59:23.504: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 20 23:59:23.504: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 20 23:59:23.504: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 23:59:23.996: INFO: stderr: "I0120 23:59:23.709864 855 log.go:172] (0xc000a5cfd0) (0xc000ae4500) Create stream\nI0120 23:59:23.710125 855 log.go:172] (0xc000a5cfd0) (0xc000ae4500) Stream added, broadcasting: 1\nI0120 23:59:23.713724 855 log.go:172] (0xc000a5cfd0) Reply frame received for 1\nI0120 23:59:23.713773 855 log.go:172] (0xc000a5cfd0) (0xc000723d60) Create stream\nI0120 23:59:23.713796 855 log.go:172] (0xc000a5cfd0) (0xc000723d60) Stream added, broadcasting: 3\nI0120 23:59:23.714693 855 log.go:172] (0xc000a5cfd0) Reply frame received for 3\nI0120 23:59:23.714718 855 log.go:172] (0xc000a5cfd0) (0xc000c4a5a0) Create stream\nI0120 23:59:23.714727 855 log.go:172] (0xc000a5cfd0) (0xc000c4a5a0) Stream added, broadcasting: 5\nI0120 23:59:23.715801 855 log.go:172] (0xc000a5cfd0) Reply frame received for 5\nI0120 23:59:23.812712 855 log.go:172] (0xc000a5cfd0) Data frame received for 3\nI0120 23:59:23.812833 855 log.go:172] (0xc000723d60) (3) Data frame handling\nI0120 23:59:23.812871 855 log.go:172] (0xc000723d60) (3) Data frame 
sent\nI0120 23:59:23.814764 855 log.go:172] (0xc000a5cfd0) Data frame received for 5\nI0120 23:59:23.814833 855 log.go:172] (0xc000c4a5a0) (5) Data frame handling\nI0120 23:59:23.814859 855 log.go:172] (0xc000c4a5a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\nI0120 23:59:23.815718 855 log.go:172] (0xc000a5cfd0) Data frame received for 5\nI0120 23:59:23.815733 855 log.go:172] (0xc000c4a5a0) (5) Data frame handling\nI0120 23:59:23.815741 855 log.go:172] (0xc000c4a5a0) (5) Data frame sent\n+ true\nI0120 23:59:23.975949 855 log.go:172] (0xc000a5cfd0) (0xc000723d60) Stream removed, broadcasting: 3\nI0120 23:59:23.976452 855 log.go:172] (0xc000a5cfd0) Data frame received for 1\nI0120 23:59:23.976480 855 log.go:172] (0xc000ae4500) (1) Data frame handling\nI0120 23:59:23.976534 855 log.go:172] (0xc000ae4500) (1) Data frame sent\nI0120 23:59:23.976566 855 log.go:172] (0xc000a5cfd0) (0xc000ae4500) Stream removed, broadcasting: 1\nI0120 23:59:23.976848 855 log.go:172] (0xc000a5cfd0) (0xc000c4a5a0) Stream removed, broadcasting: 5\nI0120 23:59:23.977164 855 log.go:172] (0xc000a5cfd0) Go away received\nI0120 23:59:23.978800 855 log.go:172] (0xc000a5cfd0) (0xc000ae4500) Stream removed, broadcasting: 1\nI0120 23:59:23.978825 855 log.go:172] (0xc000a5cfd0) (0xc000723d60) Stream removed, broadcasting: 3\nI0120 23:59:23.978836 855 log.go:172] (0xc000a5cfd0) (0xc000c4a5a0) Stream removed, broadcasting: 5\n" Jan 20 23:59:23.997: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 20 23:59:23.997: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 20 23:59:23.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 23:59:24.264: INFO: stderr: "I0120 23:59:24.125101 875 log.go:172] (0xc000a35290) (0xc000ae4460) Create stream\nI0120 23:59:24.125233 875 log.go:172] (0xc000a35290) (0xc000ae4460) Stream added, broadcasting: 1\nI0120 23:59:24.131244 875 log.go:172] (0xc000a35290) Reply frame received for 1\nI0120 23:59:24.131345 875 log.go:172] (0xc000a35290) (0xc0007edc20) Create stream\nI0120 23:59:24.131376 875 log.go:172] (0xc000a35290) (0xc0007edc20) Stream added, broadcasting: 3\nI0120 23:59:24.132557 875 log.go:172] (0xc000a35290) Reply frame received for 3\nI0120 23:59:24.132588 875 log.go:172] (0xc000a35290) (0xc000670820) Create stream\nI0120 23:59:24.132593 875 log.go:172] (0xc000a35290) (0xc000670820) Stream added, broadcasting: 5\nI0120 23:59:24.133806 875 log.go:172] (0xc000a35290) Reply frame received for 5\nI0120 23:59:24.196897 875 log.go:172] (0xc000a35290) Data frame received for 5\nI0120 23:59:24.196959 875 log.go:172] (0xc000670820) (5) Data frame handling\nI0120 23:59:24.196979 875 log.go:172] (0xc000670820) (5) Data frame sent\nI0120 23:59:24.196987 875 log.go:172] (0xc000a35290) Data frame received for 3\nI0120 23:59:24.196993 875 log.go:172] (0xc0007edc20) (3) Data frame handling\nI0120 23:59:24.196999 875 log.go:172] (0xc0007edc20) (3) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0120 23:59:24.257819 875 log.go:172] (0xc000a35290) Data frame received for 1\nI0120 23:59:24.257982 875 log.go:172] (0xc000ae4460) (1) Data frame handling\nI0120 
23:59:24.258053 875 log.go:172] (0xc000ae4460) (1) Data frame sent\nI0120 23:59:24.258432 875 log.go:172] (0xc000a35290) (0xc000ae4460) Stream removed, broadcasting: 1\nI0120 23:59:24.258728 875 log.go:172] (0xc000a35290) (0xc0007edc20) Stream removed, broadcasting: 3\nI0120 23:59:24.258917 875 log.go:172] (0xc000a35290) (0xc000670820) Stream removed, broadcasting: 5\nI0120 23:59:24.258983 875 log.go:172] (0xc000a35290) Go away received\nI0120 23:59:24.259382 875 log.go:172] (0xc000a35290) (0xc000ae4460) Stream removed, broadcasting: 1\nI0120 23:59:24.259397 875 log.go:172] (0xc000a35290) (0xc0007edc20) Stream removed, broadcasting: 3\nI0120 23:59:24.259405 875 log.go:172] (0xc000a35290) (0xc000670820) Stream removed, broadcasting: 5\n" Jan 20 23:59:24.264: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" Jan 20 23:59:24.264: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' Jan 20 23:59:24.270: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 20 23:59:24.270: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 20 23:59:24.270: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=false Jan 20 23:59:34.277: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true Jan 20 23:59:34.277: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true Jan 20 23:59:34.277: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod Jan 20 23:59:34.283: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 20 23:59:34.763: INFO: stderr: "I0120 23:59:34.586954 893 log.go:172] (0xc0008cea50) (0xc0006c6140) Create stream\nI0120 23:59:34.587498 893 log.go:172] (0xc0008cea50) (0xc0006c6140) Stream added, broadcasting: 1\nI0120 23:59:34.593797 893 log.go:172] (0xc0008cea50) Reply frame received for 1\nI0120 23:59:34.593902 893 log.go:172] (0xc0008cea50) (0xc0006e9ae0) Create stream\nI0120 23:59:34.593920 893 log.go:172] (0xc0008cea50) (0xc0006e9ae0) Stream added, broadcasting: 3\nI0120 23:59:34.596287 893 log.go:172] (0xc0008cea50) Reply frame received for 3\nI0120 23:59:34.596324 893 log.go:172] (0xc0008cea50) (0xc0006c61e0) Create stream\nI0120 23:59:34.596342 893 log.go:172] (0xc0008cea50) (0xc0006c61e0) Stream added, broadcasting: 5\nI0120 23:59:34.600584 893 log.go:172] (0xc0008cea50) Reply frame received for 5\nI0120 23:59:34.674573 893 log.go:172] (0xc0008cea50) Data frame received for 5\nI0120 23:59:34.674682 893 log.go:172] (0xc0006c61e0) (5) Data frame handling\nI0120 23:59:34.674712 893 log.go:172] (0xc0006c61e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0120 23:59:34.674749 893 log.go:172] (0xc0008cea50) Data frame received for 3\nI0120 23:59:34.674770 893 log.go:172] (0xc0006e9ae0) (3) Data frame handling\nI0120 23:59:34.674804 893 log.go:172] (0xc0006e9ae0) (3) Data frame sent\nI0120 23:59:34.750582 893 log.go:172] (0xc0008cea50) Data frame received for 1\nI0120 23:59:34.750766 893 log.go:172] (0xc0008cea50) (0xc0006e9ae0) Stream removed, broadcasting: 3\nI0120 23:59:34.750870 893 log.go:172] (0xc0006c6140) (1) Data frame handling\nI0120 
23:59:34.750920 893 log.go:172] (0xc0006c6140) (1) Data frame sent\nI0120 23:59:34.751009 893 log.go:172] (0xc0008cea50) (0xc0006c61e0) Stream removed, broadcasting: 5\nI0120 23:59:34.751093 893 log.go:172] (0xc0008cea50) (0xc0006c6140) Stream removed, broadcasting: 1\nI0120 23:59:34.751184 893 log.go:172] (0xc0008cea50) Go away received\nI0120 23:59:34.752492 893 log.go:172] (0xc0008cea50) (0xc0006c6140) Stream removed, broadcasting: 1\nI0120 23:59:34.752579 893 log.go:172] (0xc0008cea50) (0xc0006e9ae0) Stream removed, broadcasting: 3\nI0120 23:59:34.752602 893 log.go:172] (0xc0008cea50) (0xc0006c61e0) Stream removed, broadcasting: 5\n" Jan 20 23:59:34.763: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 20 23:59:34.763: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 20 23:59:34.764: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 20 23:59:35.339: INFO: stderr: "I0120 23:59:35.006078 914 log.go:172] (0xc00020d3f0) (0xc000b2a000) Create stream\nI0120 23:59:35.006280 914 log.go:172] (0xc00020d3f0) (0xc000b2a000) Stream added, broadcasting: 1\nI0120 23:59:35.009321 914 log.go:172] (0xc00020d3f0) Reply frame received for 1\nI0120 23:59:35.009354 914 log.go:172] (0xc00020d3f0) (0xc0006bbb80) Create stream\nI0120 23:59:35.009366 914 log.go:172] (0xc00020d3f0) (0xc0006bbb80) Stream added, broadcasting: 3\nI0120 23:59:35.010467 914 log.go:172] (0xc00020d3f0) Reply frame received for 3\nI0120 23:59:35.010484 914 log.go:172] (0xc00020d3f0) (0xc0006bbd60) Create stream\nI0120 23:59:35.010488 914 log.go:172] (0xc00020d3f0) (0xc0006bbd60) Stream added, broadcasting: 5\nI0120 23:59:35.011760 914 log.go:172] (0xc00020d3f0) Reply frame received for 5\nI0120 23:59:35.118678 914 log.go:172] (0xc00020d3f0) Data frame received for 5\nI0120 23:59:35.118798 914 log.go:172] (0xc0006bbd60) (5) Data frame handling\nI0120 23:59:35.118815 914 log.go:172] (0xc0006bbd60) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0120 23:59:35.189527 914 log.go:172] (0xc00020d3f0) Data frame received for 3\nI0120 23:59:35.189794 914 log.go:172] (0xc0006bbb80) (3) Data frame handling\nI0120 23:59:35.189839 914 log.go:172] (0xc0006bbb80) (3) Data frame sent\nI0120 23:59:35.320930 914 log.go:172] (0xc00020d3f0) Data frame received for 1\nI0120 23:59:35.321494 914 log.go:172] (0xc00020d3f0) (0xc0006bbb80) Stream removed, broadcasting: 3\nI0120 23:59:35.321565 914 log.go:172] (0xc000b2a000) (1) Data frame handling\nI0120 23:59:35.321633 914 log.go:172] (0xc00020d3f0) (0xc0006bbd60) Stream removed, broadcasting: 5\nI0120 23:59:35.321733 914 log.go:172] (0xc000b2a000) (1) Data frame sent\nI0120 23:59:35.321759 914 log.go:172] (0xc00020d3f0) (0xc000b2a000) Stream removed, broadcasting: 1\nI0120 23:59:35.321786 914 log.go:172] (0xc00020d3f0) Go away received\nI0120 23:59:35.323411 914 log.go:172] (0xc00020d3f0) (0xc000b2a000) Stream removed, broadcasting: 1\nI0120 23:59:35.323441 914 log.go:172] (0xc00020d3f0) (0xc0006bbb80) Stream removed, broadcasting: 3\nI0120 23:59:35.323460 914 log.go:172] (0xc00020d3f0) (0xc0006bbd60) Stream removed, broadcasting: 5\n" Jan 20 23:59:35.339: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 20 23:59:35.339: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || 
true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 20 23:59:35.340: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' Jan 20 23:59:35.898: INFO: stderr: "I0120 23:59:35.576789 933 log.go:172] (0xc000118420) (0xc0005a6000) Create stream\nI0120 23:59:35.577295 933 log.go:172] (0xc000118420) (0xc0005a6000) Stream added, broadcasting: 1\nI0120 23:59:35.585261 933 log.go:172] (0xc000118420) Reply frame received for 1\nI0120 23:59:35.585416 933 log.go:172] (0xc000118420) (0xc000530000) Create stream\nI0120 23:59:35.585465 933 log.go:172] (0xc000118420) (0xc000530000) Stream added, broadcasting: 3\nI0120 23:59:35.589348 933 log.go:172] (0xc000118420) Reply frame received for 3\nI0120 23:59:35.589430 933 log.go:172] (0xc000118420) (0xc0005a6140) Create stream\nI0120 23:59:35.589442 933 log.go:172] (0xc000118420) (0xc0005a6140) Stream added, broadcasting: 5\nI0120 23:59:35.591630 933 log.go:172] (0xc000118420) Reply frame received for 5\nI0120 23:59:35.676810 933 log.go:172] (0xc000118420) Data frame received for 5\nI0120 23:59:35.677033 933 log.go:172] (0xc0005a6140) (5) Data frame handling\nI0120 23:59:35.677078 933 log.go:172] (0xc0005a6140) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0120 23:59:35.732524 933 log.go:172] (0xc000118420) Data frame received for 3\nI0120 23:59:35.732658 933 log.go:172] (0xc000530000) (3) Data frame handling\nI0120 23:59:35.732688 933 log.go:172] (0xc000530000) (3) Data frame sent\nI0120 23:59:35.874850 933 log.go:172] (0xc000118420) (0xc000530000) Stream removed, broadcasting: 3\nI0120 23:59:35.875870 933 log.go:172] (0xc000118420) Data frame received for 1\nI0120 23:59:35.876044 933 log.go:172] (0xc0005a6000) (1) Data frame handling\nI0120 23:59:35.876121 933 log.go:172] (0xc0005a6000) (1) Data frame sent\nI0120 23:59:35.876158 933 log.go:172] (0xc000118420) (0xc0005a6000) Stream removed, broadcasting: 1\nI0120 23:59:35.876335 933 log.go:172] (0xc000118420) (0xc0005a6140) Stream removed, broadcasting: 5\nI0120 23:59:35.876828 933 log.go:172] (0xc000118420) Go away received\nI0120 23:59:35.878771 933 log.go:172] (0xc000118420) (0xc0005a6000) Stream removed, broadcasting: 1\nI0120 23:59:35.878879 933 log.go:172] (0xc000118420) (0xc000530000) Stream removed, broadcasting: 3\nI0120 23:59:35.878894 933 log.go:172] (0xc000118420) (0xc0005a6140) Stream removed, broadcasting: 5\n" Jan 20 23:59:35.899: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" Jan 20 23:59:35.899: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' Jan 20 23:59:35.899: INFO: Waiting for statefulset status.replicas updated to 0 Jan 20 23:59:35.910: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1 Jan 20 23:59:45.922: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false Jan 20 23:59:45.923: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false Jan 20 23:59:45.923: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false Jan 20 23:59:45.955: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 23:59:45.956: INFO: ss-0 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 
23:59:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC }] Jan 20 23:59:45.956: INFO: ss-1 jerma-server-mvvl6gufaqub Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC }] Jan 20 23:59:45.956: INFO: ss-2 jerma-node Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC }] Jan 20 23:59:45.956: INFO: Jan 20 23:59:45.956: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 20 23:59:47.818: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 23:59:47.818: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC }] Jan 20 23:59:47.819: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC }] Jan 20 23:59:47.819: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC }] Jan 20 23:59:47.819: INFO: Jan 20 23:59:47.819: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 20 23:59:48.829: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 23:59:48.829: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:34 +0000 UTC ContainersNotReady containers with 
unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC }] Jan 20 23:59:48.829: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC }] Jan 20 23:59:48.830: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC }] Jan 20 23:59:48.830: INFO: Jan 20 23:59:48.830: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 20 23:59:50.286: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 23:59:50.287: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC }] Jan 20 23:59:50.287: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC }] Jan 20 23:59:50.287: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC }] Jan 20 23:59:50.287: INFO: Jan 20 23:59:50.287: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 20 23:59:51.298: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 23:59:51.298: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC }] Jan 20 23:59:51.298: INFO: ss-1 jerma-server-mvvl6gufaqub Running 30s [{Initialized 
True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC }] Jan 20 23:59:51.298: INFO: ss-2 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC }] Jan 20 23:59:51.298: INFO: Jan 20 23:59:51.298: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 20 23:59:52.305: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 23:59:52.306: INFO: ss-0 jerma-node Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC }] Jan 20 23:59:52.306: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC }] Jan 20 23:59:52.306: INFO: ss-2 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:36 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC }] Jan 20 23:59:52.306: INFO: Jan 20 23:59:52.306: INFO: StatefulSet ss has not reached scale 0, at 3 Jan 20 23:59:53.313: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 23:59:53.313: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC }] Jan 20 23:59:53.313: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: 
[webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC }] Jan 20 23:59:53.313: INFO: Jan 20 23:59:53.313: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 20 23:59:54.321: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 23:59:54.321: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC }] Jan 20 23:59:54.321: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC }] Jan 20 23:59:54.321: INFO: Jan 20 23:59:54.321: INFO: StatefulSet ss has not reached scale 0, at 2 Jan 20 23:59:55.329: INFO: POD NODE PHASE GRACE CONDITIONS Jan 20 23:59:55.329: INFO: ss-0 jerma-node Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:34 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:58:50 +0000 UTC }] Jan 20 23:59:55.330: INFO: ss-1 jerma-server-mvvl6gufaqub Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:35 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-01-20 23:59:13 +0000 UTC }] Jan 20 23:59:55.330: INFO: Jan 20 23:59:55.330: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespacestatefulset-8597 Jan 20 23:59:56.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 20 23:59:56.633: INFO: rc: 1 Jan 20 23:59:56.633: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: error: unable to upgrade connection: container not found ("webserver") error: exit status 1 Jan 21 00:00:06.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- 
/bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:00:06.773: INFO: rc: 1 Jan 21 00:00:06.774: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:00:16.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:00:16.961: INFO: rc: 1 Jan 21 00:00:16.961: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:00:26.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:00:27.142: INFO: rc: 1 Jan 21 00:00:27.143: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:00:37.143: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:00:37.349: INFO: rc: 1 Jan 21 00:00:37.350: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:00:47.351: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:00:47.474: INFO: rc: 1 Jan 21 00:00:47.474: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:00:57.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:00:57.669: INFO: rc: 1 Jan 21 00:00:57.670: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:01:07.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html 
/usr/local/apache2/htdocs/ || true' Jan 21 00:01:07.855: INFO: rc: 1 Jan 21 00:01:07.855: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:01:17.856: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:01:18.064: INFO: rc: 1 Jan 21 00:01:18.064: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:01:28.065: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:01:28.267: INFO: rc: 1 Jan 21 00:01:28.267: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:01:38.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:01:38.476: INFO: rc: 1 Jan 21 00:01:38.476: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:01:48.477: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:01:48.645: INFO: rc: 1 Jan 21 00:01:48.645: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:01:58.645: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:01:58.868: INFO: rc: 1 Jan 21 00:01:58.869: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:02:08.870: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 
00:02:09.027: INFO: rc: 1 Jan 21 00:02:09.028: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:02:19.029: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:02:19.171: INFO: rc: 1 Jan 21 00:02:19.171: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:02:29.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:02:29.340: INFO: rc: 1 Jan 21 00:02:29.340: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:02:39.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:02:39.529: INFO: rc: 1 Jan 21 00:02:39.530: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:02:49.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:02:49.707: INFO: rc: 1 Jan 21 00:02:49.708: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:02:59.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:02:59.881: INFO: rc: 1 Jan 21 00:02:59.882: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:03:09.883: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:03:10.088: INFO: rc: 1 Jan 21 
00:03:10.088: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:03:20.091: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:03:20.214: INFO: rc: 1 Jan 21 00:03:20.214: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:03:30.215: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:03:30.370: INFO: rc: 1 Jan 21 00:03:30.370: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:03:40.372: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:03:40.577: INFO: rc: 1 Jan 21 00:03:40.578: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:03:50.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:03:50.753: INFO: rc: 1 Jan 21 00:03:50.754: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:04:00.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:04:00.920: INFO: rc: 1 Jan 21 00:04:00.920: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:04:10.921: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:04:11.120: INFO: rc: 1 Jan 21 00:04:11.120: INFO: Waiting 10s to retry 
failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:04:21.122: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:04:21.265: INFO: rc: 1 Jan 21 00:04:21.265: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:04:31.266: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:04:31.414: INFO: rc: 1 Jan 21 00:04:31.414: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:04:41.416: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:04:41.588: INFO: rc: 1 Jan 21 00:04:41.588: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:04:51.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:04:51.778: INFO: rc: 1 Jan 21 00:04:51.779: INFO: Waiting 10s to retry failed RunHostCmd: error running /usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true: Command stdout: stderr: Error from server (NotFound): pods "ss-0" not found error: exit status 1 Jan 21 00:05:01.780: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8597 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' Jan 21 00:05:02.025: INFO: rc: 1 Jan 21 00:05:02.026: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: Jan 21 00:05:02.026: INFO: Scaling statefulset ss to 0 Jan 21 00:05:02.053: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 Jan 21 00:05:02.057: INFO: Deleting all statefulset in ns statefulset-8597 Jan 21 00:05:02.059: INFO: Scaling statefulset ss to 0 Jan 21 00:05:02.072: INFO: Waiting for statefulset status.replicas updated to 0 Jan 21 00:05:02.076: INFO: Deleting statefulset ss [AfterEach] 
[sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 21 00:05:02.144: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8597" for this suite. • [SLOW TEST:371.943 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":36,"skipped":649,"failed":0} S ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 21 00:05:02.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating pod Jan 21 00:05:10.324: INFO: Pod pod-hostip-90b42900-161b-4475-8ee8-02b1fe55504e has hostIP: 10.96.2.250 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 21 00:05:10.324: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7838" for this suite. 
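Note on the long StatefulSet run above ("Burst scaling should run to completion even with unhealthy pods"): the suite deliberately breaks readiness by moving index.html out of the httpd docroot, which makes each pod's HTTP readiness probe fail, and then checks that scaling to 0 still drains every replica. A minimal sketch of the same technique, reusing the names from this run (StatefulSet ss in namespace statefulset-8597); any StatefulSet whose readiness probe serves that file would behave the same:

    # Flip a pod to NotReady by removing the file its readiness probe depends on
    kubectl exec --namespace=statefulset-8597 ss-0 -- /bin/sh -c \
        'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'

    # Scale-down must still complete even though no replica is Ready
    kubectl scale statefulset ss --namespace=statefulset-8597 --replicas=0
    kubectl get statefulset ss --namespace=statefulset-8597 -o jsonpath='{.status.replicas}{"\n"}'

The long retry loop in the log is the flip side of the same trick: once ss-0 was deleted during the scale-down, the periodic exec kept failing (first "container not found", then pods "ss-0" not found) until the retry window closed, at which point the suite moved on to the final scale-to-0 check and teardown.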
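The Pods host-IP spec directly above asserts that status.hostIP is populated once the pod is bound to a node; the same field can be read by hand (the pod name below is the one from this run):

    # Empty until the pod is scheduled; then the IP of the node it landed on
    kubectl get pod pod-hostip-90b42900-161b-4475-8ee8-02b1fe55504e \
        --namespace=pods-7838 -o jsonpath='{.status.hostIP}{"\n"}'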
• [SLOW TEST:8.188 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":650,"failed":0} SSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 21 00:05:10.345: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: creating the pod Jan 21 00:05:10.482: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 21 00:05:23.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6854" for this suite. 
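The InitContainer spec that just finished leans on a core lifecycle rule: with restartPolicy: Never, a failing init container is not retried, the pod goes straight to phase Failed, and the app containers never start. A self-contained sketch of that behaviour, assuming a pullable busybox image (all names here are illustrative, not from the suite):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fails-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init1
        image: busybox
        command: ["sh", "-c", "exit 1"]   # fails once and is never retried
      containers:
      - name: app
        image: busybox
        command: ["sh", "-c", "echo the app container should never start"]
    EOF
    # Expect "Failed"; the app container stays stuck in PodInitializing
    kubectl get pod init-fails-demo -o jsonpath='{.status.phase}{"\n"}'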
• [SLOW TEST:13.110 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":38,"skipped":656,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 21 00:05:23.455: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test emptydir 0644 on tmpfs Jan 21 00:05:23.620: INFO: Waiting up to 5m0s for pod "pod-17bc7146-fc47-415c-b8dd-8aec3915b0d8" in namespace "emptydir-383" to be "success or failure" Jan 21 00:05:23.631: INFO: Pod "pod-17bc7146-fc47-415c-b8dd-8aec3915b0d8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.964799ms Jan 21 00:05:25.639: INFO: Pod "pod-17bc7146-fc47-415c-b8dd-8aec3915b0d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018252517s Jan 21 00:05:27.646: INFO: Pod "pod-17bc7146-fc47-415c-b8dd-8aec3915b0d8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.025195208s Jan 21 00:05:29.662: INFO: Pod "pod-17bc7146-fc47-415c-b8dd-8aec3915b0d8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041539476s Jan 21 00:05:31.742: INFO: Pod "pod-17bc7146-fc47-415c-b8dd-8aec3915b0d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.121450277s STEP: Saw pod success Jan 21 00:05:31.742: INFO: Pod "pod-17bc7146-fc47-415c-b8dd-8aec3915b0d8" satisfied condition "success or failure" Jan 21 00:05:31.746: INFO: Trying to get logs from node jerma-node pod pod-17bc7146-fc47-415c-b8dd-8aec3915b0d8 container test-container: STEP: delete the pod Jan 21 00:05:32.063: INFO: Waiting for pod pod-17bc7146-fc47-415c-b8dd-8aec3915b0d8 to disappear Jan 21 00:05:32.068: INFO: Pod pod-17bc7146-fc47-415c-b8dd-8aec3915b0d8 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 21 00:05:32.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-383" for this suite. 
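The EmptyDir case above is one cell of a mode/user/medium matrix: a memory-backed (tmpfs) emptyDir, written as a non-root user, with the file created at mode 0644 and verified. A rough hand-run equivalent; busybox and the names below are assumptions, since the conformance suite uses its own mounttest image for this:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001                   # the "non-root" part of the matrix
      containers:
      - name: writer
        image: busybox
        command: ["sh", "-c", "umask 0022 && touch /mnt/f && stat -c %a /mnt/f && mount | grep ' /mnt '"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory                  # tmpfs-backed, the "(tmpfs)" part
    EOF
    kubectl logs emptydir-tmpfs-demo      # expect "644" and a tmpfs mount line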
• [SLOW TEST:8.630 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":659,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 21 00:05:32.085: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating a pod to test downward api env vars Jan 21 00:05:32.234: INFO: Waiting up to 5m0s for pod "downward-api-b97aaa00-d471-482b-8709-3acff4803a76" in namespace "downward-api-2407" to be "success or failure" Jan 21 00:05:32.331: INFO: Pod "downward-api-b97aaa00-d471-482b-8709-3acff4803a76": Phase="Pending", Reason="", readiness=false. Elapsed: 96.953592ms Jan 21 00:05:34.340: INFO: Pod "downward-api-b97aaa00-d471-482b-8709-3acff4803a76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.105170106s Jan 21 00:05:36.347: INFO: Pod "downward-api-b97aaa00-d471-482b-8709-3acff4803a76": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112257631s Jan 21 00:05:38.357: INFO: Pod "downward-api-b97aaa00-d471-482b-8709-3acff4803a76": Phase="Pending", Reason="", readiness=false. Elapsed: 6.122529948s Jan 21 00:05:40.372: INFO: Pod "downward-api-b97aaa00-d471-482b-8709-3acff4803a76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.137752832s STEP: Saw pod success Jan 21 00:05:40.373: INFO: Pod "downward-api-b97aaa00-d471-482b-8709-3acff4803a76" satisfied condition "success or failure" Jan 21 00:05:40.380: INFO: Trying to get logs from node jerma-node pod downward-api-b97aaa00-d471-482b-8709-3acff4803a76 container dapi-container: STEP: delete the pod Jan 21 00:05:40.427: INFO: Waiting for pod downward-api-b97aaa00-d471-482b-8709-3acff4803a76 to disappear Jan 21 00:05:40.434: INFO: Pod downward-api-b97aaa00-d471-482b-8709-3acff4803a76 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 21 00:05:40.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2407" for this suite. 
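The Downward API spec above injects the pod's own UID through env.valueFrom.fieldRef; a minimal version of the same wiring (names and busybox image are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-uid-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi
        image: busybox
        command: ["sh", "-c", "echo POD_UID=$POD_UID"]
        env:
        - name: POD_UID
          valueFrom:
            fieldRef:
              fieldPath: metadata.uid     # the field this conformance spec checks
    EOF
    kubectl logs downward-uid-demo        # prints POD_UID=<uid> once the pod completes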
• [SLOW TEST:8.366 seconds] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33 should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":663,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 21 00:05:40.452: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name secret-test-c80f5c7f-ff0a-43e5-8fdf-e83c546fe90c STEP: Creating a pod to test consume secrets Jan 21 00:05:40.577: INFO: Waiting up to 5m0s for pod "pod-secrets-208b95b7-bd00-40f1-9e7f-c9367d301f70" in namespace "secrets-8170" to be "success or failure" Jan 21 00:05:40.596: INFO: Pod "pod-secrets-208b95b7-bd00-40f1-9e7f-c9367d301f70": Phase="Pending", Reason="", readiness=false. Elapsed: 18.868744ms Jan 21 00:05:42.605: INFO: Pod "pod-secrets-208b95b7-bd00-40f1-9e7f-c9367d301f70": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027649523s Jan 21 00:05:44.615: INFO: Pod "pod-secrets-208b95b7-bd00-40f1-9e7f-c9367d301f70": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037141216s Jan 21 00:05:46.620: INFO: Pod "pod-secrets-208b95b7-bd00-40f1-9e7f-c9367d301f70": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042705471s Jan 21 00:05:48.633: INFO: Pod "pod-secrets-208b95b7-bd00-40f1-9e7f-c9367d301f70": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055080176s STEP: Saw pod success Jan 21 00:05:48.633: INFO: Pod "pod-secrets-208b95b7-bd00-40f1-9e7f-c9367d301f70" satisfied condition "success or failure" Jan 21 00:05:48.638: INFO: Trying to get logs from node jerma-node pod pod-secrets-208b95b7-bd00-40f1-9e7f-c9367d301f70 container secret-volume-test: STEP: delete the pod Jan 21 00:05:48.688: INFO: Waiting for pod pod-secrets-208b95b7-bd00-40f1-9e7f-c9367d301f70 to disappear Jan 21 00:05:48.692: INFO: Pod pod-secrets-208b95b7-bd00-40f1-9e7f-c9367d301f70 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 21 00:05:48.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8170" for this suite. 
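In the Secrets defaultMode spec above, the mode is set once on the volume source and applied to every projected key (0644 is the default when the field is omitted). A sketch with illustrative names:

    kubectl create secret generic mode-demo --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-mode-demo
    spec:
      restartPolicy: Never
      containers:
      - name: reader
        image: busybox
        command: ["sh", "-c", "ls -lL /etc/secret-volume"]   # -L follows the ..data symlinks
        volumeMounts:
        - name: secret-volume
          mountPath: /etc/secret-volume
          readOnly: true
      volumes:
      - name: secret-volume
        secret:
          secretName: mode-demo
          defaultMode: 0400               # expect -r-------- on data-1
    EOF
    kubectl logs secret-mode-demo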
• [SLOW TEST:8.249 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":41,"skipped":670,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 21 00:05:48.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1465 STEP: creating a pod Jan 21 00:05:48.829: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-1183 -- logs-generator --log-lines-total 100 --run-duration 20s' Jan 21 00:05:49.016: INFO: stderr: "" Jan 21 00:05:49.017: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Waiting for log generator to start. Jan 21 00:05:49.017: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] Jan 21 00:05:49.017: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-1183" to be "running and ready, or succeeded" Jan 21 00:05:49.033: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 15.533707ms Jan 21 00:05:51.052: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034635427s Jan 21 00:05:53.072: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054840181s Jan 21 00:05:55.082: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064654785s Jan 21 00:05:57.093: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.075980518s Jan 21 00:05:57.094: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" Jan 21 00:05:57.094: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true.
Pods: [logs-generator] STEP: checking for a matching strings Jan 21 00:05:57.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1183' Jan 21 00:05:57.308: INFO: stderr: "" Jan 21 00:05:57.308: INFO: stdout: "I0121 00:05:54.929160 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/52v 316\nI0121 00:05:55.129588 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/x2l2 320\nI0121 00:05:55.329658 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/4wj 487\nI0121 00:05:55.529470 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/mmb 434\nI0121 00:05:55.729487 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/vgx 378\nI0121 00:05:55.929852 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/lfdk 553\nI0121 00:05:56.129578 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/hvc 255\nI0121 00:05:56.329527 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/d7fs 298\nI0121 00:05:56.529707 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/clq 566\nI0121 00:05:56.729939 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/bgjk 222\nI0121 00:05:56.929480 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/8fv6 416\nI0121 00:05:57.129740 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/m7z 326\n" STEP: limiting log lines Jan 21 00:05:57.309: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1183 --tail=1' Jan 21 00:05:57.510: INFO: stderr: "" Jan 21 00:05:57.510: INFO: stdout: "I0121 00:05:57.329440 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/r4m5 263\n" Jan 21 00:05:57.511: INFO: got output "I0121 00:05:57.329440 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/r4m5 263\n" STEP: limiting log bytes Jan 21 00:05:57.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1183 --limit-bytes=1' Jan 21 00:05:57.614: INFO: stderr: "" Jan 21 00:05:57.614: INFO: stdout: "I" Jan 21 00:05:57.614: INFO: got output "I" STEP: exposing timestamps Jan 21 00:05:57.614: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1183 --tail=1 --timestamps' Jan 21 00:05:57.706: INFO: stderr: "" Jan 21 00:05:57.706: INFO: stdout: "2020-01-21T00:05:57.529892137Z I0121 00:05:57.529418 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/7q6 578\n" Jan 21 00:05:57.706: INFO: got output "2020-01-21T00:05:57.529892137Z I0121 00:05:57.529418 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/7q6 578\n" STEP: restricting to a time range Jan 21 00:06:00.207: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1183 --since=1s' Jan 21 00:06:00.413: INFO: stderr: "" Jan 21 00:06:00.413: INFO: stdout: "I0121 00:05:59.529641 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/426 362\nI0121 00:05:59.729443 1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/zm8g 263\nI0121 00:05:59.929485 1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/stw 428\nI0121 00:06:00.129664 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/zgf7 370\nI0121 00:06:00.329448 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/kube-system/pods/7qcn 
225\n" Jan 21 00:06:00.414: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-1183 --since=24h' Jan 21 00:06:00.633: INFO: stderr: "" Jan 21 00:06:00.633: INFO: stdout: "I0121 00:05:54.929160 1 logs_generator.go:76] 0 GET /api/v1/namespaces/kube-system/pods/52v 316\nI0121 00:05:55.129588 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/kube-system/pods/x2l2 320\nI0121 00:05:55.329658 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/4wj 487\nI0121 00:05:55.529470 1 logs_generator.go:76] 3 GET /api/v1/namespaces/ns/pods/mmb 434\nI0121 00:05:55.729487 1 logs_generator.go:76] 4 PUT /api/v1/namespaces/default/pods/vgx 378\nI0121 00:05:55.929852 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/lfdk 553\nI0121 00:05:56.129578 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/hvc 255\nI0121 00:05:56.329527 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/default/pods/d7fs 298\nI0121 00:05:56.529707 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/clq 566\nI0121 00:05:56.729939 1 logs_generator.go:76] 9 POST /api/v1/namespaces/kube-system/pods/bgjk 222\nI0121 00:05:56.929480 1 logs_generator.go:76] 10 GET /api/v1/namespaces/kube-system/pods/8fv6 416\nI0121 00:05:57.129740 1 logs_generator.go:76] 11 POST /api/v1/namespaces/kube-system/pods/m7z 326\nI0121 00:05:57.329440 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/kube-system/pods/r4m5 263\nI0121 00:05:57.529418 1 logs_generator.go:76] 13 GET /api/v1/namespaces/default/pods/7q6 578\nI0121 00:05:57.729523 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/rzh 292\nI0121 00:05:57.929726 1 logs_generator.go:76] 15 POST /api/v1/namespaces/kube-system/pods/vnh9 477\nI0121 00:05:58.129508 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/b2m 571\nI0121 00:05:58.329568 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/jqqm 344\nI0121 00:05:58.529841 1 logs_generator.go:76] 18 GET /api/v1/namespaces/kube-system/pods/lctp 280\nI0121 00:05:58.729814 1 logs_generator.go:76] 19 GET /api/v1/namespaces/ns/pods/r8xq 408\nI0121 00:05:58.929620 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/fdj 503\nI0121 00:05:59.129705 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/zps 225\nI0121 00:05:59.329620 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/slz 378\nI0121 00:05:59.529641 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/426 362\nI0121 00:05:59.729443 1 logs_generator.go:76] 24 POST /api/v1/namespaces/kube-system/pods/zm8g 263\nI0121 00:05:59.929485 1 logs_generator.go:76] 25 GET /api/v1/namespaces/ns/pods/stw 428\nI0121 00:06:00.129664 1 logs_generator.go:76] 26 PUT /api/v1/namespaces/ns/pods/zgf7 370\nI0121 00:06:00.329448 1 logs_generator.go:76] 27 PUT /api/v1/namespaces/kube-system/pods/7qcn 225\nI0121 00:06:00.529728 1 logs_generator.go:76] 28 PUT /api/v1/namespaces/ns/pods/hlh 374\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1471 Jan 21 00:06:00.634: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-1183' Jan 21 00:06:12.366: INFO: stderr: "" Jan 21 00:06:12.367: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 21 00:06:12.368: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1183" for this suite. • [SLOW TEST:23.768 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1461 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":42,"skipped":694,"failed":0} S ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174 STEP: Creating a kubernetes client Jan 21 00:06:12.471: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685 STEP: Creating secret with name s-test-opt-del-2eb4cd79-57ab-41dd-8d8f-8ca76312e69f STEP: Creating secret with name s-test-opt-upd-2ad4e1df-ba6c-4a99-b87a-e2be8cc0ba13 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-2eb4cd79-57ab-41dd-8d8f-8ca76312e69f STEP: Updating secret s-test-opt-upd-2ad4e1df-ba6c-4a99-b87a-e2be8cc0ba13 STEP: Creating secret with name s-test-opt-create-8897f52c-f4d2-43ca-ab98-51982da54984 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175 Jan 21 00:07:41.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-906" for this suite. 
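For reference, the filters exercised by this test are plain kubectl flags; a minimal sketch against the pod created above (pod, container, and namespace names taken from this run):

kubectl logs logs-generator logs-generator -n kubectl-1183                  # full stream (pod name, then container name)
kubectl logs logs-generator logs-generator -n kubectl-1183 --tail=1         # last line only
kubectl logs logs-generator logs-generator -n kubectl-1183 --limit-bytes=1  # truncate after one byte
kubectl logs logs-generator logs-generator -n kubectl-1183 --timestamps     # prefix each line with an RFC3339 timestamp
kubectl logs logs-generator logs-generator -n kubectl-1183 --since=1s       # only entries from the last second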
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:06:12.471: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name s-test-opt-del-2eb4cd79-57ab-41dd-8d8f-8ca76312e69f
STEP: Creating secret with name s-test-opt-upd-2ad4e1df-ba6c-4a99-b87a-e2be8cc0ba13
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-2eb4cd79-57ab-41dd-8d8f-8ca76312e69f
STEP: Updating secret s-test-opt-upd-2ad4e1df-ba6c-4a99-b87a-e2be8cc0ba13
STEP: Creating secret with name s-test-opt-create-8897f52c-f4d2-43ca-ab98-51982da54984
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:07:41.853: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-906" for this suite.

• [SLOW TEST:89.411 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":695,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:07:41.884: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:07:42.065: INFO: (0) /api/v1/nodes/jerma-node/proxy/logs/:
alternatives.log
apt/
... (200; 12.384552ms)
Jan 21 00:07:42.069: INFO: (1) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.410927ms)
Jan 21 00:07:42.074: INFO: (2) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.385072ms)
Jan 21 00:07:42.078: INFO: (3) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.758634ms)
Jan 21 00:07:42.084: INFO: (4) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 5.444227ms)
Jan 21 00:07:42.089: INFO: (5) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 5.147778ms)
Jan 21 00:07:42.098: INFO: (6) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 8.69101ms)
Jan 21 00:07:42.105: INFO: (7) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 6.838925ms)
Jan 21 00:07:42.121: INFO: (8) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 16.457943ms)
Jan 21 00:07:42.150: INFO: (9) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 27.965921ms)
Jan 21 00:07:42.197: INFO: (10) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 46.56377ms)
Jan 21 00:07:42.203: INFO: (11) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 5.789411ms)
Jan 21 00:07:42.207: INFO: (12) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 4.03453ms)
Jan 21 00:07:42.210: INFO: (13) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.518251ms)
Jan 21 00:07:42.213: INFO: (14) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 2.899386ms)
Jan 21 00:07:42.217: INFO: (15) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.420996ms)
Jan 21 00:07:42.220: INFO: (16) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.042219ms)
Jan 21 00:07:42.224: INFO: (17) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.807137ms)
Jan 21 00:07:42.227: INFO: (18) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 2.977544ms)
Jan 21 00:07:42.231: INFO: (19) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
apt/
... (200; 3.810402ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:07:42.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-1855" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource  [Conformance]","total":278,"completed":44,"skipped":725,"failed":0}
SSSSSSSSSSSSSSSSS
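The twenty numbered requests above each fetch the node's log directory through the API server's node proxy subresource and expect an HTTP 200. A minimal sketch of the same request from a workstation (node name taken from this run):

# Directory listing of the node's logs, served through the apiserver proxy
kubectl get --raw /api/v1/nodes/jerma-node/proxy/logs/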
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:07:42.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test hostPath mode
Jan 21 00:07:42.414: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2636" to be "success or failure"
Jan 21 00:07:42.425: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.907915ms
Jan 21 00:07:44.431: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016987606s
Jan 21 00:07:46.438: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023881509s
Jan 21 00:07:48.447: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033215772s
Jan 21 00:07:50.468: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.054381038s
Jan 21 00:07:53.228: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.814141977s
Jan 21 00:07:55.244: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.830574538s
STEP: Saw pod success
Jan 21 00:07:55.244: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Jan 21 00:07:55.248: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Jan 21 00:07:55.326: INFO: Waiting for pod pod-host-path-test to disappear
Jan 21 00:07:55.395: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:07:55.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-2636" for this suite.

• [SLOW TEST:13.175 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":742,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
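pod-host-path-test mounts a hostPath volume and verifies the mode bits its containers observe. The suite builds the pod spec in Go; a rough hand-written equivalent, with hypothetical pod/path names and a busybox image assumed to be pullable:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-mode-demo   # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: checker
    image: busybox
    command: ["sh", "-c", "stat -c '%a' /test-volume"]   # print the octal mode
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    hostPath:
      path: /tmp/hostpath-demo
      type: DirectoryOrCreate
EOF
kubectl logs hostpath-mode-demo   # the mode the container saw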
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:07:55.421: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3739.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-3739.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3739.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-3739.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3739.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3739.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3739.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-3739.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3739.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-3739.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3739.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 35.136.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.136.35_udp@PTR;check="$$(dig +tcp +noall +answer +search 35.136.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.136.35_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-3739.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-3739.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-3739.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-3739.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-3739.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-3739.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-3739.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-3739.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-3739.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-3739.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3739.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 35.136.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.136.35_udp@PTR;check="$$(dig +tcp +noall +answer +search 35.136.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.136.35_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 21 00:08:05.850: INFO: Unable to read wheezy_udp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:05.857: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:05.861: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:05.867: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:05.897: INFO: Unable to read jessie_udp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:05.900: INFO: Unable to read jessie_tcp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:05.905: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:05.909: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:05.936: INFO: Lookups using dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1 failed for: [wheezy_udp@dns-test-service.dns-3739.svc.cluster.local wheezy_tcp@dns-test-service.dns-3739.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local jessie_udp@dns-test-service.dns-3739.svc.cluster.local jessie_tcp@dns-test-service.dns-3739.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local]

Jan 21 00:08:10.947: INFO: Unable to read wheezy_udp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:10.954: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:10.962: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:10.966: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:10.989: INFO: Unable to read jessie_udp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:10.992: INFO: Unable to read jessie_tcp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:10.994: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:10.997: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:11.010: INFO: Lookups using dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1 failed for: [wheezy_udp@dns-test-service.dns-3739.svc.cluster.local wheezy_tcp@dns-test-service.dns-3739.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local jessie_udp@dns-test-service.dns-3739.svc.cluster.local jessie_tcp@dns-test-service.dns-3739.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local]

Jan 21 00:08:15.945: INFO: Unable to read wheezy_udp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:15.952: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:15.957: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:15.961: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:15.988: INFO: Unable to read jessie_udp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:15.991: INFO: Unable to read jessie_tcp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:15.993: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:15.995: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:16.012: INFO: Lookups using dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1 failed for: [wheezy_udp@dns-test-service.dns-3739.svc.cluster.local wheezy_tcp@dns-test-service.dns-3739.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local jessie_udp@dns-test-service.dns-3739.svc.cluster.local jessie_tcp@dns-test-service.dns-3739.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local]

Jan 21 00:08:20.944: INFO: Unable to read wheezy_udp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:20.948: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:20.968: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:20.971: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:20.993: INFO: Unable to read jessie_udp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:20.996: INFO: Unable to read jessie_tcp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:20.999: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:21.002: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:21.018: INFO: Lookups using dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1 failed for: [wheezy_udp@dns-test-service.dns-3739.svc.cluster.local wheezy_tcp@dns-test-service.dns-3739.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local jessie_udp@dns-test-service.dns-3739.svc.cluster.local jessie_tcp@dns-test-service.dns-3739.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local]

Jan 21 00:08:25.950: INFO: Unable to read wheezy_udp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:25.956: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:25.961: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:25.966: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:25.999: INFO: Unable to read jessie_udp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:26.002: INFO: Unable to read jessie_tcp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:26.006: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:26.010: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:26.028: INFO: Lookups using dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1 failed for: [wheezy_udp@dns-test-service.dns-3739.svc.cluster.local wheezy_tcp@dns-test-service.dns-3739.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local jessie_udp@dns-test-service.dns-3739.svc.cluster.local jessie_tcp@dns-test-service.dns-3739.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local]

Jan 21 00:08:30.948: INFO: Unable to read wheezy_udp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:30.954: INFO: Unable to read wheezy_tcp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:30.968: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:30.979: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:31.021: INFO: Unable to read jessie_udp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:31.023: INFO: Unable to read jessie_tcp@dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:31.025: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:31.029: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local from pod dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1: the server could not find the requested resource (get pods dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1)
Jan 21 00:08:31.048: INFO: Lookups using dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1 failed for: [wheezy_udp@dns-test-service.dns-3739.svc.cluster.local wheezy_tcp@dns-test-service.dns-3739.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local jessie_udp@dns-test-service.dns-3739.svc.cluster.local jessie_tcp@dns-test-service.dns-3739.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-3739.svc.cluster.local]

Jan 21 00:08:36.040: INFO: DNS probes using dns-3739/dns-test-893d36eb-ef46-4222-9dd4-57e6267beab1 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:08:36.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-3739" for this suite.

• [SLOW TEST:41.035 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for services  [Conformance]","total":278,"completed":46,"skipped":779,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
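The wheezy and jessie probe pods loop over dig queries until every A, SRV, and PTR lookup answers; the transient "Unable to read" entries above are just polls that ran before the records propagated. Condensed to single checks, runnable from any pod with dig available (service name, namespace, and ClusterIP taken from this run):

dig +short dns-test-service.dns-3739.svc.cluster.local A                # service A record
dig +short _http._tcp.dns-test-service.dns-3739.svc.cluster.local SRV  # named-port SRV record
dig +short -x 10.96.136.35                                             # reverse PTR for the ClusterIP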
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:08:36.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-04751fc6-96d5-4176-83a1-b6d37cfc6993
STEP: Creating a pod to test consume configMaps
Jan 21 00:08:36.781: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7ab8e775-9086-4681-a875-b92939f386aa" in namespace "projected-5064" to be "success or failure"
Jan 21 00:08:36.851: INFO: Pod "pod-projected-configmaps-7ab8e775-9086-4681-a875-b92939f386aa": Phase="Pending", Reason="", readiness=false. Elapsed: 70.12399ms
Jan 21 00:08:38.864: INFO: Pod "pod-projected-configmaps-7ab8e775-9086-4681-a875-b92939f386aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.083021709s
Jan 21 00:08:40.874: INFO: Pod "pod-projected-configmaps-7ab8e775-9086-4681-a875-b92939f386aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.092911406s
Jan 21 00:08:42.882: INFO: Pod "pod-projected-configmaps-7ab8e775-9086-4681-a875-b92939f386aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100662829s
Jan 21 00:08:44.980: INFO: Pod "pod-projected-configmaps-7ab8e775-9086-4681-a875-b92939f386aa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.199078632s
Jan 21 00:08:46.988: INFO: Pod "pod-projected-configmaps-7ab8e775-9086-4681-a875-b92939f386aa": Phase="Pending", Reason="", readiness=false. Elapsed: 10.206840607s
Jan 21 00:08:48.998: INFO: Pod "pod-projected-configmaps-7ab8e775-9086-4681-a875-b92939f386aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.216536308s
STEP: Saw pod success
Jan 21 00:08:48.998: INFO: Pod "pod-projected-configmaps-7ab8e775-9086-4681-a875-b92939f386aa" satisfied condition "success or failure"
Jan 21 00:08:49.003: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-7ab8e775-9086-4681-a875-b92939f386aa container projected-configmap-volume-test: 
STEP: delete the pod
Jan 21 00:08:49.057: INFO: Waiting for pod pod-projected-configmaps-7ab8e775-9086-4681-a875-b92939f386aa to disappear
Jan 21 00:08:49.069: INFO: Pod pod-projected-configmaps-7ab8e775-9086-4681-a875-b92939f386aa no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:08:49.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5064" for this suite.

• [SLOW TEST:12.640 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":821,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
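The test projects a configMap into a volume with a non-default file mode and has the pod verify both content and permissions. A hand-rolled sketch of the same idea (all names hypothetical):

kubectl create configmap demo-config --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: checker
    image: busybox
    command: ["sh", "-c", "ls -l /etc/projected && cat /etc/projected/data-1"]
    volumeMounts:
    - name: cfg
      mountPath: /etc/projected
  volumes:
  - name: cfg
    projected:
      defaultMode: 0400   # files appear read-only to the owner
      sources:
      - configMap:
          name: demo-config
EOF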
------------------------------
[sig-cli] Kubectl client Kubectl run default 
  should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:08:49.098: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1597
[It] should create an rc or deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 21 00:08:49.192: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7419'
Jan 21 00:08:49.402: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 21 00:08:49.402: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1603
Jan 21 00:08:51.512: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-7419'
Jan 21 00:08:51.704: INFO: stderr: ""
Jan 21 00:08:51.704: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:08:51.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7419" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image  [Conformance]","total":278,"completed":48,"skipped":854,"failed":0}
SSSSSSSSSSSSSSSSSS
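The stderr captured above is the point of the test: without an explicit --generator, kubectl run on this version falls back to the deprecated deployment/apps.v1 generator. Side by side, the form the test ran and the replacements the warning suggests:

# What the test ran (emits the deprecation warning, creates a Deployment):
kubectl run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine
# Recommended replacements:
kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine
kubectl run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine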
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:08:51.761: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-ebe37e2a-21e6-4a4f-8b09-662ffe66cee2
STEP: Creating a pod to test consume secrets
Jan 21 00:08:51.927: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-5eda3e78-2946-42fb-818e-fd7dbd494bc2" in namespace "projected-5900" to be "success or failure"
Jan 21 00:08:51.936: INFO: Pod "pod-projected-secrets-5eda3e78-2946-42fb-818e-fd7dbd494bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.868533ms
Jan 21 00:08:53.946: INFO: Pod "pod-projected-secrets-5eda3e78-2946-42fb-818e-fd7dbd494bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019453876s
Jan 21 00:08:55.956: INFO: Pod "pod-projected-secrets-5eda3e78-2946-42fb-818e-fd7dbd494bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0286859s
Jan 21 00:08:57.964: INFO: Pod "pod-projected-secrets-5eda3e78-2946-42fb-818e-fd7dbd494bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037572214s
Jan 21 00:08:59.975: INFO: Pod "pod-projected-secrets-5eda3e78-2946-42fb-818e-fd7dbd494bc2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.047667646s
Jan 21 00:09:01.984: INFO: Pod "pod-projected-secrets-5eda3e78-2946-42fb-818e-fd7dbd494bc2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.056976266s
STEP: Saw pod success
Jan 21 00:09:01.984: INFO: Pod "pod-projected-secrets-5eda3e78-2946-42fb-818e-fd7dbd494bc2" satisfied condition "success or failure"
Jan 21 00:09:01.988: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-5eda3e78-2946-42fb-818e-fd7dbd494bc2 container projected-secret-volume-test: 
STEP: delete the pod
Jan 21 00:09:02.068: INFO: Waiting for pod pod-projected-secrets-5eda3e78-2946-42fb-818e-fd7dbd494bc2 to disappear
Jan 21 00:09:02.094: INFO: Pod pod-projected-secrets-5eda3e78-2946-42fb-818e-fd7dbd494bc2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:09:02.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5900" for this suite.

• [SLOW TEST:10.407 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":872,"failed":0}
SSSSSSSSSSSS
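Here the projected secret volume is combined with a non-root pod security context, so ownership and mode of the projected files follow fsGroup. A sketch with illustrative UID/GID values (all names hypothetical):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000   # non-root user
    fsGroup: 2000     # group applied to the volume's files
  containers:
  - name: checker
    image: busybox
    command: ["sh", "-c", "ls -ln /etc/secret-volume"]   # numeric owner/group and mode
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440
      sources:
      - secret:
          name: demo-secret
EOF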
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:09:02.170: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 21 00:09:02.237: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1739fbc6-e049-4a25-9e68-d687951a069a" in namespace "downward-api-423" to be "success or failure"
Jan 21 00:09:02.244: INFO: Pod "downwardapi-volume-1739fbc6-e049-4a25-9e68-d687951a069a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.592214ms
Jan 21 00:09:04.253: INFO: Pod "downwardapi-volume-1739fbc6-e049-4a25-9e68-d687951a069a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016190539s
Jan 21 00:09:06.260: INFO: Pod "downwardapi-volume-1739fbc6-e049-4a25-9e68-d687951a069a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022987643s
Jan 21 00:09:08.279: INFO: Pod "downwardapi-volume-1739fbc6-e049-4a25-9e68-d687951a069a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041350737s
Jan 21 00:09:10.286: INFO: Pod "downwardapi-volume-1739fbc6-e049-4a25-9e68-d687951a069a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.048517879s
STEP: Saw pod success
Jan 21 00:09:10.286: INFO: Pod "downwardapi-volume-1739fbc6-e049-4a25-9e68-d687951a069a" satisfied condition "success or failure"
Jan 21 00:09:10.292: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-1739fbc6-e049-4a25-9e68-d687951a069a container client-container: 
STEP: delete the pod
Jan 21 00:09:10.467: INFO: Waiting for pod downwardapi-volume-1739fbc6-e049-4a25-9e68-d687951a069a to disappear
Jan 21 00:09:10.508: INFO: Pod downwardapi-volume-1739fbc6-e049-4a25-9e68-d687951a069a no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:09:10.509: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-423" for this suite.

• [SLOW TEST:8.358 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":50,"skipped":884,"failed":0}
SS
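The downward API volume plugin exposes the container's own resource limits as files inside the pod. A minimal sketch of the mechanism this test exercises (names and values hypothetical):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m   # with the limit above, the file reads "500"
EOF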
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:09:10.531: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-7qtd
STEP: Creating a pod to test atomic-volume-subpath
Jan 21 00:09:10.894: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7qtd" in namespace "subpath-3132" to be "success or failure"
Jan 21 00:09:10.938: INFO: Pod "pod-subpath-test-configmap-7qtd": Phase="Pending", Reason="", readiness=false. Elapsed: 43.746115ms
Jan 21 00:09:12.950: INFO: Pod "pod-subpath-test-configmap-7qtd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05566883s
Jan 21 00:09:14.961: INFO: Pod "pod-subpath-test-configmap-7qtd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067192785s
Jan 21 00:09:16.967: INFO: Pod "pod-subpath-test-configmap-7qtd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.073091245s
Jan 21 00:09:18.974: INFO: Pod "pod-subpath-test-configmap-7qtd": Phase="Pending", Reason="", readiness=false. Elapsed: 8.08000952s
Jan 21 00:09:20.980: INFO: Pod "pod-subpath-test-configmap-7qtd": Phase="Running", Reason="", readiness=true. Elapsed: 10.085732975s
Jan 21 00:09:22.987: INFO: Pod "pod-subpath-test-configmap-7qtd": Phase="Running", Reason="", readiness=true. Elapsed: 12.092803876s
Jan 21 00:09:24.995: INFO: Pod "pod-subpath-test-configmap-7qtd": Phase="Running", Reason="", readiness=true. Elapsed: 14.100701729s
Jan 21 00:09:27.002: INFO: Pod "pod-subpath-test-configmap-7qtd": Phase="Running", Reason="", readiness=true. Elapsed: 16.107810803s
Jan 21 00:09:29.008: INFO: Pod "pod-subpath-test-configmap-7qtd": Phase="Running", Reason="", readiness=true. Elapsed: 18.114424205s
Jan 21 00:09:31.013: INFO: Pod "pod-subpath-test-configmap-7qtd": Phase="Running", Reason="", readiness=true. Elapsed: 20.119493649s
Jan 21 00:09:33.019: INFO: Pod "pod-subpath-test-configmap-7qtd": Phase="Running", Reason="", readiness=true. Elapsed: 22.125079154s
Jan 21 00:09:35.026: INFO: Pod "pod-subpath-test-configmap-7qtd": Phase="Running", Reason="", readiness=true. Elapsed: 24.131960663s
Jan 21 00:09:37.035: INFO: Pod "pod-subpath-test-configmap-7qtd": Phase="Running", Reason="", readiness=true. Elapsed: 26.140949771s
Jan 21 00:09:39.040: INFO: Pod "pod-subpath-test-configmap-7qtd": Phase="Running", Reason="", readiness=true. Elapsed: 28.145806444s
Jan 21 00:09:41.049: INFO: Pod "pod-subpath-test-configmap-7qtd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.155224485s
STEP: Saw pod success
Jan 21 00:09:41.049: INFO: Pod "pod-subpath-test-configmap-7qtd" satisfied condition "success or failure"
Jan 21 00:09:41.053: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-7qtd container test-container-subpath-configmap-7qtd: 
STEP: delete the pod
Jan 21 00:09:41.229: INFO: Waiting for pod pod-subpath-test-configmap-7qtd to disappear
Jan 21 00:09:41.241: INFO: Pod pod-subpath-test-configmap-7qtd no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7qtd
Jan 21 00:09:41.242: INFO: Deleting pod "pod-subpath-test-configmap-7qtd" in namespace "subpath-3132"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:09:41.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-3132" for this suite.

• [SLOW TEST:30.723 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":51,"skipped":886,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
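An atomic-writer volume (here a configMap) mounted via subPath exposes a single key as a regular file. A sketch of the mount shape being validated (names hypothetical); note that, unlike a whole-volume configMap mount, a subPath-mounted file does not receive live updates when the configMap changes:

kubectl create configmap subpath-demo --from-literal=config.txt='hello'
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo-pod
spec:
  restartPolicy: Never
  containers:
  - name: checker
    image: busybox
    command: ["sh", "-c", "cat /etc/app/config.txt"]
    volumeMounts:
    - name: config
      mountPath: /etc/app/config.txt
      subPath: config.txt   # mount one key as a single file
  volumes:
  - name: config
    configMap:
      name: subpath-demo
EOF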
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:09:41.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: set up a multi version CRD
Jan 21 00:09:41.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:10:02.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5382" for this suite.

• [SLOW TEST:21.191 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":52,"skipped":921,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
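After the rename, the served versions and the published spec can be checked by hand; the CRD name is generated per run, so a sketch that lists them all:

# Versions each CRD serves (the renamed version should be listed, the old one gone):
kubectl get crd -o custom-columns=NAME:.metadata.name,VERSIONS:.spec.versions[*].name
# The published OpenAPI document the test inspects:
kubectl get --raw /openapi/v2 > swagger.json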
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:10:02.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 21 00:10:11.207: INFO: Successfully updated pod "pod-update-f9f9cd02-5a44-430a-a810-e9e267fd371b"
STEP: verifying the updated pod is in kubernetes
Jan 21 00:10:11.252: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:10:11.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-8188" for this suite.

• [SLOW TEST:8.812 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":968,"failed":0}
S
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:10:11.262: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating api versions
Jan 21 00:10:11.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Jan 21 00:10:11.565: INFO: stderr: ""
Jan 21 00:10:11.565: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:10:11.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7409" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions  [Conformance]","total":278,"completed":54,"skipped":969,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:10:11.588: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Jan 21 00:10:11.744: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7646 /api/v1/namespaces/watch-7646/configmaps/e2e-watch-test-configmap-a a4d2a532-279f-40eb-872e-c5d8a2d9cae3 3290152 0 2020-01-21 00:10:11 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 21 00:10:11.745: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7646 /api/v1/namespaces/watch-7646/configmaps/e2e-watch-test-configmap-a a4d2a532-279f-40eb-872e-c5d8a2d9cae3 3290152 0 2020-01-21 00:10:11 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Jan 21 00:10:21.817: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7646 /api/v1/namespaces/watch-7646/configmaps/e2e-watch-test-configmap-a a4d2a532-279f-40eb-872e-c5d8a2d9cae3 3290191 0 2020-01-21 00:10:11 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 21 00:10:21.817: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7646 /api/v1/namespaces/watch-7646/configmaps/e2e-watch-test-configmap-a a4d2a532-279f-40eb-872e-c5d8a2d9cae3 3290191 0 2020-01-21 00:10:11 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Jan 21 00:10:31.831: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7646 /api/v1/namespaces/watch-7646/configmaps/e2e-watch-test-configmap-a a4d2a532-279f-40eb-872e-c5d8a2d9cae3 3290213 0 2020-01-21 00:10:11 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 21 00:10:31.832: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7646 /api/v1/namespaces/watch-7646/configmaps/e2e-watch-test-configmap-a a4d2a532-279f-40eb-872e-c5d8a2d9cae3 3290213 0 2020-01-21 00:10:11 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Jan 21 00:10:41.845: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7646 /api/v1/namespaces/watch-7646/configmaps/e2e-watch-test-configmap-a a4d2a532-279f-40eb-872e-c5d8a2d9cae3 3290241 0 2020-01-21 00:10:11 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 21 00:10:41.846: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-7646 /api/v1/namespaces/watch-7646/configmaps/e2e-watch-test-configmap-a a4d2a532-279f-40eb-872e-c5d8a2d9cae3 3290241 0 2020-01-21 00:10:11 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Jan 21 00:10:51.861: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7646 /api/v1/namespaces/watch-7646/configmaps/e2e-watch-test-configmap-b e3e24f23-c36e-4a64-b25b-01c8dc422419 3290267 0 2020-01-21 00:10:51 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 21 00:10:51.861: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7646 /api/v1/namespaces/watch-7646/configmaps/e2e-watch-test-configmap-b e3e24f23-c36e-4a64-b25b-01c8dc422419 3290267 0 2020-01-21 00:10:51 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Jan 21 00:11:01.877: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7646 /api/v1/namespaces/watch-7646/configmaps/e2e-watch-test-configmap-b e3e24f23-c36e-4a64-b25b-01c8dc422419 3290291 0 2020-01-21 00:10:51 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 21 00:11:01.878: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-7646 /api/v1/namespaces/watch-7646/configmaps/e2e-watch-test-configmap-b e3e24f23-c36e-4a64-b25b-01c8dc422419 3290291 0 2020-01-21 00:10:51 +0000 UTC <nil> <nil> map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:11:11.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7646" for this suite.

• [SLOW TEST:60.304 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":55,"skipped":982,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:11:11.893: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Jan 21 00:11:12.020: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7952 /api/v1/namespaces/watch-7952/configmaps/e2e-watch-test-watch-closed d2981908-dbc8-4b9b-82bf-f3c23ed8aad3 3290320 0 2020-01-21 00:11:11 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 21 00:11:12.020: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7952 /api/v1/namespaces/watch-7952/configmaps/e2e-watch-test-watch-closed d2981908-dbc8-4b9b-82bf-f3c23ed8aad3 3290321 0 2020-01-21 00:11:11 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Jan 21 00:11:12.045: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7952 /api/v1/namespaces/watch-7952/configmaps/e2e-watch-test-watch-closed d2981908-dbc8-4b9b-82bf-f3c23ed8aad3 3290322 0 2020-01-21 00:11:11 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 21 00:11:12.046: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7952 /api/v1/namespaces/watch-7952/configmaps/e2e-watch-test-watch-closed d2981908-dbc8-4b9b-82bf-f3c23ed8aad3 3290323 0 2020-01-21 00:11:11 +0000 UTC <nil> <nil> map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:11:12.046: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7952" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":56,"skipped":990,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:11:12.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 21 00:11:12.226: INFO: Waiting up to 5m0s for pod "downward-api-53813b18-e6e3-47dd-8630-c59710a5d373" in namespace "downward-api-8707" to be "success or failure"
Jan 21 00:11:12.231: INFO: Pod "downward-api-53813b18-e6e3-47dd-8630-c59710a5d373": Phase="Pending", Reason="", readiness=false. Elapsed: 4.974481ms
Jan 21 00:11:14.243: INFO: Pod "downward-api-53813b18-e6e3-47dd-8630-c59710a5d373": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016740955s
Jan 21 00:11:16.248: INFO: Pod "downward-api-53813b18-e6e3-47dd-8630-c59710a5d373": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02220265s
Jan 21 00:11:18.258: INFO: Pod "downward-api-53813b18-e6e3-47dd-8630-c59710a5d373": Phase="Pending", Reason="", readiness=false. Elapsed: 6.032094836s
Jan 21 00:11:20.726: INFO: Pod "downward-api-53813b18-e6e3-47dd-8630-c59710a5d373": Phase="Pending", Reason="", readiness=false. Elapsed: 8.500329981s
Jan 21 00:11:22.734: INFO: Pod "downward-api-53813b18-e6e3-47dd-8630-c59710a5d373": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.507913152s
STEP: Saw pod success
Jan 21 00:11:22.734: INFO: Pod "downward-api-53813b18-e6e3-47dd-8630-c59710a5d373" satisfied condition "success or failure"
Jan 21 00:11:22.739: INFO: Trying to get logs from node jerma-node pod downward-api-53813b18-e6e3-47dd-8630-c59710a5d373 container dapi-container: 
STEP: delete the pod
Jan 21 00:11:23.185: INFO: Waiting for pod downward-api-53813b18-e6e3-47dd-8630-c59710a5d373 to disappear
Jan 21 00:11:23.204: INFO: Pod downward-api-53813b18-e6e3-47dd-8630-c59710a5d373 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:11:23.205: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8707" for this suite.

• [SLOW TEST:11.168 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":1002,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:11:23.245: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with configMap that has name projected-configmap-test-upd-89e580ae-d10d-4917-8cc5-3dced3ad8b3f
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-89e580ae-d10d-4917-8cc5-3dced3ad8b3f
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:12:42.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4363" for this suite.

• [SLOW TEST:79.690 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":1018,"failed":0}
SSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:12:42.936: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:12:53.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-2177" for this suite.

• [SLOW TEST:10.328 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":59,"skipped":1028,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:12:53.265: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Jan 21 00:13:08.050: INFO: Successfully updated pod "adopt-release-6g4v7"
STEP: Checking that the Job readopts the Pod
Jan 21 00:13:08.050: INFO: Waiting up to 15m0s for pod "adopt-release-6g4v7" in namespace "job-3452" to be "adopted"
Jan 21 00:13:08.058: INFO: Pod "adopt-release-6g4v7": Phase="Running", Reason="", readiness=true. Elapsed: 8.285289ms
Jan 21 00:13:10.068: INFO: Pod "adopt-release-6g4v7": Phase="Running", Reason="", readiness=true. Elapsed: 2.018418274s
Jan 21 00:13:10.069: INFO: Pod "adopt-release-6g4v7" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Jan 21 00:13:10.593: INFO: Successfully updated pod "adopt-release-6g4v7"
STEP: Checking that the Job releases the Pod
Jan 21 00:13:10.594: INFO: Waiting up to 15m0s for pod "adopt-release-6g4v7" in namespace "job-3452" to be "released"
Jan 21 00:13:10.605: INFO: Pod "adopt-release-6g4v7": Phase="Running", Reason="", readiness=true. Elapsed: 11.259008ms
Jan 21 00:13:12.613: INFO: Pod "adopt-release-6g4v7": Phase="Running", Reason="", readiness=true. Elapsed: 2.019097812s
Jan 21 00:13:12.613: INFO: Pod "adopt-release-6g4v7" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:13:12.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-3452" for this suite.

• [SLOW TEST:19.366 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":60,"skipped":1062,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:13:12.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 21 00:13:12.732: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ef1f2881-711a-43a0-b6e9-db371752e493" in namespace "projected-4479" to be "success or failure"
Jan 21 00:13:12.738: INFO: Pod "downwardapi-volume-ef1f2881-711a-43a0-b6e9-db371752e493": Phase="Pending", Reason="", readiness=false. Elapsed: 5.751863ms
Jan 21 00:13:14.756: INFO: Pod "downwardapi-volume-ef1f2881-711a-43a0-b6e9-db371752e493": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024147698s
Jan 21 00:13:16.764: INFO: Pod "downwardapi-volume-ef1f2881-711a-43a0-b6e9-db371752e493": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032238411s
Jan 21 00:13:18.769: INFO: Pod "downwardapi-volume-ef1f2881-711a-43a0-b6e9-db371752e493": Phase="Pending", Reason="", readiness=false. Elapsed: 6.037344166s
Jan 21 00:13:20.777: INFO: Pod "downwardapi-volume-ef1f2881-711a-43a0-b6e9-db371752e493": Phase="Pending", Reason="", readiness=false. Elapsed: 8.045472949s
Jan 21 00:13:22.787: INFO: Pod "downwardapi-volume-ef1f2881-711a-43a0-b6e9-db371752e493": Phase="Pending", Reason="", readiness=false. Elapsed: 10.0553169s
Jan 21 00:13:24.795: INFO: Pod "downwardapi-volume-ef1f2881-711a-43a0-b6e9-db371752e493": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.06267475s
STEP: Saw pod success
Jan 21 00:13:24.795: INFO: Pod "downwardapi-volume-ef1f2881-711a-43a0-b6e9-db371752e493" satisfied condition "success or failure"
Jan 21 00:13:24.799: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-ef1f2881-711a-43a0-b6e9-db371752e493 container client-container: 
STEP: delete the pod
Jan 21 00:13:24.930: INFO: Waiting for pod downwardapi-volume-ef1f2881-711a-43a0-b6e9-db371752e493 to disappear
Jan 21 00:13:24.964: INFO: Pod downwardapi-volume-ef1f2881-711a-43a0-b6e9-db371752e493 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:13:24.964: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4479" for this suite.

• [SLOW TEST:12.345 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":1104,"failed":0}
SSSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:13:24.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating Agnhost RC
Jan 21 00:13:25.106: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4568'
Jan 21 00:13:27.580: INFO: stderr: ""
Jan 21 00:13:27.580: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 21 00:13:28.590: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:13:28.590: INFO: Found 0 / 1
Jan 21 00:13:29.589: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:13:29.589: INFO: Found 0 / 1
Jan 21 00:13:30.590: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:13:30.591: INFO: Found 0 / 1
Jan 21 00:13:31.588: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:13:31.588: INFO: Found 0 / 1
Jan 21 00:13:32.591: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:13:32.592: INFO: Found 0 / 1
Jan 21 00:13:33.589: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:13:33.590: INFO: Found 0 / 1
Jan 21 00:13:34.615: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:13:34.616: INFO: Found 1 / 1
Jan 21 00:13:34.616: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Jan 21 00:13:34.624: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:13:34.625: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 21 00:13:34.625: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-9pjwk --namespace=kubectl-4568 -p {"metadata":{"annotations":{"x":"y"}}}'
Jan 21 00:13:34.796: INFO: stderr: ""
Jan 21 00:13:34.796: INFO: stdout: "pod/agnhost-master-9pjwk patched\n"
STEP: checking annotations
Jan 21 00:13:34.801: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:13:34.801: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:13:34.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4568" for this suite.

• [SLOW TEST:9.859 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1540
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc  [Conformance]","total":278,"completed":62,"skipped":1108,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:13:34.840: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-21fcb1c5-3c5f-42b4-a4a1-610fb52390b0
STEP: Creating a pod to test consume secrets
Jan 21 00:13:35.041: INFO: Waiting up to 5m0s for pod "pod-secrets-f3ec9090-d2d2-46a1-a768-8da6d942209e" in namespace "secrets-7067" to be "success or failure"
Jan 21 00:13:35.062: INFO: Pod "pod-secrets-f3ec9090-d2d2-46a1-a768-8da6d942209e": Phase="Pending", Reason="", readiness=false. Elapsed: 21.130023ms
Jan 21 00:13:37.071: INFO: Pod "pod-secrets-f3ec9090-d2d2-46a1-a768-8da6d942209e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030021633s
Jan 21 00:13:39.078: INFO: Pod "pod-secrets-f3ec9090-d2d2-46a1-a768-8da6d942209e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037053731s
Jan 21 00:13:41.086: INFO: Pod "pod-secrets-f3ec9090-d2d2-46a1-a768-8da6d942209e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044978344s
Jan 21 00:13:43.094: INFO: Pod "pod-secrets-f3ec9090-d2d2-46a1-a768-8da6d942209e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0527694s
Jan 21 00:13:45.101: INFO: Pod "pod-secrets-f3ec9090-d2d2-46a1-a768-8da6d942209e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.060218917s
STEP: Saw pod success
Jan 21 00:13:45.101: INFO: Pod "pod-secrets-f3ec9090-d2d2-46a1-a768-8da6d942209e" satisfied condition "success or failure"
Jan 21 00:13:45.104: INFO: Trying to get logs from node jerma-node pod pod-secrets-f3ec9090-d2d2-46a1-a768-8da6d942209e container secret-volume-test: 
STEP: delete the pod
Jan 21 00:13:45.145: INFO: Waiting for pod pod-secrets-f3ec9090-d2d2-46a1-a768-8da6d942209e to disappear
Jan 21 00:13:45.151: INFO: Pod pod-secrets-f3ec9090-d2d2-46a1-a768-8da6d942209e no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:13:45.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-7067" for this suite.

• [SLOW TEST:10.344 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":63,"skipped":1168,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:13:45.184: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod test-webserver-0a04319a-984c-4f5f-a01b-bba923541b95 in namespace container-probe-7224
Jan 21 00:13:59.396: INFO: Started pod test-webserver-0a04319a-984c-4f5f-a01b-bba923541b95 in namespace container-probe-7224
STEP: checking the pod's current state and verifying that restartCount is present
Jan 21 00:13:59.399: INFO: Initial restart count of pod test-webserver-0a04319a-984c-4f5f-a01b-bba923541b95 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:18:00.898: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7224" for this suite.

• [SLOW TEST:255.760 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":64,"skipped":1177,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:18:00.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating Pod
STEP: Waiting for the pod running
STEP: Getting the pod
STEP: Reading file content from the nginx-container
Jan 21 00:18:13.204: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-8205 PodName:pod-sharedvolume-758c7dc9-9801-4731-a048-8623940e8263 ContainerName:busybox-main-container Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 21 00:18:13.204: INFO: >>> kubeConfig: /root/.kube/config
I0121 00:18:13.246631       8 log.go:172] (0xc0027546e0) (0xc0017320a0) Create stream
I0121 00:18:13.246768       8 log.go:172] (0xc0027546e0) (0xc0017320a0) Stream added, broadcasting: 1
I0121 00:18:13.250645       8 log.go:172] (0xc0027546e0) Reply frame received for 1
I0121 00:18:13.250694       8 log.go:172] (0xc0027546e0) (0xc00185a500) Create stream
I0121 00:18:13.250701       8 log.go:172] (0xc0027546e0) (0xc00185a500) Stream added, broadcasting: 3
I0121 00:18:13.251676       8 log.go:172] (0xc0027546e0) Reply frame received for 3
I0121 00:18:13.251697       8 log.go:172] (0xc0027546e0) (0xc0012fc000) Create stream
I0121 00:18:13.251705       8 log.go:172] (0xc0027546e0) (0xc0012fc000) Stream added, broadcasting: 5
I0121 00:18:13.252744       8 log.go:172] (0xc0027546e0) Reply frame received for 5
I0121 00:18:13.327046       8 log.go:172] (0xc0027546e0) Data frame received for 3
I0121 00:18:13.327110       8 log.go:172] (0xc00185a500) (3) Data frame handling
I0121 00:18:13.327128       8 log.go:172] (0xc00185a500) (3) Data frame sent
I0121 00:18:13.391448       8 log.go:172] (0xc0027546e0) (0xc0012fc000) Stream removed, broadcasting: 5
I0121 00:18:13.391606       8 log.go:172] (0xc0027546e0) Data frame received for 1
I0121 00:18:13.391636       8 log.go:172] (0xc0017320a0) (1) Data frame handling
I0121 00:18:13.391708       8 log.go:172] (0xc0017320a0) (1) Data frame sent
I0121 00:18:13.391762       8 log.go:172] (0xc0027546e0) (0xc00185a500) Stream removed, broadcasting: 3
I0121 00:18:13.391818       8 log.go:172] (0xc0027546e0) (0xc0017320a0) Stream removed, broadcasting: 1
I0121 00:18:13.391856       8 log.go:172] (0xc0027546e0) Go away received
I0121 00:18:13.392157       8 log.go:172] (0xc0027546e0) (0xc0017320a0) Stream removed, broadcasting: 1
I0121 00:18:13.392193       8 log.go:172] (0xc0027546e0) (0xc00185a500) Stream removed, broadcasting: 3
I0121 00:18:13.392221       8 log.go:172] (0xc0027546e0) (0xc0012fc000) Stream removed, broadcasting: 5
Jan 21 00:18:13.392: INFO: Exec stderr: ""
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:18:13.392: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8205" for this suite.

• [SLOW TEST:12.456 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":65,"skipped":1207,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:18:13.404: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override command
Jan 21 00:18:13.545: INFO: Waiting up to 5m0s for pod "client-containers-8f7a5a03-4949-4288-a9b6-2c962b4e509d" in namespace "containers-4774" to be "success or failure"
Jan 21 00:18:13.548: INFO: Pod "client-containers-8f7a5a03-4949-4288-a9b6-2c962b4e509d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.053966ms
Jan 21 00:18:15.554: INFO: Pod "client-containers-8f7a5a03-4949-4288-a9b6-2c962b4e509d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008862416s
Jan 21 00:18:17.559: INFO: Pod "client-containers-8f7a5a03-4949-4288-a9b6-2c962b4e509d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.014759305s
Jan 21 00:18:19.571: INFO: Pod "client-containers-8f7a5a03-4949-4288-a9b6-2c962b4e509d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.026660153s
Jan 21 00:18:21.579: INFO: Pod "client-containers-8f7a5a03-4949-4288-a9b6-2c962b4e509d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.034013009s
STEP: Saw pod success
Jan 21 00:18:21.579: INFO: Pod "client-containers-8f7a5a03-4949-4288-a9b6-2c962b4e509d" satisfied condition "success or failure"
Jan 21 00:18:21.583: INFO: Trying to get logs from node jerma-node pod client-containers-8f7a5a03-4949-4288-a9b6-2c962b4e509d container test-container: 
STEP: delete the pod
Jan 21 00:18:21.783: INFO: Waiting for pod client-containers-8f7a5a03-4949-4288-a9b6-2c962b4e509d to disappear
Jan 21 00:18:21.793: INFO: Pod client-containers-8f7a5a03-4949-4288-a9b6-2c962b4e509d no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:18:21.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4774" for this suite.

• [SLOW TEST:8.450 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1245,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:18:21.855: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Jan 21 00:18:22.033: INFO: >>> kubeConfig: /root/.kube/config
Jan 21 00:18:25.648: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:18:40.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1382" for this suite.

• [SLOW TEST:18.603 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":67,"skipped":1266,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:18:40.460: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name cm-test-opt-del-aeb4c2cb-b887-4821-a462-1c8b6df0f8bd
STEP: Creating configMap with name cm-test-opt-upd-ac8d8f38-77d7-428c-a433-29f2523904ac
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-aeb4c2cb-b887-4821-a462-1c8b6df0f8bd
STEP: Updating configmap cm-test-opt-upd-ac8d8f38-77d7-428c-a433-29f2523904ac
STEP: Creating configMap with name cm-test-opt-create-d89799f3-874f-4d0d-9113-a435a848a119
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:20:03.893: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6787" for this suite.

• [SLOW TEST:83.449 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":68,"skipped":1294,"failed":0}
S
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:20:03.910: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:75
Jan 21 00:20:03.974: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the sample API server.
Jan 21 00:20:04.890: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Jan 21 00:20:07.165: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162805, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:20:09.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162805, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:20:11.182: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162805, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:20:13.172: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162805, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:20:15.223: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162805, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:20:17.180: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162805, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715162804, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-76974b4fff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:20:20.023: INFO: Waited 830.798942ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:66
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:20:20.941: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3329" for this suite.

• [SLOW TEST:17.130 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.17 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":69,"skipped":1295,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:20:21.043: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:20:21.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5085'
Jan 21 00:20:21.873: INFO: stderr: ""
Jan 21 00:20:21.874: INFO: stdout: "replicationcontroller/agnhost-master created\n"
Jan 21 00:20:21.874: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5085'
Jan 21 00:20:22.572: INFO: stderr: ""
Jan 21 00:20:22.572: INFO: stdout: "service/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 21 00:20:23.582: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:20:23.582: INFO: Found 0 / 1
Jan 21 00:20:24.582: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:20:24.583: INFO: Found 0 / 1
Jan 21 00:20:25.597: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:20:25.597: INFO: Found 0 / 1
Jan 21 00:20:26.579: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:20:26.579: INFO: Found 0 / 1
Jan 21 00:20:27.585: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:20:27.585: INFO: Found 0 / 1
Jan 21 00:20:28.591: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:20:28.592: INFO: Found 0 / 1
Jan 21 00:20:29.581: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:20:29.581: INFO: Found 0 / 1
Jan 21 00:20:30.584: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:20:30.585: INFO: Found 0 / 1
Jan 21 00:20:31.581: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:20:31.581: INFO: Found 1 / 1
Jan 21 00:20:31.582: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 21 00:20:31.586: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:20:31.586: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 21 00:20:31.586: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-2nmc6 --namespace=kubectl-5085'
Jan 21 00:20:31.800: INFO: stderr: ""
Jan 21 00:20:31.801: INFO: stdout: "Name:         agnhost-master-2nmc6\nNamespace:    kubectl-5085\nPriority:     0\nNode:         jerma-node/10.96.2.250\nStart Time:   Tue, 21 Jan 2020 00:20:22 +0000\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.44.0.1\nIPs:\n  IP:           10.44.0.1\nControlled By:  ReplicationController/agnhost-master\nContainers:\n  agnhost-master:\n    Container ID:   docker://462eff5ed7345066571e627c97749f38f7b698bacc050123d082ebb87ab8bfe4\n    Image:          gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Image ID:       docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 21 Jan 2020 00:20:30 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-j75vh (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-j75vh:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-j75vh\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled    default-scheduler    Successfully assigned kubectl-5085/agnhost-master-2nmc6 to jerma-node\n  Normal  Pulled     3s         kubelet, jerma-node  Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n  Normal  Created    1s         kubelet, jerma-node  Created container agnhost-master\n  Normal  Started    1s         kubelet, jerma-node  Started container agnhost-master\n"
Jan 21 00:20:31.801: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5085'
Jan 21 00:20:32.076: INFO: stderr: ""
Jan 21 00:20:32.077: INFO: stdout: "Name:         agnhost-master\nNamespace:    kubectl-5085\nSelector:     app=agnhost,role=master\nLabels:       app=agnhost\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=agnhost\n           role=master\n  Containers:\n   agnhost-master:\n    Image:        gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  11s   replication-controller  Created pod: agnhost-master-2nmc6\n"
Jan 21 00:20:32.077: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-5085'
Jan 21 00:20:32.227: INFO: stderr: ""
Jan 21 00:20:32.227: INFO: stdout: "Name:              agnhost-master\nNamespace:         kubectl-5085\nLabels:            app=agnhost\n                   role=master\nAnnotations:       \nSelector:          app=agnhost,role=master\nType:              ClusterIP\nIP:                10.96.207.20\nPort:                6379/TCP\nTargetPort:        agnhost-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Jan 21 00:20:32.234: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Jan 21 00:20:32.347: INFO: stderr: ""
Jan 21 00:20:32.347: INFO: stdout: "Name:               jerma-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 04 Jan 2020 11:59:52 +0000\nTaints:             \nUnschedulable:      false\nLease:\n  HolderIdentity:  jerma-node\n  AcquireTime:     \n  RenewTime:       Tue, 21 Jan 2020 00:20:24 +0000\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Sat, 04 Jan 2020 12:00:49 +0000   Sat, 04 Jan 2020 12:00:49 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Tue, 21 Jan 2020 00:17:27 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Tue, 21 Jan 2020 00:17:27 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Tue, 21 Jan 2020 00:17:27 +0000   Sat, 04 Jan 2020 11:59:52 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Tue, 21 Jan 2020 00:17:27 +0000   Sat, 04 Jan 2020 12:00:52 +0000   KubeletReady                 kubelet is posting ready status. 
AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.250\n  Hostname:    jerma-node\nCapacity:\n  cpu:                4\n  ephemeral-storage:  20145724Ki\n  hugepages-2Mi:      0\n  memory:             4039076Ki\n  pods:               110\nAllocatable:\n  cpu:                4\n  ephemeral-storage:  18566299208\n  hugepages-2Mi:      0\n  memory:             3936676Ki\n  pods:               110\nSystem Info:\n  Machine ID:                 bdc16344252549dd902c3a5d68b22f41\n  System UUID:                BDC16344-2525-49DD-902C-3A5D68B22F41\n  Boot ID:                    eec61fc4-8bf6-487f-8f93-ea9731fe757a\n  Kernel Version:             4.15.0-52-generic\n  OS Image:                   Ubuntu 18.04.2 LTS\n  Operating System:           linux\n  Architecture:               amd64\n  Container Runtime Version:  docker://18.9.7\n  Kubelet Version:            v1.17.0\n  Kube-Proxy Version:         v1.17.0\nNon-terminated Pods:          (3 in total)\n  Namespace                   Name                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                   ----                    ------------  ----------  ---------------  -------------  ---\n  kube-system                 kube-proxy-dsf66        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16d\n  kube-system                 weave-net-kz8lv         20m (0%)      0 (0%)      0 (0%)           0 (0%)         16d\n  kubectl-5085                agnhost-master-2nmc6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:              \n"
Jan 21 00:20:32.347: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5085'
Jan 21 00:20:32.439: INFO: stderr: ""
Jan 21 00:20:32.439: INFO: stdout: "Name:         kubectl-5085\nLabels:       e2e-framework=kubectl\n              e2e-run=df456181-a889-47f1-b12e-0e629b75e9bc\nAnnotations:  <none>\nStatus:       Active\n\nNo resource quota.\n\nNo LimitRange resource.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:20:32.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5085" for this suite.

• [SLOW TEST:11.407 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1155
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods  [Conformance]","total":278,"completed":70,"skipped":1338,"failed":0}
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:20:32.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Discovering how many secrets are in namespace by default
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Secret
STEP: Ensuring resource quota status captures secret creation
STEP: Deleting a secret
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:20:49.706: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7911" for this suite.

• [SLOW TEST:17.270 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a secret. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance]","total":278,"completed":71,"skipped":1338,"failed":0}
S
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:20:49.721: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Jan 21 00:20:56.593: INFO: 5 pods remaining
Jan 21 00:20:56.593: INFO: 0 pods has nil DeletionTimestamp
Jan 21 00:20:56.593: INFO: 
STEP: Gathering metrics
W0121 00:20:57.352343       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 21 00:20:57.352: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:20:57.353: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-2404" for this suite.

• [SLOW TEST:7.844 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":72,"skipped":1339,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:20:57.566: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 21 00:20:58.042: INFO: Waiting up to 5m0s for pod "downwardapi-volume-76693f82-c5a2-478d-ae42-9cb08087e4fc" in namespace "downward-api-862" to be "success or failure"
Jan 21 00:20:58.124: INFO: Pod "downwardapi-volume-76693f82-c5a2-478d-ae42-9cb08087e4fc": Phase="Pending", Reason="", readiness=false. Elapsed: 82.046918ms
Jan 21 00:21:00.140: INFO: Pod "downwardapi-volume-76693f82-c5a2-478d-ae42-9cb08087e4fc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.097866706s
Jan 21 00:21:03.088: INFO: Pod "downwardapi-volume-76693f82-c5a2-478d-ae42-9cb08087e4fc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.046823587s
Jan 21 00:21:06.268: INFO: Pod "downwardapi-volume-76693f82-c5a2-478d-ae42-9cb08087e4fc": Phase="Pending", Reason="", readiness=false. Elapsed: 8.22670434s
Jan 21 00:21:08.292: INFO: Pod "downwardapi-volume-76693f82-c5a2-478d-ae42-9cb08087e4fc": Phase="Pending", Reason="", readiness=false. Elapsed: 10.25077711s
Jan 21 00:21:10.498: INFO: Pod "downwardapi-volume-76693f82-c5a2-478d-ae42-9cb08087e4fc": Phase="Pending", Reason="", readiness=false. Elapsed: 12.456532453s
Jan 21 00:21:12.507: INFO: Pod "downwardapi-volume-76693f82-c5a2-478d-ae42-9cb08087e4fc": Phase="Pending", Reason="", readiness=false. Elapsed: 14.464845259s
Jan 21 00:21:14.520: INFO: Pod "downwardapi-volume-76693f82-c5a2-478d-ae42-9cb08087e4fc": Phase="Pending", Reason="", readiness=false. Elapsed: 16.477837812s
Jan 21 00:21:16.530: INFO: Pod "downwardapi-volume-76693f82-c5a2-478d-ae42-9cb08087e4fc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 18.48783313s
STEP: Saw pod success
Jan 21 00:21:16.530: INFO: Pod "downwardapi-volume-76693f82-c5a2-478d-ae42-9cb08087e4fc" satisfied condition "success or failure"
Jan 21 00:21:16.533: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-76693f82-c5a2-478d-ae42-9cb08087e4fc container client-container: 
STEP: delete the pod
Jan 21 00:21:16.647: INFO: Waiting for pod downwardapi-volume-76693f82-c5a2-478d-ae42-9cb08087e4fc to disappear
Jan 21 00:21:16.656: INFO: Pod downwardapi-volume-76693f82-c5a2-478d-ae42-9cb08087e4fc no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:21:16.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-862" for this suite.

• [SLOW TEST:19.100 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1386,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:21:16.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Jan 21 00:21:16.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Jan 21 00:21:29.209: INFO: >>> kubeConfig: /root/.kube/config
Jan 21 00:21:32.994: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:21:47.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2750" for this suite.

• [SLOW TEST:31.147 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":74,"skipped":1399,"failed":0}
S
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:21:47.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating replication controller my-hostname-basic-0cb1cade-00fa-4387-96bb-84a0344ce8b1
Jan 21 00:21:47.946: INFO: Pod name my-hostname-basic-0cb1cade-00fa-4387-96bb-84a0344ce8b1: Found 0 pods out of 1
Jan 21 00:21:52.955: INFO: Pod name my-hostname-basic-0cb1cade-00fa-4387-96bb-84a0344ce8b1: Found 1 pods out of 1
Jan 21 00:21:52.955: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-0cb1cade-00fa-4387-96bb-84a0344ce8b1" are running
Jan 21 00:21:54.970: INFO: Pod "my-hostname-basic-0cb1cade-00fa-4387-96bb-84a0344ce8b1-jmf4b" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-21 00:21:48 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-21 00:21:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-0cb1cade-00fa-4387-96bb-84a0344ce8b1]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-21 00:21:48 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-0cb1cade-00fa-4387-96bb-84a0344ce8b1]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-21 00:21:47 +0000 UTC Reason: Message:}])
Jan 21 00:21:54.970: INFO: Trying to dial the pod
Jan 21 00:21:59.998: INFO: Controller my-hostname-basic-0cb1cade-00fa-4387-96bb-84a0344ce8b1: Got expected result from replica 1 [my-hostname-basic-0cb1cade-00fa-4387-96bb-84a0344ce8b1-jmf4b]: "my-hostname-basic-0cb1cade-00fa-4387-96bb-84a0344ce8b1-jmf4b", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:21:59.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9910" for this suite.

• [SLOW TEST:12.200 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":75,"skipped":1400,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:22:00.016: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 21 00:22:00.118: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:22:16.070: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-9761" for this suite.

• [SLOW TEST:16.069 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":76,"skipped":1409,"failed":0}
SSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:22:16.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1409.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1409.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1409.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1409.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 21 00:22:28.283: INFO: DNS probes using dns-test-c7fa0a18-3cd5-4ac3-a8b8-becbe05286a9 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1409.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-1409.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1409.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-1409.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 21 00:22:42.553: INFO: File wheezy_udp@dns-test-service-3.dns-1409.svc.cluster.local from pod  dns-1409/dns-test-5c303c09-53cb-41af-a078-28bdbf69c033 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 21 00:22:42.559: INFO: File jessie_udp@dns-test-service-3.dns-1409.svc.cluster.local from pod  dns-1409/dns-test-5c303c09-53cb-41af-a078-28bdbf69c033 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 21 00:22:42.559: INFO: Lookups using dns-1409/dns-test-5c303c09-53cb-41af-a078-28bdbf69c033 failed for: [wheezy_udp@dns-test-service-3.dns-1409.svc.cluster.local jessie_udp@dns-test-service-3.dns-1409.svc.cluster.local]

Jan 21 00:22:47.571: INFO: File wheezy_udp@dns-test-service-3.dns-1409.svc.cluster.local from pod  dns-1409/dns-test-5c303c09-53cb-41af-a078-28bdbf69c033 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 21 00:22:47.584: INFO: File jessie_udp@dns-test-service-3.dns-1409.svc.cluster.local from pod  dns-1409/dns-test-5c303c09-53cb-41af-a078-28bdbf69c033 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 21 00:22:47.584: INFO: Lookups using dns-1409/dns-test-5c303c09-53cb-41af-a078-28bdbf69c033 failed for: [wheezy_udp@dns-test-service-3.dns-1409.svc.cluster.local jessie_udp@dns-test-service-3.dns-1409.svc.cluster.local]

Jan 21 00:22:52.579: INFO: File wheezy_udp@dns-test-service-3.dns-1409.svc.cluster.local from pod  dns-1409/dns-test-5c303c09-53cb-41af-a078-28bdbf69c033 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 21 00:22:52.587: INFO: File jessie_udp@dns-test-service-3.dns-1409.svc.cluster.local from pod  dns-1409/dns-test-5c303c09-53cb-41af-a078-28bdbf69c033 contains 'foo.example.com.
' instead of 'bar.example.com.'
Jan 21 00:22:52.587: INFO: Lookups using dns-1409/dns-test-5c303c09-53cb-41af-a078-28bdbf69c033 failed for: [wheezy_udp@dns-test-service-3.dns-1409.svc.cluster.local jessie_udp@dns-test-service-3.dns-1409.svc.cluster.local]

Jan 21 00:22:57.584: INFO: DNS probes using dns-test-5c303c09-53cb-41af-a078-28bdbf69c033 succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1409.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-1409.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-1409.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-1409.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 21 00:23:11.891: INFO: DNS probes using dns-test-7d440160-56ce-4ac3-8c53-aef015f9ca25 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:23:12.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1409" for this suite.

• [SLOW TEST:56.157 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":77,"skipped":1419,"failed":0}
SSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:23:12.243: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-6b8d83b3-b4fd-448f-aff3-bf092f01002d
STEP: Creating a pod to test consume secrets
Jan 21 00:23:12.566: INFO: Waiting up to 5m0s for pod "pod-secrets-d5a46066-9955-42cb-bce8-a28d7f8106f5" in namespace "secrets-9297" to be "success or failure"
Jan 21 00:23:12.768: INFO: Pod "pod-secrets-d5a46066-9955-42cb-bce8-a28d7f8106f5": Phase="Pending", Reason="", readiness=false. Elapsed: 201.517683ms
Jan 21 00:23:14.779: INFO: Pod "pod-secrets-d5a46066-9955-42cb-bce8-a28d7f8106f5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.212859914s
Jan 21 00:23:16.785: INFO: Pod "pod-secrets-d5a46066-9955-42cb-bce8-a28d7f8106f5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.2187418s
Jan 21 00:23:18.844: INFO: Pod "pod-secrets-d5a46066-9955-42cb-bce8-a28d7f8106f5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.27811897s
Jan 21 00:23:20.881: INFO: Pod "pod-secrets-d5a46066-9955-42cb-bce8-a28d7f8106f5": Phase="Pending", Reason="", readiness=false. Elapsed: 8.314837218s
Jan 21 00:23:22.887: INFO: Pod "pod-secrets-d5a46066-9955-42cb-bce8-a28d7f8106f5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.321233269s
STEP: Saw pod success
Jan 21 00:23:22.887: INFO: Pod "pod-secrets-d5a46066-9955-42cb-bce8-a28d7f8106f5" satisfied condition "success or failure"
Jan 21 00:23:22.890: INFO: Trying to get logs from node jerma-node pod pod-secrets-d5a46066-9955-42cb-bce8-a28d7f8106f5 container secret-volume-test: 
STEP: delete the pod
Jan 21 00:23:23.093: INFO: Waiting for pod pod-secrets-d5a46066-9955-42cb-bce8-a28d7f8106f5 to disappear
Jan 21 00:23:23.105: INFO: Pod pod-secrets-d5a46066-9955-42cb-bce8-a28d7f8106f5 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:23:23.105: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-9297" for this suite.

• [SLOW TEST:10.876 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1424,"failed":0}
SSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:23:23.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-projected-7tkm
STEP: Creating a pod to test atomic-volume-subpath
Jan 21 00:23:23.314: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7tkm" in namespace "subpath-7240" to be "success or failure"
Jan 21 00:23:23.323: INFO: Pod "pod-subpath-test-projected-7tkm": Phase="Pending", Reason="", readiness=false. Elapsed: 8.984606ms
Jan 21 00:23:25.333: INFO: Pod "pod-subpath-test-projected-7tkm": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019323748s
Jan 21 00:23:27.342: INFO: Pod "pod-subpath-test-projected-7tkm": Phase="Pending", Reason="", readiness=false. Elapsed: 4.027989695s
Jan 21 00:23:29.349: INFO: Pod "pod-subpath-test-projected-7tkm": Phase="Pending", Reason="", readiness=false. Elapsed: 6.034943543s
Jan 21 00:23:31.356: INFO: Pod "pod-subpath-test-projected-7tkm": Phase="Running", Reason="", readiness=true. Elapsed: 8.04254618s
Jan 21 00:23:33.367: INFO: Pod "pod-subpath-test-projected-7tkm": Phase="Running", Reason="", readiness=true. Elapsed: 10.053056054s
Jan 21 00:23:35.374: INFO: Pod "pod-subpath-test-projected-7tkm": Phase="Running", Reason="", readiness=true. Elapsed: 12.060042366s
Jan 21 00:23:37.383: INFO: Pod "pod-subpath-test-projected-7tkm": Phase="Running", Reason="", readiness=true. Elapsed: 14.06913056s
Jan 21 00:23:39.392: INFO: Pod "pod-subpath-test-projected-7tkm": Phase="Running", Reason="", readiness=true. Elapsed: 16.078417076s
Jan 21 00:23:41.400: INFO: Pod "pod-subpath-test-projected-7tkm": Phase="Running", Reason="", readiness=true. Elapsed: 18.086703704s
Jan 21 00:23:43.408: INFO: Pod "pod-subpath-test-projected-7tkm": Phase="Running", Reason="", readiness=true. Elapsed: 20.094626084s
Jan 21 00:23:45.418: INFO: Pod "pod-subpath-test-projected-7tkm": Phase="Running", Reason="", readiness=true. Elapsed: 22.10437375s
Jan 21 00:23:47.427: INFO: Pod "pod-subpath-test-projected-7tkm": Phase="Running", Reason="", readiness=true. Elapsed: 24.113457116s
Jan 21 00:23:49.432: INFO: Pod "pod-subpath-test-projected-7tkm": Phase="Running", Reason="", readiness=true. Elapsed: 26.118167165s
Jan 21 00:23:51.441: INFO: Pod "pod-subpath-test-projected-7tkm": Phase="Running", Reason="", readiness=true. Elapsed: 28.126904796s
Jan 21 00:23:53.450: INFO: Pod "pod-subpath-test-projected-7tkm": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.136046008s
STEP: Saw pod success
Jan 21 00:23:53.450: INFO: Pod "pod-subpath-test-projected-7tkm" satisfied condition "success or failure"
Jan 21 00:23:53.455: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-7tkm container test-container-subpath-projected-7tkm: 
STEP: delete the pod
Jan 21 00:23:53.690: INFO: Waiting for pod pod-subpath-test-projected-7tkm to disappear
Jan 21 00:23:53.703: INFO: Pod pod-subpath-test-projected-7tkm no longer exists
STEP: Deleting pod pod-subpath-test-projected-7tkm
Jan 21 00:23:53.703: INFO: Deleting pod "pod-subpath-test-projected-7tkm" in namespace "subpath-7240"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:23:53.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7240" for this suite.

• [SLOW TEST:30.618 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":79,"skipped":1430,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:23:53.739: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:24:30.340: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-6586" for this suite.
STEP: Destroying namespace "nsdeletetest-1161" for this suite.
Jan 21 00:24:30.364: INFO: Namespace nsdeletetest-1161 was already deleted
STEP: Destroying namespace "nsdeletetest-8824" for this suite.

• [SLOW TEST:36.631 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":80,"skipped":1436,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:24:30.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 21 00:24:30.532: INFO: Waiting up to 5m0s for pod "downward-api-638a2533-49e6-43d3-999f-4d4a6f6354ec" in namespace "downward-api-6034" to be "success or failure"
Jan 21 00:24:30.541: INFO: Pod "downward-api-638a2533-49e6-43d3-999f-4d4a6f6354ec": Phase="Pending", Reason="", readiness=false. Elapsed: 8.669827ms
Jan 21 00:24:32.550: INFO: Pod "downward-api-638a2533-49e6-43d3-999f-4d4a6f6354ec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017596715s
Jan 21 00:24:34.562: INFO: Pod "downward-api-638a2533-49e6-43d3-999f-4d4a6f6354ec": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029836807s
Jan 21 00:24:36.578: INFO: Pod "downward-api-638a2533-49e6-43d3-999f-4d4a6f6354ec": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046065973s
Jan 21 00:24:38.589: INFO: Pod "downward-api-638a2533-49e6-43d3-999f-4d4a6f6354ec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.05702532s
STEP: Saw pod success
Jan 21 00:24:38.589: INFO: Pod "downward-api-638a2533-49e6-43d3-999f-4d4a6f6354ec" satisfied condition "success or failure"
Jan 21 00:24:38.600: INFO: Trying to get logs from node jerma-node pod downward-api-638a2533-49e6-43d3-999f-4d4a6f6354ec container dapi-container: 
STEP: delete the pod
Jan 21 00:24:38.655: INFO: Waiting for pod downward-api-638a2533-49e6-43d3-999f-4d4a6f6354ec to disappear
Jan 21 00:24:38.668: INFO: Pod downward-api-638a2533-49e6-43d3-999f-4d4a6f6354ec no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:24:38.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6034" for this suite.

• [SLOW TEST:8.312 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1469,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:24:38.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-map-1015ed55-5d6d-4c5c-aedb-faf66a241c47
STEP: Creating a pod to test consume secrets
Jan 21 00:24:39.037: INFO: Waiting up to 5m0s for pod "pod-secrets-6e993864-8098-44c7-af15-79c88fc80b18" in namespace "secrets-8897" to be "success or failure"
Jan 21 00:24:39.056: INFO: Pod "pod-secrets-6e993864-8098-44c7-af15-79c88fc80b18": Phase="Pending", Reason="", readiness=false. Elapsed: 18.040469ms
Jan 21 00:24:41.111: INFO: Pod "pod-secrets-6e993864-8098-44c7-af15-79c88fc80b18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073096145s
Jan 21 00:24:43.120: INFO: Pod "pod-secrets-6e993864-8098-44c7-af15-79c88fc80b18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082032707s
Jan 21 00:24:45.126: INFO: Pod "pod-secrets-6e993864-8098-44c7-af15-79c88fc80b18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088673502s
Jan 21 00:24:47.162: INFO: Pod "pod-secrets-6e993864-8098-44c7-af15-79c88fc80b18": Phase="Pending", Reason="", readiness=false. Elapsed: 8.12445315s
Jan 21 00:24:49.169: INFO: Pod "pod-secrets-6e993864-8098-44c7-af15-79c88fc80b18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.131838719s
STEP: Saw pod success
Jan 21 00:24:49.170: INFO: Pod "pod-secrets-6e993864-8098-44c7-af15-79c88fc80b18" satisfied condition "success or failure"
Jan 21 00:24:49.175: INFO: Trying to get logs from node jerma-node pod pod-secrets-6e993864-8098-44c7-af15-79c88fc80b18 container secret-volume-test: 
STEP: delete the pod
Jan 21 00:24:49.276: INFO: Waiting for pod pod-secrets-6e993864-8098-44c7-af15-79c88fc80b18 to disappear
Jan 21 00:24:49.284: INFO: Pod pod-secrets-6e993864-8098-44c7-af15-79c88fc80b18 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:24:49.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8897" for this suite.

• [SLOW TEST:10.643 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1476,"failed":0}
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:24:49.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9400
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-9400
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-9400
Jan 21 00:24:49.521: INFO: Found 0 stateful pods, waiting for 1
Jan 21 00:24:59.531: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Jan 21 00:24:59.536: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9400 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 21 00:25:01.982: INFO: stderr: "I0121 00:25:01.742845    1969 log.go:172] (0xc000bdeb00) (0xc0005f08c0) Create stream\nI0121 00:25:01.743106    1969 log.go:172] (0xc000bdeb00) (0xc0005f08c0) Stream added, broadcasting: 1\nI0121 00:25:01.751351    1969 log.go:172] (0xc000bdeb00) Reply frame received for 1\nI0121 00:25:01.751649    1969 log.go:172] (0xc000bdeb00) (0xc0007375e0) Create stream\nI0121 00:25:01.751674    1969 log.go:172] (0xc000bdeb00) (0xc0007375e0) Stream added, broadcasting: 3\nI0121 00:25:01.755345    1969 log.go:172] (0xc000bdeb00) Reply frame received for 3\nI0121 00:25:01.755455    1969 log.go:172] (0xc000bdeb00) (0xc0005e8000) Create stream\nI0121 00:25:01.755489    1969 log.go:172] (0xc000bdeb00) (0xc0005e8000) Stream added, broadcasting: 5\nI0121 00:25:01.757160    1969 log.go:172] (0xc000bdeb00) Reply frame received for 5\nI0121 00:25:01.859631    1969 log.go:172] (0xc000bdeb00) Data frame received for 5\nI0121 00:25:01.859934    1969 log.go:172] (0xc0005e8000) (5) Data frame handling\nI0121 00:25:01.860039    1969 log.go:172] (0xc0005e8000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0121 00:25:01.888372    1969 log.go:172] (0xc000bdeb00) Data frame received for 3\nI0121 00:25:01.888454    1969 log.go:172] (0xc0007375e0) (3) Data frame handling\nI0121 00:25:01.888476    1969 log.go:172] (0xc0007375e0) (3) Data frame sent\nI0121 00:25:01.965375    1969 log.go:172] (0xc000bdeb00) Data frame received for 1\nI0121 00:25:01.965575    1969 log.go:172] (0xc000bdeb00) (0xc0005e8000) Stream removed, broadcasting: 5\nI0121 00:25:01.965851    1969 log.go:172] (0xc000bdeb00) (0xc0007375e0) Stream removed, broadcasting: 3\nI0121 00:25:01.966027    1969 log.go:172] (0xc0005f08c0) (1) Data frame handling\nI0121 00:25:01.966072    1969 log.go:172] (0xc0005f08c0) (1) Data frame sent\nI0121 00:25:01.966098    1969 log.go:172] (0xc000bdeb00) (0xc0005f08c0) Stream removed, broadcasting: 1\nI0121 00:25:01.966143    1969 log.go:172] (0xc000bdeb00) Go away received\nI0121 00:25:01.967742    1969 log.go:172] (0xc000bdeb00) (0xc0005f08c0) Stream removed, broadcasting: 1\nI0121 00:25:01.967763    1969 log.go:172] (0xc000bdeb00) (0xc0007375e0) Stream removed, broadcasting: 3\nI0121 00:25:01.967771    1969 log.go:172] (0xc000bdeb00) (0xc0005e8000) Stream removed, broadcasting: 5\n"
Jan 21 00:25:01.983: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 21 00:25:01.983: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 21 00:25:01.998: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Jan 21 00:25:12.016: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 21 00:25:12.016: INFO: Waiting for statefulset status.replicas updated to 0
Jan 21 00:25:12.109: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999996311s
Jan 21 00:25:13.118: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.926553313s
Jan 21 00:25:14.153: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.916918389s
Jan 21 00:25:15.179: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.881890198s
Jan 21 00:25:16.188: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.856028779s
Jan 21 00:25:17.196: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.847963003s
Jan 21 00:25:18.204: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.839321079s
Jan 21 00:25:19.212: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.831479003s
Jan 21 00:25:20.274: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.823648801s
Jan 21 00:25:21.282: INFO: Verifying statefulset ss doesn't scale past 1 for another 761.335794ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-9400
Jan 21 00:25:22.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9400 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 21 00:25:22.719: INFO: stderr: "I0121 00:25:22.526144    2006 log.go:172] (0xc000107b80) (0xc000b2c140) Create stream\nI0121 00:25:22.526841    2006 log.go:172] (0xc000107b80) (0xc000b2c140) Stream added, broadcasting: 1\nI0121 00:25:22.540538    2006 log.go:172] (0xc000107b80) Reply frame received for 1\nI0121 00:25:22.540719    2006 log.go:172] (0xc000107b80) (0xc0008b6000) Create stream\nI0121 00:25:22.540735    2006 log.go:172] (0xc000107b80) (0xc0008b6000) Stream added, broadcasting: 3\nI0121 00:25:22.544354    2006 log.go:172] (0xc000107b80) Reply frame received for 3\nI0121 00:25:22.544592    2006 log.go:172] (0xc000107b80) (0xc000aa03c0) Create stream\nI0121 00:25:22.544621    2006 log.go:172] (0xc000107b80) (0xc000aa03c0) Stream added, broadcasting: 5\nI0121 00:25:22.546277    2006 log.go:172] (0xc000107b80) Reply frame received for 5\nI0121 00:25:22.629647    2006 log.go:172] (0xc000107b80) Data frame received for 5\nI0121 00:25:22.629762    2006 log.go:172] (0xc000aa03c0) (5) Data frame handling\nI0121 00:25:22.629793    2006 log.go:172] (0xc000aa03c0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0121 00:25:22.630351    2006 log.go:172] (0xc000107b80) Data frame received for 3\nI0121 00:25:22.630362    2006 log.go:172] (0xc0008b6000) (3) Data frame handling\nI0121 00:25:22.630381    2006 log.go:172] (0xc0008b6000) (3) Data frame sent\nI0121 00:25:22.709382    2006 log.go:172] (0xc000107b80) Data frame received for 1\nI0121 00:25:22.709538    2006 log.go:172] (0xc000107b80) (0xc0008b6000) Stream removed, broadcasting: 3\nI0121 00:25:22.709649    2006 log.go:172] (0xc000b2c140) (1) Data frame handling\nI0121 00:25:22.709763    2006 log.go:172] (0xc000b2c140) (1) Data frame sent\nI0121 00:25:22.710105    2006 log.go:172] (0xc000107b80) (0xc000aa03c0) Stream removed, broadcasting: 5\nI0121 00:25:22.710304    2006 log.go:172] (0xc000107b80) (0xc000b2c140) Stream removed, broadcasting: 1\nI0121 00:25:22.710363    2006 log.go:172] (0xc000107b80) Go away received\nI0121 00:25:22.711337    2006 log.go:172] (0xc000107b80) (0xc000b2c140) Stream removed, broadcasting: 1\nI0121 00:25:22.711371    2006 log.go:172] (0xc000107b80) (0xc0008b6000) Stream removed, broadcasting: 3\nI0121 00:25:22.711378    2006 log.go:172] (0xc000107b80) (0xc000aa03c0) Stream removed, broadcasting: 5\n"
Jan 21 00:25:22.719: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 21 00:25:22.719: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 21 00:25:22.725: INFO: Found 1 stateful pods, waiting for 3
Jan 21 00:25:32.732: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 21 00:25:32.732: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 21 00:25:32.732: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Jan 21 00:25:42.768: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 21 00:25:42.768: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 21 00:25:42.768: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Jan 21 00:25:42.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9400 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 21 00:25:43.163: INFO: stderr: "I0121 00:25:43.011022    2024 log.go:172] (0xc00095e000) (0xc000970000) Create stream\nI0121 00:25:43.011231    2024 log.go:172] (0xc00095e000) (0xc000970000) Stream added, broadcasting: 1\nI0121 00:25:43.014836    2024 log.go:172] (0xc00095e000) Reply frame received for 1\nI0121 00:25:43.014874    2024 log.go:172] (0xc00095e000) (0xc0008e2000) Create stream\nI0121 00:25:43.014881    2024 log.go:172] (0xc00095e000) (0xc0008e2000) Stream added, broadcasting: 3\nI0121 00:25:43.015818    2024 log.go:172] (0xc00095e000) Reply frame received for 3\nI0121 00:25:43.015842    2024 log.go:172] (0xc00095e000) (0xc0008e20a0) Create stream\nI0121 00:25:43.015848    2024 log.go:172] (0xc00095e000) (0xc0008e20a0) Stream added, broadcasting: 5\nI0121 00:25:43.017111    2024 log.go:172] (0xc00095e000) Reply frame received for 5\nI0121 00:25:43.082475    2024 log.go:172] (0xc00095e000) Data frame received for 3\nI0121 00:25:43.082512    2024 log.go:172] (0xc0008e2000) (3) Data frame handling\nI0121 00:25:43.082530    2024 log.go:172] (0xc00095e000) Data frame received for 5\nI0121 00:25:43.082571    2024 log.go:172] (0xc0008e20a0) (5) Data frame handling\nI0121 00:25:43.082599    2024 log.go:172] (0xc0008e20a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0121 00:25:43.082642    2024 log.go:172] (0xc0008e2000) (3) Data frame sent\nI0121 00:25:43.152533    2024 log.go:172] (0xc00095e000) Data frame received for 1\nI0121 00:25:43.152606    2024 log.go:172] (0xc000970000) (1) Data frame handling\nI0121 00:25:43.152644    2024 log.go:172] (0xc000970000) (1) Data frame sent\nI0121 00:25:43.153340    2024 log.go:172] (0xc00095e000) (0xc000970000) Stream removed, broadcasting: 1\nI0121 00:25:43.153481    2024 log.go:172] (0xc00095e000) (0xc0008e20a0) Stream removed, broadcasting: 5\nI0121 00:25:43.153556    2024 log.go:172] (0xc00095e000) (0xc0008e2000) Stream removed, broadcasting: 3\nI0121 00:25:43.153606    2024 log.go:172] (0xc00095e000) Go away received\nI0121 00:25:43.154711    2024 log.go:172] (0xc00095e000) (0xc000970000) Stream removed, broadcasting: 1\nI0121 00:25:43.154729    2024 log.go:172] (0xc00095e000) (0xc0008e2000) Stream removed, broadcasting: 3\nI0121 00:25:43.154742    2024 log.go:172] (0xc00095e000) (0xc0008e20a0) Stream removed, broadcasting: 5\n"
Jan 21 00:25:43.163: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 21 00:25:43.163: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 21 00:25:43.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9400 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 21 00:25:43.537: INFO: stderr: "I0121 00:25:43.311190    2046 log.go:172] (0xc000931130) (0xc000a8c460) Create stream\nI0121 00:25:43.311480    2046 log.go:172] (0xc000931130) (0xc000a8c460) Stream added, broadcasting: 1\nI0121 00:25:43.321849    2046 log.go:172] (0xc000931130) Reply frame received for 1\nI0121 00:25:43.321922    2046 log.go:172] (0xc000931130) (0xc00064db80) Create stream\nI0121 00:25:43.321936    2046 log.go:172] (0xc000931130) (0xc00064db80) Stream added, broadcasting: 3\nI0121 00:25:43.323473    2046 log.go:172] (0xc000931130) Reply frame received for 3\nI0121 00:25:43.323505    2046 log.go:172] (0xc000931130) (0xc000576780) Create stream\nI0121 00:25:43.323512    2046 log.go:172] (0xc000931130) (0xc000576780) Stream added, broadcasting: 5\nI0121 00:25:43.325025    2046 log.go:172] (0xc000931130) Reply frame received for 5\nI0121 00:25:43.398392    2046 log.go:172] (0xc000931130) Data frame received for 5\nI0121 00:25:43.398470    2046 log.go:172] (0xc000576780) (5) Data frame handling\nI0121 00:25:43.398499    2046 log.go:172] (0xc000576780) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0121 00:25:43.452572    2046 log.go:172] (0xc000931130) Data frame received for 3\nI0121 00:25:43.452640    2046 log.go:172] (0xc00064db80) (3) Data frame handling\nI0121 00:25:43.452671    2046 log.go:172] (0xc00064db80) (3) Data frame sent\nI0121 00:25:43.526314    2046 log.go:172] (0xc000931130) Data frame received for 1\nI0121 00:25:43.526356    2046 log.go:172] (0xc000a8c460) (1) Data frame handling\nI0121 00:25:43.526386    2046 log.go:172] (0xc000a8c460) (1) Data frame sent\nI0121 00:25:43.526490    2046 log.go:172] (0xc000931130) (0xc000a8c460) Stream removed, broadcasting: 1\nI0121 00:25:43.526607    2046 log.go:172] (0xc000931130) (0xc00064db80) Stream removed, broadcasting: 3\nI0121 00:25:43.526675    2046 log.go:172] (0xc000931130) (0xc000576780) Stream removed, broadcasting: 5\nI0121 00:25:43.526737    2046 log.go:172] (0xc000931130) Go away received\nI0121 00:25:43.528098    2046 log.go:172] (0xc000931130) (0xc000a8c460) Stream removed, broadcasting: 1\nI0121 00:25:43.528132    2046 log.go:172] (0xc000931130) (0xc00064db80) Stream removed, broadcasting: 3\nI0121 00:25:43.528143    2046 log.go:172] (0xc000931130) (0xc000576780) Stream removed, broadcasting: 5\n"
Jan 21 00:25:43.538: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 21 00:25:43.538: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 21 00:25:43.538: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9400 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Jan 21 00:25:44.120: INFO: stderr: "I0121 00:25:43.752141    2066 log.go:172] (0xc0009b49a0) (0xc000a2e320) Create stream\nI0121 00:25:43.752674    2066 log.go:172] (0xc0009b49a0) (0xc000a2e320) Stream added, broadcasting: 1\nI0121 00:25:43.762239    2066 log.go:172] (0xc0009b49a0) Reply frame received for 1\nI0121 00:25:43.762388    2066 log.go:172] (0xc0009b49a0) (0xc000a460a0) Create stream\nI0121 00:25:43.762417    2066 log.go:172] (0xc0009b49a0) (0xc000a460a0) Stream added, broadcasting: 3\nI0121 00:25:43.767872    2066 log.go:172] (0xc0009b49a0) Reply frame received for 3\nI0121 00:25:43.767947    2066 log.go:172] (0xc0009b49a0) (0xc000a2e3c0) Create stream\nI0121 00:25:43.767959    2066 log.go:172] (0xc0009b49a0) (0xc000a2e3c0) Stream added, broadcasting: 5\nI0121 00:25:43.772088    2066 log.go:172] (0xc0009b49a0) Reply frame received for 5\nI0121 00:25:43.971177    2066 log.go:172] (0xc0009b49a0) Data frame received for 5\nI0121 00:25:43.971353    2066 log.go:172] (0xc000a2e3c0) (5) Data frame handling\nI0121 00:25:43.971406    2066 log.go:172] (0xc000a2e3c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0121 00:25:44.003832    2066 log.go:172] (0xc0009b49a0) Data frame received for 3\nI0121 00:25:44.004297    2066 log.go:172] (0xc000a460a0) (3) Data frame handling\nI0121 00:25:44.004384    2066 log.go:172] (0xc000a460a0) (3) Data frame sent\nI0121 00:25:44.104737    2066 log.go:172] (0xc0009b49a0) (0xc000a460a0) Stream removed, broadcasting: 3\nI0121 00:25:44.105202    2066 log.go:172] (0xc0009b49a0) Data frame received for 1\nI0121 00:25:44.105217    2066 log.go:172] (0xc000a2e320) (1) Data frame handling\nI0121 00:25:44.105236    2066 log.go:172] (0xc000a2e320) (1) Data frame sent\nI0121 00:25:44.105246    2066 log.go:172] (0xc0009b49a0) (0xc000a2e320) Stream removed, broadcasting: 1\nI0121 00:25:44.106000    2066 log.go:172] (0xc0009b49a0) (0xc000a2e3c0) Stream removed, broadcasting: 5\nI0121 00:25:44.106057    2066 log.go:172] (0xc0009b49a0) (0xc000a2e320) Stream removed, broadcasting: 1\nI0121 00:25:44.106067    2066 log.go:172] (0xc0009b49a0) (0xc000a460a0) Stream removed, broadcasting: 3\nI0121 00:25:44.106073    2066 log.go:172] (0xc0009b49a0) (0xc000a2e3c0) Stream removed, broadcasting: 5\nI0121 00:25:44.106373    2066 log.go:172] (0xc0009b49a0) Go away received\n"
Jan 21 00:25:44.120: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Jan 21 00:25:44.121: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Jan 21 00:25:44.121: INFO: Waiting for statefulset status.replicas updated to 0
Jan 21 00:25:44.150: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Jan 21 00:25:54.161: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Jan 21 00:25:54.161: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Jan 21 00:25:54.161: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Jan 21 00:25:54.176: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999788s
Jan 21 00:25:55.210: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.990611892s
Jan 21 00:25:56.218: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.956208055s
Jan 21 00:25:57.236: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.948262448s
Jan 21 00:25:58.244: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.930725082s
Jan 21 00:25:59.964: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.92250724s
Jan 21 00:26:00.973: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.202863211s
Jan 21 00:26:01.983: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.193468729s
Jan 21 00:26:03.007: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.183783884s
Jan 21 00:26:04.019: INFO: Verifying statefulset ss doesn't scale past 3 for another 159.086863ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-9400
Jan 21 00:26:05.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9400 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 21 00:26:05.444: INFO: stderr: "I0121 00:26:05.264733    2084 log.go:172] (0xc000ba3080) (0xc000bd2640) Create stream\nI0121 00:26:05.264962    2084 log.go:172] (0xc000ba3080) (0xc000bd2640) Stream added, broadcasting: 1\nI0121 00:26:05.273179    2084 log.go:172] (0xc000ba3080) Reply frame received for 1\nI0121 00:26:05.273258    2084 log.go:172] (0xc000ba3080) (0xc000bf8140) Create stream\nI0121 00:26:05.273279    2084 log.go:172] (0xc000ba3080) (0xc000bf8140) Stream added, broadcasting: 3\nI0121 00:26:05.275645    2084 log.go:172] (0xc000ba3080) Reply frame received for 3\nI0121 00:26:05.275706    2084 log.go:172] (0xc000ba3080) (0xc000bf81e0) Create stream\nI0121 00:26:05.275714    2084 log.go:172] (0xc000ba3080) (0xc000bf81e0) Stream added, broadcasting: 5\nI0121 00:26:05.278369    2084 log.go:172] (0xc000ba3080) Reply frame received for 5\nI0121 00:26:05.373923    2084 log.go:172] (0xc000ba3080) Data frame received for 3\nI0121 00:26:05.374347    2084 log.go:172] (0xc000bf8140) (3) Data frame handling\nI0121 00:26:05.374403    2084 log.go:172] (0xc000bf8140) (3) Data frame sent\nI0121 00:26:05.374607    2084 log.go:172] (0xc000ba3080) Data frame received for 5\nI0121 00:26:05.374654    2084 log.go:172] (0xc000bf81e0) (5) Data frame handling\nI0121 00:26:05.374703    2084 log.go:172] (0xc000bf81e0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0121 00:26:05.432716    2084 log.go:172] (0xc000ba3080) (0xc000bf8140) Stream removed, broadcasting: 3\nI0121 00:26:05.433127    2084 log.go:172] (0xc000ba3080) Data frame received for 1\nI0121 00:26:05.433199    2084 log.go:172] (0xc000ba3080) (0xc000bf81e0) Stream removed, broadcasting: 5\nI0121 00:26:05.433383    2084 log.go:172] (0xc000bd2640) (1) Data frame handling\nI0121 00:26:05.433580    2084 log.go:172] (0xc000bd2640) (1) Data frame sent\nI0121 00:26:05.433648    2084 log.go:172] (0xc000ba3080) (0xc000bd2640) Stream removed, broadcasting: 1\nI0121 00:26:05.433779    2084 log.go:172] (0xc000ba3080) Go away received\nI0121 00:26:05.435758    2084 log.go:172] (0xc000ba3080) (0xc000bd2640) Stream removed, broadcasting: 1\nI0121 00:26:05.435789    2084 log.go:172] (0xc000ba3080) (0xc000bf8140) Stream removed, broadcasting: 3\nI0121 00:26:05.435801    2084 log.go:172] (0xc000ba3080) (0xc000bf81e0) Stream removed, broadcasting: 5\n"
Jan 21 00:26:05.444: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 21 00:26:05.444: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 21 00:26:05.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9400 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 21 00:26:05.788: INFO: stderr: "I0121 00:26:05.621330    2103 log.go:172] (0xc000acd810) (0xc000a4c500) Create stream\nI0121 00:26:05.621616    2103 log.go:172] (0xc000acd810) (0xc000a4c500) Stream added, broadcasting: 1\nI0121 00:26:05.624447    2103 log.go:172] (0xc000acd810) Reply frame received for 1\nI0121 00:26:05.624470    2103 log.go:172] (0xc000acd810) (0xc000a28280) Create stream\nI0121 00:26:05.624476    2103 log.go:172] (0xc000acd810) (0xc000a28280) Stream added, broadcasting: 3\nI0121 00:26:05.625495    2103 log.go:172] (0xc000acd810) Reply frame received for 3\nI0121 00:26:05.625513    2103 log.go:172] (0xc000acd810) (0xc000a4c5a0) Create stream\nI0121 00:26:05.625519    2103 log.go:172] (0xc000acd810) (0xc000a4c5a0) Stream added, broadcasting: 5\nI0121 00:26:05.626671    2103 log.go:172] (0xc000acd810) Reply frame received for 5\nI0121 00:26:05.695480    2103 log.go:172] (0xc000acd810) Data frame received for 5\nI0121 00:26:05.695564    2103 log.go:172] (0xc000a4c5a0) (5) Data frame handling\nI0121 00:26:05.695580    2103 log.go:172] (0xc000a4c5a0) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0121 00:26:05.695615    2103 log.go:172] (0xc000acd810) Data frame received for 3\nI0121 00:26:05.695620    2103 log.go:172] (0xc000a28280) (3) Data frame handling\nI0121 00:26:05.695625    2103 log.go:172] (0xc000a28280) (3) Data frame sent\nI0121 00:26:05.764482    2103 log.go:172] (0xc000acd810) (0xc000a4c5a0) Stream removed, broadcasting: 5\nI0121 00:26:05.764748    2103 log.go:172] (0xc000acd810) Data frame received for 1\nI0121 00:26:05.766807    2103 log.go:172] (0xc000acd810) (0xc000a28280) Stream removed, broadcasting: 3\nI0121 00:26:05.767857    2103 log.go:172] (0xc000a4c500) (1) Data frame handling\nI0121 00:26:05.768706    2103 log.go:172] (0xc000a4c500) (1) Data frame sent\nI0121 00:26:05.769105    2103 log.go:172] (0xc000acd810) (0xc000a4c500) Stream removed, broadcasting: 1\nI0121 00:26:05.769405    2103 log.go:172] (0xc000acd810) Go away received\nI0121 00:26:05.774940    2103 log.go:172] (0xc000acd810) (0xc000a4c500) Stream removed, broadcasting: 1\nI0121 00:26:05.775048    2103 log.go:172] (0xc000acd810) (0xc000a28280) Stream removed, broadcasting: 3\nI0121 00:26:05.775071    2103 log.go:172] (0xc000acd810) (0xc000a4c5a0) Stream removed, broadcasting: 5\n"
Jan 21 00:26:05.789: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 21 00:26:05.789: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 21 00:26:05.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9400 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Jan 21 00:26:06.193: INFO: stderr: "I0121 00:26:06.023800    2123 log.go:172] (0xc000a678c0) (0xc0009f0320) Create stream\nI0121 00:26:06.024000    2123 log.go:172] (0xc000a678c0) (0xc0009f0320) Stream added, broadcasting: 1\nI0121 00:26:06.034569    2123 log.go:172] (0xc000a678c0) Reply frame received for 1\nI0121 00:26:06.034659    2123 log.go:172] (0xc000a678c0) (0xc0004c5a40) Create stream\nI0121 00:26:06.034669    2123 log.go:172] (0xc000a678c0) (0xc0004c5a40) Stream added, broadcasting: 3\nI0121 00:26:06.035786    2123 log.go:172] (0xc000a678c0) Reply frame received for 3\nI0121 00:26:06.035858    2123 log.go:172] (0xc000a678c0) (0xc000b04000) Create stream\nI0121 00:26:06.035878    2123 log.go:172] (0xc000a678c0) (0xc000b04000) Stream added, broadcasting: 5\nI0121 00:26:06.037776    2123 log.go:172] (0xc000a678c0) Reply frame received for 5\nI0121 00:26:06.110772    2123 log.go:172] (0xc000a678c0) Data frame received for 3\nI0121 00:26:06.110836    2123 log.go:172] (0xc0004c5a40) (3) Data frame handling\nI0121 00:26:06.110855    2123 log.go:172] (0xc0004c5a40) (3) Data frame sent\nI0121 00:26:06.110921    2123 log.go:172] (0xc000a678c0) Data frame received for 5\nI0121 00:26:06.110940    2123 log.go:172] (0xc000b04000) (5) Data frame handling\nI0121 00:26:06.110959    2123 log.go:172] (0xc000b04000) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0121 00:26:06.182979    2123 log.go:172] (0xc000a678c0) (0xc000b04000) Stream removed, broadcasting: 5\nI0121 00:26:06.183120    2123 log.go:172] (0xc000a678c0) Data frame received for 1\nI0121 00:26:06.183165    2123 log.go:172] (0xc000a678c0) (0xc0004c5a40) Stream removed, broadcasting: 3\nI0121 00:26:06.183209    2123 log.go:172] (0xc0009f0320) (1) Data frame handling\nI0121 00:26:06.183234    2123 log.go:172] (0xc0009f0320) (1) Data frame sent\nI0121 00:26:06.183250    2123 log.go:172] (0xc000a678c0) (0xc0009f0320) Stream removed, broadcasting: 1\nI0121 00:26:06.183287    2123 log.go:172] (0xc000a678c0) Go away received\nI0121 00:26:06.184332    2123 log.go:172] (0xc000a678c0) (0xc0009f0320) Stream removed, broadcasting: 1\nI0121 00:26:06.184346    2123 log.go:172] (0xc000a678c0) (0xc0004c5a40) Stream removed, broadcasting: 3\nI0121 00:26:06.184353    2123 log.go:172] (0xc000a678c0) (0xc000b04000) Stream removed, broadcasting: 5\n"
Jan 21 00:26:06.193: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Jan 21 00:26:06.193: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Jan 21 00:26:06.193: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 21 00:26:46.339: INFO: Deleting all statefulset in ns statefulset-9400
Jan 21 00:26:46.346: INFO: Scaling statefulset ss to 0
Jan 21 00:26:46.400: INFO: Waiting for statefulset status.replicas updated to 0
Jan 21 00:26:46.411: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:26:46.434: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9400" for this suite.

• [SLOW TEST:117.144 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":83,"skipped":1486,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:26:46.478: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 21 00:27:02.687: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 21 00:27:02.700: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 21 00:27:04.701: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 21 00:27:04.711: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 21 00:27:06.701: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 21 00:27:06.759: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 21 00:27:08.701: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 21 00:27:08.707: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 21 00:27:10.701: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 21 00:27:10.711: INFO: Pod pod-with-prestop-exec-hook still exists
Jan 21 00:27:12.701: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Jan 21 00:27:12.709: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:27:12.734: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-2391" for this suite.

• [SLOW TEST:26.269 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1495,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:27:12.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-7128, will wait for the garbage collector to delete the pods
Jan 21 00:27:26.961: INFO: Deleting Job.batch foo took: 10.558765ms
Jan 21 00:27:27.262: INFO: Terminating Job.batch foo pods took: 301.15173ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:28:12.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7128" for this suite.

• [SLOW TEST:59.643 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":85,"skipped":1509,"failed":0}
SSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:28:12.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:28:12.439: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:28:20.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-7514" for this suite.

• [SLOW TEST:8.147 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1512,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:28:20.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:28:20.727: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Jan 21 00:28:20.743: INFO: Number of nodes with available pods: 0
Jan 21 00:28:20.743: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Jan 21 00:28:20.825: INFO: Number of nodes with available pods: 0
Jan 21 00:28:20.825: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:21.839: INFO: Number of nodes with available pods: 0
Jan 21 00:28:21.839: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:22.832: INFO: Number of nodes with available pods: 0
Jan 21 00:28:22.832: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:23.839: INFO: Number of nodes with available pods: 0
Jan 21 00:28:23.839: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:24.870: INFO: Number of nodes with available pods: 0
Jan 21 00:28:24.871: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:26.262: INFO: Number of nodes with available pods: 0
Jan 21 00:28:26.262: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:26.833: INFO: Number of nodes with available pods: 0
Jan 21 00:28:26.833: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:27.832: INFO: Number of nodes with available pods: 0
Jan 21 00:28:27.832: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:28.832: INFO: Number of nodes with available pods: 1
Jan 21 00:28:28.832: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Jan 21 00:28:28.892: INFO: Number of nodes with available pods: 1
Jan 21 00:28:28.892: INFO: Number of running nodes: 0, number of available pods: 1
Jan 21 00:28:29.902: INFO: Number of nodes with available pods: 0
Jan 21 00:28:29.902: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Jan 21 00:28:30.036: INFO: Number of nodes with available pods: 0
Jan 21 00:28:30.037: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:31.298: INFO: Number of nodes with available pods: 0
Jan 21 00:28:31.298: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:32.045: INFO: Number of nodes with available pods: 0
Jan 21 00:28:32.046: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:33.046: INFO: Number of nodes with available pods: 0
Jan 21 00:28:33.046: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:34.044: INFO: Number of nodes with available pods: 0
Jan 21 00:28:34.044: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:35.044: INFO: Number of nodes with available pods: 0
Jan 21 00:28:35.045: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:36.042: INFO: Number of nodes with available pods: 0
Jan 21 00:28:36.042: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:37.044: INFO: Number of nodes with available pods: 0
Jan 21 00:28:37.044: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:38.047: INFO: Number of nodes with available pods: 0
Jan 21 00:28:38.047: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:39.926: INFO: Number of nodes with available pods: 0
Jan 21 00:28:39.926: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:40.530: INFO: Number of nodes with available pods: 0
Jan 21 00:28:40.531: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:41.055: INFO: Number of nodes with available pods: 0
Jan 21 00:28:41.055: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:42.046: INFO: Number of nodes with available pods: 0
Jan 21 00:28:42.046: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 00:28:43.045: INFO: Number of nodes with available pods: 1
Jan 21 00:28:43.046: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-907, will wait for the garbage collector to delete the pods
Jan 21 00:28:43.120: INFO: Deleting DaemonSet.extensions daemon-set took: 9.701742ms
Jan 21 00:28:43.421: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.418637ms
Jan 21 00:28:53.125: INFO: Number of nodes with available pods: 0
Jan 21 00:28:53.125: INFO: Number of running nodes: 0, number of available pods: 0
Jan 21 00:28:53.129: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-907/daemonsets","resourceVersion":"3294113"},"items":null}

Jan 21 00:28:53.132: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-907/pods","resourceVersion":"3294113"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:28:53.218: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-907" for this suite.

• [SLOW TEST:32.722 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":87,"skipped":1635,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:28:53.268: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name s-test-opt-del-9fc7bba4-9598-43df-90b5-e664eaaf9146
STEP: Creating secret with name s-test-opt-upd-3dbf2351-d0da-411a-9826-c8a815b9ecda
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-9fc7bba4-9598-43df-90b5-e664eaaf9146
STEP: Updating secret s-test-opt-upd-3dbf2351-d0da-411a-9826-c8a815b9ecda
STEP: Creating secret with name s-test-opt-create-296a650a-599e-4d4e-9807-3321d4bcc1e6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:29:07.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3416" for this suite.

• [SLOW TEST:14.577 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":88,"skipped":1647,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:29:07.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:29:19.068: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6982" for this suite.

• [SLOW TEST:11.237 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":89,"skipped":1654,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:29:19.084: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:73
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:29:19.364: INFO: Creating deployment "test-recreate-deployment"
Jan 21 00:29:19.381: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1
Jan 21 00:29:19.537: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Jan 21 00:29:21.551: INFO: Waiting deployment "test-recreate-deployment" to complete
Jan 21 00:29:21.555: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163359, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163359, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163359, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163359, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:29:23.563: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163359, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163359, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163359, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163359, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:29:25.562: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163359, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163359, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163359, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163359, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:29:27.567: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163359, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163359, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163359, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163359, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:29:29.565: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Jan 21 00:29:29.579: INFO: Updating deployment test-recreate-deployment
Jan 21 00:29:29.579: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:67
Jan 21 00:29:29.893: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-4850 /apis/apps/v1/namespaces/deployment-4850/deployments/test-recreate-deployment 3103b469-e1a5-40f4-a490-2ac67cb0ee2d 3294326 2 2020-01-21 00:29:19 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0056db418  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-21 00:29:29 +0000 UTC,LastTransitionTime:2020-01-21 00:29:29 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-01-21 00:29:29 +0000 UTC,LastTransitionTime:2020-01-21 00:29:19 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Jan 21 00:29:29.899: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-4850 /apis/apps/v1/namespaces/deployment-4850/replicasets/test-recreate-deployment-5f94c574ff a96eccbe-c159-48cd-a52b-8a5606c73c46 3294323 1 2020-01-21 00:29:29 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 3103b469-e1a5-40f4-a490-2ac67cb0ee2d 0xc005610b57 0xc005610b58}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005610bb8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 21 00:29:29.899: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Jan 21 00:29:29.899: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856  deployment-4850 /apis/apps/v1/namespaces/deployment-4850/replicasets/test-recreate-deployment-799c574856 9e1508d6-fe4a-4922-aea2-7fabbd800713 3294315 2 2020-01-21 00:29:19 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 3103b469-e1a5-40f4-a490-2ac67cb0ee2d 0xc005610c27 0xc005610c28}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005610c98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 21 00:29:29.903: INFO: Pod "test-recreate-deployment-5f94c574ff-htm6t" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-htm6t test-recreate-deployment-5f94c574ff- deployment-4850 /api/v1/namespaces/deployment-4850/pods/test-recreate-deployment-5f94c574ff-htm6t 0b2e580a-82c4-4797-8693-d46673a95587 3294322 0 2020-01-21 00:29:29 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff a96eccbe-c159-48cd-a52b-8a5606c73c46 0xc0056110f7 0xc0056110f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-xx6q7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-xx6q7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-xx6q7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 00:29:29 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:29:29.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-4850" for this suite.

• [SLOW TEST:10.837 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":90,"skipped":1706,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:29:29.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 21 00:29:30.249: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f0ec3ff9-3a3c-44b5-b20b-72241405c5d0" in namespace "projected-3724" to be "success or failure"
Jan 21 00:29:30.335: INFO: Pod "downwardapi-volume-f0ec3ff9-3a3c-44b5-b20b-72241405c5d0": Phase="Pending", Reason="", readiness=false. Elapsed: 85.965728ms
Jan 21 00:29:32.363: INFO: Pod "downwardapi-volume-f0ec3ff9-3a3c-44b5-b20b-72241405c5d0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114251176s
Jan 21 00:29:34.392: INFO: Pod "downwardapi-volume-f0ec3ff9-3a3c-44b5-b20b-72241405c5d0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.14331315s
Jan 21 00:29:36.411: INFO: Pod "downwardapi-volume-f0ec3ff9-3a3c-44b5-b20b-72241405c5d0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.162077485s
Jan 21 00:29:38.420: INFO: Pod "downwardapi-volume-f0ec3ff9-3a3c-44b5-b20b-72241405c5d0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170966273s
Jan 21 00:29:40.432: INFO: Pod "downwardapi-volume-f0ec3ff9-3a3c-44b5-b20b-72241405c5d0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.182818991s
Jan 21 00:29:42.437: INFO: Pod "downwardapi-volume-f0ec3ff9-3a3c-44b5-b20b-72241405c5d0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.188509738s
STEP: Saw pod success
Jan 21 00:29:42.438: INFO: Pod "downwardapi-volume-f0ec3ff9-3a3c-44b5-b20b-72241405c5d0" satisfied condition "success or failure"
Jan 21 00:29:42.441: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-f0ec3ff9-3a3c-44b5-b20b-72241405c5d0 container client-container: 
STEP: delete the pod
Jan 21 00:29:42.526: INFO: Waiting for pod downwardapi-volume-f0ec3ff9-3a3c-44b5-b20b-72241405c5d0 to disappear
Jan 21 00:29:42.534: INFO: Pod downwardapi-volume-f0ec3ff9-3a3c-44b5-b20b-72241405c5d0 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:29:42.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3724" for this suite.

• [SLOW TEST:12.622 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1713,"failed":0}
SSSSSSSSSSSSSSSS
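
[Editor's note] This spec mounts a projected downward API volume that publishes the container's memory limit as a file; because the test container sets no limit, the kubelet falls back to the node's allocatable memory, which is what gets verified. A minimal sketch of that volume wiring (the file path is an illustrative assumption; the container name matches the log):

package sketch

import corev1 "k8s.io/api/core/v1"

// Projected downwardAPI volume exposing the container's effective memory
// limit as a file. With no memory limit set on "client-container", the
// kubelet substitutes node allocatable memory.
var podinfoVolume = corev1.Volume{
	Name: "podinfo",
	VolumeSource: corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				DownwardAPI: &corev1.DownwardAPIProjection{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path: "memory_limit",
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "client-container",
							Resource:      "limits.memory",
						},
					}},
				},
			}},
		},
	},
}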
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:29:42.545: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 21 00:29:51.969: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:29:52.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-9967" for this suite.

• [SLOW TEST:9.577 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1729,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
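
[Editor's note] The container here is expected to fail, write nothing to /dev/termination-log, and still surface "DONE" as its termination message because FallbackToLogsOnError takes the message from the tail of the container's log. A sketch of such a container spec (the image and command are illustrative assumptions, not the suite's exact fixture):

package sketch

import corev1 "k8s.io/api/core/v1"

// On failure, with nothing written to the termination-log file, the kubelet
// falls back to the log tail, so the message becomes the literal "DONE"
// seen in the run above.
var failingContainer = corev1.Container{
	Name:                     "termination-message-container",
	Image:                    "busybox",
	Command:                  []string{"/bin/sh", "-c", "echo DONE; exit 1"},
	TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
}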
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:29:52.124: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:29:52.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating replication controller svc-latency-rc in namespace svc-latency-3881
I0121 00:29:52.403022       8 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3881, replica count: 1
I0121 00:29:53.455495       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 00:29:54.456419       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 00:29:55.457589       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 00:29:56.459220       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 00:29:57.460370       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 00:29:58.461571       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 00:29:59.462457       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 00:30:00.464044       8 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 21 00:30:00.610: INFO: Created: latency-svc-vq598
Jan 21 00:30:00.632: INFO: Got endpoints: latency-svc-vq598 [66.928102ms]
Jan 21 00:30:00.721: INFO: Created: latency-svc-w8wvk
Jan 21 00:30:00.754: INFO: Got endpoints: latency-svc-w8wvk [121.901434ms]
Jan 21 00:30:00.841: INFO: Created: latency-svc-694l9
Jan 21 00:30:00.863: INFO: Got endpoints: latency-svc-694l9 [230.327977ms]
Jan 21 00:30:01.020: INFO: Created: latency-svc-jktpp
Jan 21 00:30:01.027: INFO: Created: latency-svc-z6crl
Jan 21 00:30:01.027: INFO: Got endpoints: latency-svc-jktpp [394.105814ms]
Jan 21 00:30:01.034: INFO: Got endpoints: latency-svc-z6crl [401.886585ms]
Jan 21 00:30:01.068: INFO: Created: latency-svc-gk4sr
Jan 21 00:30:01.079: INFO: Got endpoints: latency-svc-gk4sr [445.543447ms]
Jan 21 00:30:01.105: INFO: Created: latency-svc-rfp8r
Jan 21 00:30:01.107: INFO: Got endpoints: latency-svc-rfp8r [474.352099ms]
Jan 21 00:30:01.219: INFO: Created: latency-svc-d7rrt
Jan 21 00:30:01.224: INFO: Got endpoints: latency-svc-d7rrt [591.88266ms]
Jan 21 00:30:01.253: INFO: Created: latency-svc-76gl2
Jan 21 00:30:01.265: INFO: Got endpoints: latency-svc-76gl2 [631.544318ms]
Jan 21 00:30:01.282: INFO: Created: latency-svc-7dpdd
Jan 21 00:30:01.292: INFO: Got endpoints: latency-svc-7dpdd [659.220058ms]
Jan 21 00:30:01.304: INFO: Created: latency-svc-8kmcd
Jan 21 00:30:01.311: INFO: Got endpoints: latency-svc-8kmcd [677.502637ms]
Jan 21 00:30:01.414: INFO: Created: latency-svc-99v8x
Jan 21 00:30:01.437: INFO: Got endpoints: latency-svc-99v8x [803.52033ms]
Jan 21 00:30:01.505: INFO: Created: latency-svc-wlx4n
Jan 21 00:30:01.610: INFO: Got endpoints: latency-svc-wlx4n [977.864154ms]
Jan 21 00:30:01.640: INFO: Created: latency-svc-gl4q9
Jan 21 00:30:01.662: INFO: Got endpoints: latency-svc-gl4q9 [1.029077946s]
Jan 21 00:30:01.688: INFO: Created: latency-svc-pccr8
Jan 21 00:30:01.696: INFO: Got endpoints: latency-svc-pccr8 [1.063952487s]
Jan 21 00:30:01.917: INFO: Created: latency-svc-zr4bg
Jan 21 00:30:01.935: INFO: Got endpoints: latency-svc-zr4bg [1.301687599s]
Jan 21 00:30:02.001: INFO: Created: latency-svc-xtsnz
Jan 21 00:30:02.058: INFO: Got endpoints: latency-svc-xtsnz [1.303513406s]
Jan 21 00:30:02.060: INFO: Created: latency-svc-glnfh
Jan 21 00:30:02.065: INFO: Got endpoints: latency-svc-glnfh [1.202238744s]
Jan 21 00:30:02.092: INFO: Created: latency-svc-wlbhs
Jan 21 00:30:02.103: INFO: Got endpoints: latency-svc-wlbhs [1.075689477s]
Jan 21 00:30:02.133: INFO: Created: latency-svc-2kgm4
Jan 21 00:30:02.139: INFO: Got endpoints: latency-svc-2kgm4 [1.104493578s]
Jan 21 00:30:02.183: INFO: Created: latency-svc-k6grm
Jan 21 00:30:02.191: INFO: Got endpoints: latency-svc-k6grm [1.112679701s]
Jan 21 00:30:02.217: INFO: Created: latency-svc-tsdst
Jan 21 00:30:02.231: INFO: Got endpoints: latency-svc-tsdst [1.122819924s]
Jan 21 00:30:02.269: INFO: Created: latency-svc-s9s2q
Jan 21 00:30:02.399: INFO: Got endpoints: latency-svc-s9s2q [1.174751263s]
Jan 21 00:30:02.402: INFO: Created: latency-svc-lltzp
Jan 21 00:30:02.410: INFO: Got endpoints: latency-svc-lltzp [1.145242146s]
Jan 21 00:30:02.428: INFO: Created: latency-svc-sdbtm
Jan 21 00:30:02.429: INFO: Got endpoints: latency-svc-sdbtm [1.136927568s]
Jan 21 00:30:02.445: INFO: Created: latency-svc-7vzbx
Jan 21 00:30:02.469: INFO: Got endpoints: latency-svc-7vzbx [1.157923313s]
Jan 21 00:30:02.472: INFO: Created: latency-svc-w6vq8
Jan 21 00:30:02.482: INFO: Got endpoints: latency-svc-w6vq8 [1.045700284s]
Jan 21 00:30:02.548: INFO: Created: latency-svc-clr8r
Jan 21 00:30:02.577: INFO: Created: latency-svc-zbzs6
Jan 21 00:30:02.578: INFO: Got endpoints: latency-svc-clr8r [967.874946ms]
Jan 21 00:30:02.600: INFO: Got endpoints: latency-svc-zbzs6 [937.215838ms]
Jan 21 00:30:02.628: INFO: Created: latency-svc-d7nfk
Jan 21 00:30:02.631: INFO: Got endpoints: latency-svc-d7nfk [934.684167ms]
Jan 21 00:30:02.727: INFO: Created: latency-svc-9c7zm
Jan 21 00:30:02.730: INFO: Got endpoints: latency-svc-9c7zm [794.142265ms]
Jan 21 00:30:02.761: INFO: Created: latency-svc-7rx28
Jan 21 00:30:02.766: INFO: Got endpoints: latency-svc-7rx28 [707.542022ms]
Jan 21 00:30:02.800: INFO: Created: latency-svc-5mf5k
Jan 21 00:30:02.810: INFO: Got endpoints: latency-svc-5mf5k [744.391029ms]
Jan 21 00:30:02.822: INFO: Created: latency-svc-lvpbk
Jan 21 00:30:02.880: INFO: Created: latency-svc-54fmw
Jan 21 00:30:02.880: INFO: Got endpoints: latency-svc-lvpbk [776.869108ms]
Jan 21 00:30:02.894: INFO: Got endpoints: latency-svc-54fmw [754.736679ms]
Jan 21 00:30:02.917: INFO: Created: latency-svc-68gvc
Jan 21 00:30:02.929: INFO: Got endpoints: latency-svc-68gvc [737.791771ms]
Jan 21 00:30:02.961: INFO: Created: latency-svc-zmx64
Jan 21 00:30:02.969: INFO: Got endpoints: latency-svc-zmx64 [738.204023ms]
Jan 21 00:30:03.031: INFO: Created: latency-svc-x4dql
Jan 21 00:30:03.042: INFO: Got endpoints: latency-svc-x4dql [642.219584ms]
Jan 21 00:30:03.173: INFO: Created: latency-svc-bfxpc
Jan 21 00:30:03.185: INFO: Got endpoints: latency-svc-bfxpc [774.924254ms]
Jan 21 00:30:03.209: INFO: Created: latency-svc-c7tz6
Jan 21 00:30:03.237: INFO: Got endpoints: latency-svc-c7tz6 [807.409176ms]
Jan 21 00:30:03.268: INFO: Created: latency-svc-8zjx4
Jan 21 00:30:03.320: INFO: Got endpoints: latency-svc-8zjx4 [851.261133ms]
Jan 21 00:30:03.334: INFO: Created: latency-svc-rlcz2
Jan 21 00:30:03.340: INFO: Got endpoints: latency-svc-rlcz2 [857.036637ms]
Jan 21 00:30:03.361: INFO: Created: latency-svc-hltxm
Jan 21 00:30:03.371: INFO: Got endpoints: latency-svc-hltxm [792.39381ms]
Jan 21 00:30:03.397: INFO: Created: latency-svc-w64px
Jan 21 00:30:03.397: INFO: Got endpoints: latency-svc-w64px [797.383854ms]
Jan 21 00:30:03.416: INFO: Created: latency-svc-czt87
Jan 21 00:30:03.417: INFO: Got endpoints: latency-svc-czt87 [786.197633ms]
Jan 21 00:30:03.543: INFO: Created: latency-svc-hw4gt
Jan 21 00:30:03.553: INFO: Got endpoints: latency-svc-hw4gt [823.39192ms]
Jan 21 00:30:03.584: INFO: Created: latency-svc-66q44
Jan 21 00:30:03.587: INFO: Got endpoints: latency-svc-66q44 [821.304247ms]
Jan 21 00:30:03.605: INFO: Created: latency-svc-xdxpk
Jan 21 00:30:03.622: INFO: Got endpoints: latency-svc-xdxpk [68.691345ms]
Jan 21 00:30:03.629: INFO: Created: latency-svc-rt4q6
Jan 21 00:30:03.629: INFO: Got endpoints: latency-svc-rt4q6 [818.98907ms]
Jan 21 00:30:03.707: INFO: Created: latency-svc-r4dl6
Jan 21 00:30:03.735: INFO: Got endpoints: latency-svc-r4dl6 [855.209714ms]
Jan 21 00:30:03.737: INFO: Created: latency-svc-429j7
Jan 21 00:30:03.790: INFO: Got endpoints: latency-svc-429j7 [896.345915ms]
Jan 21 00:30:03.923: INFO: Created: latency-svc-5gtxw
Jan 21 00:30:03.928: INFO: Got endpoints: latency-svc-5gtxw [998.232682ms]
Jan 21 00:30:04.010: INFO: Created: latency-svc-lpxwl
Jan 21 00:30:04.071: INFO: Got endpoints: latency-svc-lpxwl [1.10114514s]
Jan 21 00:30:04.138: INFO: Created: latency-svc-874jt
Jan 21 00:30:04.149: INFO: Got endpoints: latency-svc-874jt [1.106970942s]
Jan 21 00:30:04.224: INFO: Created: latency-svc-qlplj
Jan 21 00:30:04.251: INFO: Created: latency-svc-99nkv
Jan 21 00:30:04.253: INFO: Got endpoints: latency-svc-qlplj [1.068300596s]
Jan 21 00:30:04.258: INFO: Got endpoints: latency-svc-99nkv [1.020353203s]
Jan 21 00:30:04.287: INFO: Created: latency-svc-brjwb
Jan 21 00:30:04.299: INFO: Got endpoints: latency-svc-brjwb [978.917978ms]
Jan 21 00:30:04.320: INFO: Created: latency-svc-9rxf2
Jan 21 00:30:04.320: INFO: Got endpoints: latency-svc-9rxf2 [980.085257ms]
Jan 21 00:30:04.499: INFO: Created: latency-svc-tl88t
Jan 21 00:30:04.510: INFO: Got endpoints: latency-svc-tl88t [1.138732795s]
Jan 21 00:30:04.580: INFO: Created: latency-svc-6wr5b
Jan 21 00:30:04.584: INFO: Got endpoints: latency-svc-6wr5b [1.186510716s]
Jan 21 00:30:04.665: INFO: Created: latency-svc-x7m8c
Jan 21 00:30:04.696: INFO: Got endpoints: latency-svc-x7m8c [1.278466066s]
Jan 21 00:30:04.698: INFO: Created: latency-svc-w7658
Jan 21 00:30:04.712: INFO: Got endpoints: latency-svc-w7658 [1.124914622s]
Jan 21 00:30:04.748: INFO: Created: latency-svc-8vdqt
Jan 21 00:30:04.818: INFO: Got endpoints: latency-svc-8vdqt [1.195299063s]
Jan 21 00:30:04.836: INFO: Created: latency-svc-m7vsz
Jan 21 00:30:04.850: INFO: Got endpoints: latency-svc-m7vsz [1.220526383s]
Jan 21 00:30:04.876: INFO: Created: latency-svc-2g7kk
Jan 21 00:30:04.886: INFO: Got endpoints: latency-svc-2g7kk [1.150651631s]
Jan 21 00:30:04.918: INFO: Created: latency-svc-h9ggf
Jan 21 00:30:05.065: INFO: Got endpoints: latency-svc-h9ggf [1.27500763s]
Jan 21 00:30:05.078: INFO: Created: latency-svc-8g7l5
Jan 21 00:30:05.088: INFO: Got endpoints: latency-svc-8g7l5 [1.159294879s]
Jan 21 00:30:05.152: INFO: Created: latency-svc-qjp9t
Jan 21 00:30:05.226: INFO: Got endpoints: latency-svc-qjp9t [1.155247039s]
Jan 21 00:30:05.245: INFO: Created: latency-svc-jxp5t
Jan 21 00:30:05.249: INFO: Got endpoints: latency-svc-jxp5t [1.100283216s]
Jan 21 00:30:05.278: INFO: Created: latency-svc-96285
Jan 21 00:30:05.294: INFO: Got endpoints: latency-svc-96285 [1.040390506s]
Jan 21 00:30:05.316: INFO: Created: latency-svc-fh9bj
Jan 21 00:30:05.395: INFO: Got endpoints: latency-svc-fh9bj [1.137645611s]
Jan 21 00:30:05.400: INFO: Created: latency-svc-26lwm
Jan 21 00:30:05.414: INFO: Got endpoints: latency-svc-26lwm [1.114370812s]
Jan 21 00:30:05.443: INFO: Created: latency-svc-2slp2
Jan 21 00:30:05.457: INFO: Got endpoints: latency-svc-2slp2 [1.137012542s]
Jan 21 00:30:05.574: INFO: Created: latency-svc-vjwkr
Jan 21 00:30:05.587: INFO: Got endpoints: latency-svc-vjwkr [1.077496534s]
Jan 21 00:30:05.663: INFO: Created: latency-svc-xm8kp
Jan 21 00:30:05.669: INFO: Got endpoints: latency-svc-xm8kp [1.085200486s]
Jan 21 00:30:05.759: INFO: Created: latency-svc-jfqns
Jan 21 00:30:05.769: INFO: Got endpoints: latency-svc-jfqns [1.072979293s]
Jan 21 00:30:05.809: INFO: Created: latency-svc-kp8k7
Jan 21 00:30:05.825: INFO: Got endpoints: latency-svc-kp8k7 [1.112319918s]
Jan 21 00:30:05.956: INFO: Created: latency-svc-xjhm5
Jan 21 00:30:05.962: INFO: Got endpoints: latency-svc-xjhm5 [1.144706904s]
Jan 21 00:30:05.993: INFO: Created: latency-svc-2g2dq
Jan 21 00:30:06.001: INFO: Got endpoints: latency-svc-2g2dq [1.151722124s]
Jan 21 00:30:06.018: INFO: Created: latency-svc-jr6fm
Jan 21 00:30:06.021: INFO: Got endpoints: latency-svc-jr6fm [1.134082032s]
Jan 21 00:30:06.161: INFO: Created: latency-svc-98q26
Jan 21 00:30:06.170: INFO: Got endpoints: latency-svc-98q26 [1.104501153s]
Jan 21 00:30:06.220: INFO: Created: latency-svc-zckkh
Jan 21 00:30:06.225: INFO: Got endpoints: latency-svc-zckkh [1.137634012s]
Jan 21 00:30:06.252: INFO: Created: latency-svc-cdhg7
Jan 21 00:30:06.257: INFO: Got endpoints: latency-svc-cdhg7 [1.030827493s]
Jan 21 00:30:06.336: INFO: Created: latency-svc-gwfsf
Jan 21 00:30:06.360: INFO: Got endpoints: latency-svc-gwfsf [1.110354736s]
Jan 21 00:30:06.394: INFO: Created: latency-svc-cgnq7
Jan 21 00:30:06.408: INFO: Got endpoints: latency-svc-cgnq7 [1.113717968s]
Jan 21 00:30:06.432: INFO: Created: latency-svc-7hcpd
Jan 21 00:30:06.535: INFO: Got endpoints: latency-svc-7hcpd [1.139678135s]
Jan 21 00:30:06.635: INFO: Created: latency-svc-gkjgm
Jan 21 00:30:06.635: INFO: Created: latency-svc-lm8jf
Jan 21 00:30:06.678: INFO: Got endpoints: latency-svc-lm8jf [1.220592364s]
Jan 21 00:30:06.678: INFO: Got endpoints: latency-svc-gkjgm [1.263794978s]
Jan 21 00:30:06.688: INFO: Created: latency-svc-9lpdj
Jan 21 00:30:06.692: INFO: Got endpoints: latency-svc-9lpdj [1.104056616s]
Jan 21 00:30:06.716: INFO: Created: latency-svc-582lm
Jan 21 00:30:06.722: INFO: Got endpoints: latency-svc-582lm [1.052973455s]
Jan 21 00:30:06.755: INFO: Created: latency-svc-2j55t
Jan 21 00:30:06.768: INFO: Got endpoints: latency-svc-2j55t [998.707526ms]
Jan 21 00:30:06.844: INFO: Created: latency-svc-6mzn6
Jan 21 00:30:06.852: INFO: Got endpoints: latency-svc-6mzn6 [1.027002427s]
Jan 21 00:30:06.911: INFO: Created: latency-svc-fsv4n
Jan 21 00:30:06.924: INFO: Got endpoints: latency-svc-fsv4n [961.203572ms]
Jan 21 00:30:07.117: INFO: Created: latency-svc-wqztd
Jan 21 00:30:07.124: INFO: Got endpoints: latency-svc-wqztd [1.122438389s]
Jan 21 00:30:07.176: INFO: Created: latency-svc-clbts
Jan 21 00:30:07.205: INFO: Got endpoints: latency-svc-clbts [1.184936232s]
Jan 21 00:30:07.378: INFO: Created: latency-svc-qfvpk
Jan 21 00:30:07.378: INFO: Got endpoints: latency-svc-qfvpk [1.208031732s]
Jan 21 00:30:07.458: INFO: Created: latency-svc-v7sps
Jan 21 00:30:07.612: INFO: Got endpoints: latency-svc-v7sps [1.386806883s]
Jan 21 00:30:07.616: INFO: Created: latency-svc-5d9th
Jan 21 00:30:07.793: INFO: Got endpoints: latency-svc-5d9th [1.535023112s]
Jan 21 00:30:07.817: INFO: Created: latency-svc-5jbr8
Jan 21 00:30:07.866: INFO: Created: latency-svc-br8kv
Jan 21 00:30:07.868: INFO: Got endpoints: latency-svc-5jbr8 [1.507870938s]
Jan 21 00:30:08.066: INFO: Got endpoints: latency-svc-br8kv [1.658739181s]
Jan 21 00:30:08.246: INFO: Created: latency-svc-n6hng
Jan 21 00:30:08.249: INFO: Got endpoints: latency-svc-n6hng [1.713628613s]
Jan 21 00:30:08.289: INFO: Created: latency-svc-kdsbk
Jan 21 00:30:08.322: INFO: Got endpoints: latency-svc-kdsbk [1.64380094s]
Jan 21 00:30:08.450: INFO: Created: latency-svc-c4p6p
Jan 21 00:30:08.457: INFO: Got endpoints: latency-svc-c4p6p [1.779160684s]
Jan 21 00:30:08.493: INFO: Created: latency-svc-vn2wf
Jan 21 00:30:08.502: INFO: Got endpoints: latency-svc-vn2wf [1.810252315s]
Jan 21 00:30:08.534: INFO: Created: latency-svc-hs7b7
Jan 21 00:30:08.589: INFO: Got endpoints: latency-svc-hs7b7 [1.865878342s]
Jan 21 00:30:08.623: INFO: Created: latency-svc-zdhx5
Jan 21 00:30:08.626: INFO: Got endpoints: latency-svc-zdhx5 [1.857847757s]
Jan 21 00:30:08.656: INFO: Created: latency-svc-dc2gn
Jan 21 00:30:08.667: INFO: Got endpoints: latency-svc-dc2gn [1.814881442s]
Jan 21 00:30:08.755: INFO: Created: latency-svc-7d82x
Jan 21 00:30:08.761: INFO: Got endpoints: latency-svc-7d82x [1.836800672s]
Jan 21 00:30:08.836: INFO: Created: latency-svc-ggqhm
Jan 21 00:30:08.844: INFO: Got endpoints: latency-svc-ggqhm [1.719878533s]
Jan 21 00:30:08.909: INFO: Created: latency-svc-bx8fr
Jan 21 00:30:08.909: INFO: Got endpoints: latency-svc-bx8fr [1.703370645s]
Jan 21 00:30:08.941: INFO: Created: latency-svc-tgm6v
Jan 21 00:30:08.967: INFO: Got endpoints: latency-svc-tgm6v [1.588848738s]
Jan 21 00:30:08.971: INFO: Created: latency-svc-sqz85
Jan 21 00:30:08.989: INFO: Got endpoints: latency-svc-sqz85 [1.376783189s]
Jan 21 00:30:09.055: INFO: Created: latency-svc-f9bmn
Jan 21 00:30:09.064: INFO: Got endpoints: latency-svc-f9bmn [1.270791311s]
Jan 21 00:30:09.140: INFO: Created: latency-svc-gvxqb
Jan 21 00:30:09.269: INFO: Got endpoints: latency-svc-gvxqb [1.400889039s]
Jan 21 00:30:09.290: INFO: Created: latency-svc-bh8qg
Jan 21 00:30:09.307: INFO: Got endpoints: latency-svc-bh8qg [1.240472343s]
Jan 21 00:30:09.345: INFO: Created: latency-svc-wcwrc
Jan 21 00:30:09.350: INFO: Got endpoints: latency-svc-wcwrc [1.10125107s]
Jan 21 00:30:09.420: INFO: Created: latency-svc-md2z2
Jan 21 00:30:09.421: INFO: Got endpoints: latency-svc-md2z2 [1.099188974s]
Jan 21 00:30:09.452: INFO: Created: latency-svc-8fnjg
Jan 21 00:30:09.470: INFO: Got endpoints: latency-svc-8fnjg [1.012008546s]
Jan 21 00:30:09.492: INFO: Created: latency-svc-97brj
Jan 21 00:30:09.496: INFO: Got endpoints: latency-svc-97brj [993.491565ms]
Jan 21 00:30:09.559: INFO: Created: latency-svc-8h95g
Jan 21 00:30:09.576: INFO: Got endpoints: latency-svc-8h95g [986.613696ms]
Jan 21 00:30:09.584: INFO: Created: latency-svc-j7fbz
Jan 21 00:30:09.729: INFO: Got endpoints: latency-svc-j7fbz [1.102556691s]
Jan 21 00:30:09.730: INFO: Created: latency-svc-l5s5t
Jan 21 00:30:09.780: INFO: Created: latency-svc-wm2q7
Jan 21 00:30:09.784: INFO: Got endpoints: latency-svc-l5s5t [1.116438269s]
Jan 21 00:30:09.789: INFO: Got endpoints: latency-svc-wm2q7 [1.028198491s]
Jan 21 00:30:09.819: INFO: Created: latency-svc-w8c7l
Jan 21 00:30:09.825: INFO: Got endpoints: latency-svc-w8c7l [980.218976ms]
Jan 21 00:30:09.886: INFO: Created: latency-svc-vrf4c
Jan 21 00:30:09.893: INFO: Got endpoints: latency-svc-vrf4c [983.432508ms]
Jan 21 00:30:09.935: INFO: Created: latency-svc-94rvg
Jan 21 00:30:09.948: INFO: Got endpoints: latency-svc-94rvg [981.074879ms]
Jan 21 00:30:09.963: INFO: Created: latency-svc-6hrwt
Jan 21 00:30:09.976: INFO: Got endpoints: latency-svc-6hrwt [987.170028ms]
Jan 21 00:30:10.146: INFO: Created: latency-svc-vd2t5
Jan 21 00:30:10.169: INFO: Got endpoints: latency-svc-vd2t5 [1.104549561s]
Jan 21 00:30:10.513: INFO: Created: latency-svc-j6m9q
Jan 21 00:30:10.531: INFO: Got endpoints: latency-svc-j6m9q [1.261625937s]
Jan 21 00:30:10.598: INFO: Created: latency-svc-qnwvm
Jan 21 00:30:10.844: INFO: Got endpoints: latency-svc-qnwvm [1.536216883s]
Jan 21 00:30:10.873: INFO: Created: latency-svc-ds78j
Jan 21 00:30:10.878: INFO: Got endpoints: latency-svc-ds78j [1.527250105s]
Jan 21 00:30:10.925: INFO: Created: latency-svc-6k8bl
Jan 21 00:30:10.931: INFO: Got endpoints: latency-svc-6k8bl [1.509836097s]
Jan 21 00:30:11.011: INFO: Created: latency-svc-777dc
Jan 21 00:30:11.017: INFO: Got endpoints: latency-svc-777dc [1.54731913s]
Jan 21 00:30:11.040: INFO: Created: latency-svc-qq6c2
Jan 21 00:30:11.052: INFO: Got endpoints: latency-svc-qq6c2 [1.556014627s]
Jan 21 00:30:11.090: INFO: Created: latency-svc-jcrr8
Jan 21 00:30:11.204: INFO: Got endpoints: latency-svc-jcrr8 [1.627935452s]
Jan 21 00:30:11.207: INFO: Created: latency-svc-z6hf5
Jan 21 00:30:11.233: INFO: Got endpoints: latency-svc-z6hf5 [1.503773079s]
Jan 21 00:30:11.271: INFO: Created: latency-svc-c6nxk
Jan 21 00:30:11.298: INFO: Got endpoints: latency-svc-c6nxk [1.514880952s]
Jan 21 00:30:11.301: INFO: Created: latency-svc-pwwwc
Jan 21 00:30:11.376: INFO: Got endpoints: latency-svc-pwwwc [1.5870338s]
Jan 21 00:30:11.381: INFO: Created: latency-svc-mkzjt
Jan 21 00:30:11.412: INFO: Got endpoints: latency-svc-mkzjt [1.587151933s]
Jan 21 00:30:11.433: INFO: Created: latency-svc-c4hsc
Jan 21 00:30:11.441: INFO: Got endpoints: latency-svc-c4hsc [1.548239978s]
Jan 21 00:30:11.444: INFO: Created: latency-svc-t8698
Jan 21 00:30:11.470: INFO: Got endpoints: latency-svc-t8698 [1.521958865s]
Jan 21 00:30:11.475: INFO: Created: latency-svc-zw54b
Jan 21 00:30:11.588: INFO: Got endpoints: latency-svc-zw54b [1.610987739s]
Jan 21 00:30:11.596: INFO: Created: latency-svc-xbnpg
Jan 21 00:30:11.603: INFO: Got endpoints: latency-svc-xbnpg [1.433846185s]
Jan 21 00:30:11.624: INFO: Created: latency-svc-q2fx5
Jan 21 00:30:11.627: INFO: Got endpoints: latency-svc-q2fx5 [1.095645301s]
Jan 21 00:30:11.650: INFO: Created: latency-svc-ng2nr
Jan 21 00:30:11.667: INFO: Created: latency-svc-cvq9n
Jan 21 00:30:11.667: INFO: Got endpoints: latency-svc-ng2nr [823.500387ms]
Jan 21 00:30:11.683: INFO: Got endpoints: latency-svc-cvq9n [804.537126ms]
Jan 21 00:30:11.746: INFO: Created: latency-svc-f4krp
Jan 21 00:30:11.752: INFO: Got endpoints: latency-svc-f4krp [820.539437ms]
Jan 21 00:30:11.787: INFO: Created: latency-svc-z797q
Jan 21 00:30:11.795: INFO: Got endpoints: latency-svc-z797q [777.811854ms]
Jan 21 00:30:11.824: INFO: Created: latency-svc-75dwv
Jan 21 00:30:11.897: INFO: Got endpoints: latency-svc-75dwv [844.35836ms]
Jan 21 00:30:11.931: INFO: Created: latency-svc-vvdc6
Jan 21 00:30:11.934: INFO: Got endpoints: latency-svc-vvdc6 [730.00189ms]
Jan 21 00:30:11.975: INFO: Created: latency-svc-rjptc
Jan 21 00:30:11.983: INFO: Got endpoints: latency-svc-rjptc [749.279858ms]
Jan 21 00:30:12.067: INFO: Created: latency-svc-f69rf
Jan 21 00:30:12.104: INFO: Got endpoints: latency-svc-f69rf [805.567804ms]
Jan 21 00:30:12.113: INFO: Created: latency-svc-m8clf
Jan 21 00:30:12.117: INFO: Got endpoints: latency-svc-m8clf [741.118532ms]
Jan 21 00:30:12.143: INFO: Created: latency-svc-fz572
Jan 21 00:30:12.156: INFO: Got endpoints: latency-svc-fz572 [743.747067ms]
Jan 21 00:30:12.231: INFO: Created: latency-svc-77zhd
Jan 21 00:30:12.231: INFO: Got endpoints: latency-svc-77zhd [789.99448ms]
Jan 21 00:30:12.407: INFO: Created: latency-svc-vvctt
Jan 21 00:30:12.413: INFO: Got endpoints: latency-svc-vvctt [942.110756ms]
Jan 21 00:30:12.438: INFO: Created: latency-svc-9bjtm
Jan 21 00:30:12.447: INFO: Got endpoints: latency-svc-9bjtm [859.678286ms]
Jan 21 00:30:12.470: INFO: Created: latency-svc-snnfp
Jan 21 00:30:12.477: INFO: Got endpoints: latency-svc-snnfp [874.507525ms]
Jan 21 00:30:12.610: INFO: Created: latency-svc-htfgh
Jan 21 00:30:12.613: INFO: Got endpoints: latency-svc-htfgh [985.869657ms]
Jan 21 00:30:12.650: INFO: Created: latency-svc-7rkvv
Jan 21 00:30:12.665: INFO: Got endpoints: latency-svc-7rkvv [997.276118ms]
Jan 21 00:30:12.694: INFO: Created: latency-svc-fq99l
Jan 21 00:30:12.698: INFO: Got endpoints: latency-svc-fq99l [1.015498632s]
Jan 21 00:30:12.757: INFO: Created: latency-svc-dt58l
Jan 21 00:30:12.772: INFO: Got endpoints: latency-svc-dt58l [1.019855163s]
Jan 21 00:30:12.844: INFO: Created: latency-svc-t5hm8
Jan 21 00:30:12.909: INFO: Got endpoints: latency-svc-t5hm8 [1.113483879s]
Jan 21 00:30:12.916: INFO: Created: latency-svc-wt684
Jan 21 00:30:12.924: INFO: Got endpoints: latency-svc-wt684 [1.02712656s]
Jan 21 00:30:12.989: INFO: Created: latency-svc-cbn8b
Jan 21 00:30:12.995: INFO: Got endpoints: latency-svc-cbn8b [1.060476074s]
Jan 21 00:30:13.051: INFO: Created: latency-svc-k8vmt
Jan 21 00:30:13.055: INFO: Got endpoints: latency-svc-k8vmt [1.072657199s]
Jan 21 00:30:13.088: INFO: Created: latency-svc-6t9wz
Jan 21 00:30:13.195: INFO: Got endpoints: latency-svc-6t9wz [1.090548912s]
Jan 21 00:30:13.204: INFO: Created: latency-svc-fp5lj
Jan 21 00:30:13.210: INFO: Got endpoints: latency-svc-fp5lj [1.092732833s]
Jan 21 00:30:13.248: INFO: Created: latency-svc-8w7nt
Jan 21 00:30:13.255: INFO: Got endpoints: latency-svc-8w7nt [1.098061197s]
Jan 21 00:30:13.349: INFO: Created: latency-svc-nlk2m
Jan 21 00:30:13.370: INFO: Got endpoints: latency-svc-nlk2m [1.138140082s]
Jan 21 00:30:13.393: INFO: Created: latency-svc-rpcb2
Jan 21 00:30:13.408: INFO: Got endpoints: latency-svc-rpcb2 [995.08109ms]
Jan 21 00:30:13.426: INFO: Created: latency-svc-5mhhz
Jan 21 00:30:13.429: INFO: Got endpoints: latency-svc-5mhhz [981.590044ms]
Jan 21 00:30:13.491: INFO: Created: latency-svc-jlhbx
Jan 21 00:30:13.493: INFO: Got endpoints: latency-svc-jlhbx [1.015817278s]
Jan 21 00:30:13.518: INFO: Created: latency-svc-djw76
Jan 21 00:30:13.523: INFO: Got endpoints: latency-svc-djw76 [910.03049ms]
Jan 21 00:30:13.548: INFO: Created: latency-svc-z2l2m
Jan 21 00:30:13.554: INFO: Got endpoints: latency-svc-z2l2m [888.692875ms]
Jan 21 00:30:13.579: INFO: Created: latency-svc-p2sbs
Jan 21 00:30:13.588: INFO: Got endpoints: latency-svc-p2sbs [889.700273ms]
Jan 21 00:30:13.723: INFO: Created: latency-svc-r6b6k
Jan 21 00:30:13.736: INFO: Got endpoints: latency-svc-r6b6k [963.652953ms]
Jan 21 00:30:13.756: INFO: Created: latency-svc-p9vjb
Jan 21 00:30:13.762: INFO: Got endpoints: latency-svc-p9vjb [852.351681ms]
Jan 21 00:30:13.794: INFO: Created: latency-svc-6zqqh
Jan 21 00:30:13.880: INFO: Got endpoints: latency-svc-6zqqh [955.839642ms]
Jan 21 00:30:13.885: INFO: Created: latency-svc-nj9g8
Jan 21 00:30:13.887: INFO: Got endpoints: latency-svc-nj9g8 [892.15866ms]
Jan 21 00:30:13.922: INFO: Created: latency-svc-c25wm
Jan 21 00:30:13.926: INFO: Got endpoints: latency-svc-c25wm [871.036283ms]
Jan 21 00:30:13.952: INFO: Created: latency-svc-69zsf
Jan 21 00:30:14.058: INFO: Got endpoints: latency-svc-69zsf [862.454504ms]
Jan 21 00:30:14.061: INFO: Created: latency-svc-b66nb
Jan 21 00:30:14.104: INFO: Got endpoints: latency-svc-b66nb [893.167945ms]
Jan 21 00:30:14.219: INFO: Created: latency-svc-hd65x
Jan 21 00:30:14.226: INFO: Got endpoints: latency-svc-hd65x [970.786338ms]
Jan 21 00:30:14.256: INFO: Created: latency-svc-9kqdq
Jan 21 00:30:14.263: INFO: Got endpoints: latency-svc-9kqdq [893.083256ms]
Jan 21 00:30:14.313: INFO: Created: latency-svc-wnxvh
Jan 21 00:30:14.316: INFO: Got endpoints: latency-svc-wnxvh [908.007845ms]
Jan 21 00:30:14.387: INFO: Created: latency-svc-x9s9q
Jan 21 00:30:14.393: INFO: Got endpoints: latency-svc-x9s9q [963.852966ms]
Jan 21 00:30:14.420: INFO: Created: latency-svc-stkhk
Jan 21 00:30:14.423: INFO: Got endpoints: latency-svc-stkhk [930.145217ms]
Jan 21 00:30:14.450: INFO: Created: latency-svc-dldfr
Jan 21 00:30:14.541: INFO: Got endpoints: latency-svc-dldfr [1.017733911s]
Jan 21 00:30:14.570: INFO: Created: latency-svc-h6p6p
Jan 21 00:30:14.573: INFO: Got endpoints: latency-svc-h6p6p [1.018429981s]
Jan 21 00:30:14.693: INFO: Created: latency-svc-qhpp2
Jan 21 00:30:14.723: INFO: Got endpoints: latency-svc-qhpp2 [1.134642621s]
Jan 21 00:30:14.727: INFO: Created: latency-svc-hmlt7
Jan 21 00:30:14.732: INFO: Got endpoints: latency-svc-hmlt7 [996.277371ms]
Jan 21 00:30:14.752: INFO: Created: latency-svc-tmhdn
Jan 21 00:30:14.762: INFO: Got endpoints: latency-svc-tmhdn [999.871948ms]
Jan 21 00:30:14.784: INFO: Created: latency-svc-rwpsx
Jan 21 00:30:14.787: INFO: Got endpoints: latency-svc-rwpsx [906.366119ms]
Jan 21 00:30:14.838: INFO: Created: latency-svc-8lgwx
Jan 21 00:30:14.842: INFO: Got endpoints: latency-svc-8lgwx [954.592246ms]
Jan 21 00:30:14.884: INFO: Created: latency-svc-9shgs
Jan 21 00:30:14.891: INFO: Got endpoints: latency-svc-9shgs [964.11091ms]
Jan 21 00:30:14.925: INFO: Created: latency-svc-75tlc
Jan 21 00:30:14.927: INFO: Got endpoints: latency-svc-75tlc [868.371548ms]
Jan 21 00:30:15.033: INFO: Created: latency-svc-h6dhr
Jan 21 00:30:15.068: INFO: Got endpoints: latency-svc-h6dhr [964.569123ms]
Jan 21 00:30:15.090: INFO: Created: latency-svc-9swhn
Jan 21 00:30:15.108: INFO: Got endpoints: latency-svc-9swhn [882.208289ms]
Jan 21 00:30:15.205: INFO: Created: latency-svc-2w656
Jan 21 00:30:15.211: INFO: Got endpoints: latency-svc-2w656 [948.033752ms]
Jan 21 00:30:15.252: INFO: Created: latency-svc-vzf9x
Jan 21 00:30:15.256: INFO: Got endpoints: latency-svc-vzf9x [939.693333ms]
Jan 21 00:30:15.256: INFO: Latencies: [68.691345ms 121.901434ms 230.327977ms 394.105814ms 401.886585ms 445.543447ms 474.352099ms 591.88266ms 631.544318ms 642.219584ms 659.220058ms 677.502637ms 707.542022ms 730.00189ms 737.791771ms 738.204023ms 741.118532ms 743.747067ms 744.391029ms 749.279858ms 754.736679ms 774.924254ms 776.869108ms 777.811854ms 786.197633ms 789.99448ms 792.39381ms 794.142265ms 797.383854ms 803.52033ms 804.537126ms 805.567804ms 807.409176ms 818.98907ms 820.539437ms 821.304247ms 823.39192ms 823.500387ms 844.35836ms 851.261133ms 852.351681ms 855.209714ms 857.036637ms 859.678286ms 862.454504ms 868.371548ms 871.036283ms 874.507525ms 882.208289ms 888.692875ms 889.700273ms 892.15866ms 893.083256ms 893.167945ms 896.345915ms 906.366119ms 908.007845ms 910.03049ms 930.145217ms 934.684167ms 937.215838ms 939.693333ms 942.110756ms 948.033752ms 954.592246ms 955.839642ms 961.203572ms 963.652953ms 963.852966ms 964.11091ms 964.569123ms 967.874946ms 970.786338ms 977.864154ms 978.917978ms 980.085257ms 980.218976ms 981.074879ms 981.590044ms 983.432508ms 985.869657ms 986.613696ms 987.170028ms 993.491565ms 995.08109ms 996.277371ms 997.276118ms 998.232682ms 998.707526ms 999.871948ms 1.012008546s 1.015498632s 1.015817278s 1.017733911s 1.018429981s 1.019855163s 1.020353203s 1.027002427s 1.02712656s 1.028198491s 1.029077946s 1.030827493s 1.040390506s 1.045700284s 1.052973455s 1.060476074s 1.063952487s 1.068300596s 1.072657199s 1.072979293s 1.075689477s 1.077496534s 1.085200486s 1.090548912s 1.092732833s 1.095645301s 1.098061197s 1.099188974s 1.100283216s 1.10114514s 1.10125107s 1.102556691s 1.104056616s 1.104493578s 1.104501153s 1.104549561s 1.106970942s 1.110354736s 1.112319918s 1.112679701s 1.113483879s 1.113717968s 1.114370812s 1.116438269s 1.122438389s 1.122819924s 1.124914622s 1.134082032s 1.134642621s 1.136927568s 1.137012542s 1.137634012s 1.137645611s 1.138140082s 1.138732795s 1.139678135s 1.144706904s 1.145242146s 1.150651631s 1.151722124s 1.155247039s 1.157923313s 1.159294879s 1.174751263s 1.184936232s 1.186510716s 1.195299063s 1.202238744s 1.208031732s 1.220526383s 1.220592364s 1.240472343s 1.261625937s 1.263794978s 1.270791311s 1.27500763s 1.278466066s 1.301687599s 1.303513406s 1.376783189s 1.386806883s 1.400889039s 1.433846185s 1.503773079s 1.507870938s 1.509836097s 1.514880952s 1.521958865s 1.527250105s 1.535023112s 1.536216883s 1.54731913s 1.548239978s 1.556014627s 1.5870338s 1.587151933s 1.588848738s 1.610987739s 1.627935452s 1.64380094s 1.658739181s 1.703370645s 1.713628613s 1.719878533s 1.779160684s 1.810252315s 1.814881442s 1.836800672s 1.857847757s 1.865878342s]
Jan 21 00:30:15.257: INFO: 50 %ile: 1.029077946s
Jan 21 00:30:15.257: INFO: 90 %ile: 1.536216883s
Jan 21 00:30:15.257: INFO: 99 %ile: 1.857847757s
Jan 21 00:30:15.257: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:30:15.257: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-3881" for this suite.

• [SLOW TEST:23.148 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Service endpoints latency should not be very high  [Conformance]","total":278,"completed":93,"skipped":1758,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
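
[Editor's note] This spec creates one backing replication controller, then times 200 throwaway services from "Created" to "Got endpoints" and reports percentiles over the sorted samples, as printed above. A small sketch of that reduction; the index convention shown is one common choice and is consistent with the percentiles in this run (error handling and the full sample list elided):

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile picks the value at quantile q from an ascending-sorted slice.
func percentile(sorted []time.Duration, q float64) time.Duration {
	i := int(q * float64(len(sorted)))
	if i >= len(sorted) {
		i = len(sorted) - 1
	}
	return sorted[i]
}

func main() {
	latencies := []time.Duration{ // the "Got endpoints" samples, in nanoseconds
		66928102, 121901434, 230327977, // ... remaining samples omitted
	}
	sort.Slice(latencies, func(i, j int) bool { return latencies[i] < latencies[j] })
	for _, q := range []float64{0.5, 0.9, 0.99} {
		fmt.Printf("%v %%ile: %v\n", q*100, percentile(latencies, q))
	}
}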
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:30:15.273: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:30:15.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8480" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691
•{"msg":"PASSED [sig-network] Services should provide secure master service  [Conformance]","total":278,"completed":94,"skipped":1780,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
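
[Editor's note] No STEP lines appear because the whole check is a single read: the default "kubernetes" service must expose a secure HTTPS port (443). A sketch of the equivalent lookup with client-go (assuming a recent client-go where Get takes a context; error handling partly elided):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(cfg)

	svc, err := cs.CoreV1().Services("default").Get(context.TODO(), "kubernetes", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range svc.Spec.Ports {
		// The conformance check expects a secure (https) port on 443.
		fmt.Printf("port %q -> %d\n", p.Name, p.Port)
	}
}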
------------------------------
[sig-api-machinery] ResourceQuota 
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:30:15.356: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:30:15.489: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4027" for this suite.
•{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":95,"skipped":1806,"failed":0}
SSSSSSS
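
[Editor's note] The same create/get/update/delete lifecycle, sketched with client-go (the namespace, quota name, and limits are illustrative; assumes a recent client-go where calls take a context):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(cfg)
	quotas := cs.CoreV1().ResourceQuotas("default")

	// Create, update, then delete: the lifecycle the spec drives.
	rq := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
		},
	}
	created, err := quotas.Create(context.TODO(), rq, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	created.Spec.Hard[corev1.ResourcePods] = resource.MustParse("10")
	if _, err := quotas.Update(context.TODO(), created, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	if err := quotas.Delete(context.TODO(), "test-quota", metav1.DeleteOptions{}); err != nil {
		panic(err)
	}
}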
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:30:15.505: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:30:57.572: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5549" for this suite.

• [SLOW TEST:42.117 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":96,"skipped":1813,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
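
[Editor's note] "Locally restarted" here means restartPolicy OnFailure: the kubelet restarts the failed container inside the same pod rather than the Job creating replacement pods. A sketch of a Job with that shape; a marker file on the pod-scoped emptyDir survives the container restart, so the retry succeeds and the Job reaches its completion count (name, counts, image, and command are illustrative):

package sketch

import (
	batchv1 "k8s.io/api/batch/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var parallelism int32 = 2
var completions int32 = 4

var flakyJob = &batchv1.Job{
	ObjectMeta: metav1.ObjectMeta{Name: "fail-once-local"},
	Spec: batchv1.JobSpec{
		Parallelism: &parallelism,
		Completions: &completions,
		Template: corev1.PodTemplateSpec{
			Spec: corev1.PodSpec{
				// OnFailure: the kubelet restarts failed containers in place.
				RestartPolicy: corev1.RestartPolicyOnFailure,
				Volumes: []corev1.Volume{{
					Name:         "data",
					VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
				}},
				Containers: []corev1.Container{{
					Name:  "c",
					Image: "busybox",
					// Fail on the first run, succeed after the local restart.
					Command:      []string{"/bin/sh", "-c", "if [ -f /data/ok ]; then exit 0; fi; touch /data/ok; exit 1"},
					VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
				}},
			},
		},
	},
}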
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:30:57.625: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:30:57.922: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"92275d21-d8e6-4dbf-be78-9e9d7b480dc7", Controller:(*bool)(0xc0020e1756), BlockOwnerDeletion:(*bool)(0xc0020e1757)}}
Jan 21 00:30:58.030: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"e20c2a88-2a58-4390-bca7-5a7052c442e4", Controller:(*bool)(0xc0003434e2), BlockOwnerDeletion:(*bool)(0xc0003434e3)}}
Jan 21 00:30:58.041: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"83c111b4-d3af-4608-877b-9c7cfacc4b1e", Controller:(*bool)(0xc0003437b6), BlockOwnerDeletion:(*bool)(0xc0003437b7)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:31:03.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-309" for this suite.

• [SLOW TEST:5.534 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":97,"skipped":1855,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
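
[Editor's note] Per the log, the three pods own one another in a cycle (pod1 is owned by pod3, pod2 by pod1, pod3 by pod2), and the garbage collector must still make progress once deletion starts. A sketch of wiring one such owner reference; the UID must be the owner's live UID, and the Controller/BlockOwnerDeletion pointers mirror the dump above:

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
)

var truth = true

// makeOwned returns a pod owned by another pod, as in the run above where
// pod1 <- pod3, pod2 <- pod1, and pod3 <- pod2 form a dependency circle.
func makeOwned(name, ownerName string, ownerUID types.UID) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: name,
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion:         "v1",
				Kind:               "Pod",
				Name:               ownerName,
				UID:                ownerUID, // must match the owner's real UID
				Controller:         &truth,
				BlockOwnerDeletion: &truth,
			}},
		},
	}
}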
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:31:03.161: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 21 00:31:04.367: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 21 00:31:06.390: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163464, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163464, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163464, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163464, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:31:08.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163464, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163464, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163464, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163464, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:31:10.396: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163464, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163464, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163464, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163464, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:31:12.745: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163464, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163464, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163464, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163464, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 21 00:31:15.530: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:31:15.537: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:31:16.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9888" for this suite.
STEP: Destroying namespace "webhook-9888-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.913 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":98,"skipped":1945,"failed":0}
SS
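
[Editor's note] The registration step above amounts to a ValidatingWebhookConfiguration that intercepts create, update, and delete of the custom resource so the webhook can deny each one. A speculative sketch of such an object; the service name and namespace come from this run's log, while the webhook name, group, resources, and path are placeholders:

package sketch

import (
	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

var (
	failPolicy          = admissionregistrationv1.Fail
	noSideEffects       = admissionregistrationv1.SideEffectClassNone
	webhookPort   int32 = 8443
	webhookPath         = "/custom-resource"
)

var denyCustomResource = &admissionregistrationv1.ValidatingWebhookConfiguration{
	ObjectMeta: metav1.ObjectMeta{Name: "deny-custom-resource"},
	Webhooks: []admissionregistrationv1.ValidatingWebhook{{
		Name: "deny-custom-resource.example.com",
		ClientConfig: admissionregistrationv1.WebhookClientConfig{
			Service: &admissionregistrationv1.ServiceReference{
				Namespace: "webhook-9888",
				Name:      "e2e-test-webhook",
				Path:      &webhookPath,
				Port:      &webhookPort,
			},
			CABundle: nil, // the suite injects the server cert set up above
		},
		Rules: []admissionregistrationv1.RuleWithOperations{{
			Operations: []admissionregistrationv1.OperationType{
				admissionregistrationv1.Create,
				admissionregistrationv1.Update,
				admissionregistrationv1.Delete,
			},
			Rule: admissionregistrationv1.Rule{
				APIGroups:   []string{"example.com"},
				APIVersions: []string{"v1"},
				Resources:   []string{"e2e-test-crds"},
			},
		}},
		SideEffects:             &noSideEffects,
		AdmissionReviewVersions: []string{"v1"},
		FailurePolicy:           &failPolicy,
	}},
}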
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:31:17.074: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods change
Jan 21 00:31:17.317: INFO: Pod name pod-release: Found 0 pods out of 1
Jan 21 00:31:22.364: INFO: Pod name pod-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:31:22.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-320" for this suite.

• [SLOW TEST:5.465 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":99,"skipped":1947,"failed":0}
SSSSS
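
[Editor's note] "Released" means the ReplicationController removes its controller ownerReference from a pod once the pod's labels stop matching the selector, then creates a replacement to hold replicas at one. A sketch of the triggering label change (the pod name suffix is a placeholder; assumes a recent client-go where Patch takes a context):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, _ := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	cs, _ := kubernetes.NewForConfig(cfg)

	// Overwrite the label the RC selects on; the controller then orphans
	// (releases) the pod instead of deleting it.
	patch := []byte(`{"metadata":{"labels":{"name":"not-pod-release"}}}`)
	_, err := cs.CoreV1().Pods("default").Patch(context.TODO(),
		"pod-release-abcde", types.StrategicMergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
}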
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:31:22.541: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 21 00:31:22.724: INFO: Waiting up to 5m0s for pod "pod-1d096554-fad0-4f0a-800a-a4fdd4495d36" in namespace "emptydir-8592" to be "success or failure"
Jan 21 00:31:22.772: INFO: Pod "pod-1d096554-fad0-4f0a-800a-a4fdd4495d36": Phase="Pending", Reason="", readiness=false. Elapsed: 47.773132ms
Jan 21 00:31:24.776: INFO: Pod "pod-1d096554-fad0-4f0a-800a-a4fdd4495d36": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05226002s
Jan 21 00:31:26.783: INFO: Pod "pod-1d096554-fad0-4f0a-800a-a4fdd4495d36": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058626426s
Jan 21 00:31:28.789: INFO: Pod "pod-1d096554-fad0-4f0a-800a-a4fdd4495d36": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065325786s
Jan 21 00:31:30.795: INFO: Pod "pod-1d096554-fad0-4f0a-800a-a4fdd4495d36": Phase="Pending", Reason="", readiness=false. Elapsed: 8.070694062s
Jan 21 00:31:32.803: INFO: Pod "pod-1d096554-fad0-4f0a-800a-a4fdd4495d36": Phase="Pending", Reason="", readiness=false. Elapsed: 10.078776117s
Jan 21 00:31:34.808: INFO: Pod "pod-1d096554-fad0-4f0a-800a-a4fdd4495d36": Phase="Pending", Reason="", readiness=false. Elapsed: 12.084192119s
Jan 21 00:31:36.817: INFO: Pod "pod-1d096554-fad0-4f0a-800a-a4fdd4495d36": Phase="Pending", Reason="", readiness=false. Elapsed: 14.092791978s
Jan 21 00:31:38.828: INFO: Pod "pod-1d096554-fad0-4f0a-800a-a4fdd4495d36": Phase="Succeeded", Reason="", readiness=false. Elapsed: 16.10371178s
STEP: Saw pod success
Jan 21 00:31:38.828: INFO: Pod "pod-1d096554-fad0-4f0a-800a-a4fdd4495d36" satisfied condition "success or failure"
Jan 21 00:31:38.832: INFO: Trying to get logs from node jerma-node pod pod-1d096554-fad0-4f0a-800a-a4fdd4495d36 container test-container: 
STEP: delete the pod
Jan 21 00:31:38.985: INFO: Waiting for pod pod-1d096554-fad0-4f0a-800a-a4fdd4495d36 to disappear
Jan 21 00:31:38.996: INFO: Pod pod-1d096554-fad0-4f0a-800a-a4fdd4495d36 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:31:38.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8592" for this suite.

• [SLOW TEST:16.470 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":100,"skipped":1952,"failed":0}
SSSSSSSSSSSS
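
[Editor's note] The (non-root,0777,tmpfs) variant runs the test container as a non-root user against a memory-backed emptyDir and verifies that a 0777 file can be created and read back. A sketch of the corresponding pod spec (the UID, image, and command are illustrative assumptions):

package sketch

import corev1 "k8s.io/api/core/v1"

var nonRootUser int64 = 1001

// tmpfs-backed emptyDir mounted by a non-root container.
var podSpec = corev1.PodSpec{
	SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRootUser},
	Volumes: []corev1.Volume{{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}},
	Containers: []corev1.Container{{
		Name:         "test-container",
		Image:        "busybox",
		Command:      []string{"/bin/sh", "-c", "touch /mnt/test && chmod 0777 /mnt/test && ls -l /mnt/test"},
		VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/mnt"}},
	}},
	RestartPolicy: corev1.RestartPolicyNever,
}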
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:31:39.013: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name secret-emptykey-test-1f578512-af4c-4ff1-8c3d-0e62ccd90650
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:31:39.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1329" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":101,"skipped":1964,"failed":0}
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:31:39.244: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 21 00:31:40.632: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 21 00:31:42.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163500, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163500, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163500, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163500, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:31:44.665: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163500, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163500, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163500, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163500, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:31:46.666: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163500, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163500, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163500, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163500, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:31:48.670: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163500, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163500, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163500, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163500, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 21 00:31:51.709: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:31:51.716: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-2893-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:31:52.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-310" for this suite.
STEP: Destroying namespace "webhook-310-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:13.944 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":102,"skipped":1968,"failed":0}
[sig-cli] Kubectl client Kubectl version 
  should check if all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:31:53.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should check if all data is printed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:31:53.252: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Jan 21 00:31:53.441: INFO: stderr: ""
Jan 21 00:31:53.441: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"18+\", GitVersion:\"v1.18.0-alpha.1.106+4f70231ce7736c\", GitCommit:\"4f70231ce7736cc748f76526c98955f86c667a41\", GitTreeState:\"clean\", BuildDate:\"2019-12-22T17:08:54Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.0\", GitCommit:\"70132b0f130acc0bed193d9ba59dd186f0e634cf\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T21:12:17Z\", GoVersion:\"go1.13.4\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:31:53.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8798" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed  [Conformance]","total":278,"completed":103,"skipped":1968,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:31:53.474: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-2c17a39f-1824-46c1-a714-eca5eb7d58de
STEP: Creating a pod to test consume configMaps
Jan 21 00:31:53.617: INFO: Waiting up to 5m0s for pod "pod-configmaps-b7db09ee-68de-40a7-88d7-0b118d22ad56" in namespace "configmap-8669" to be "success or failure"
Jan 21 00:31:53.634: INFO: Pod "pod-configmaps-b7db09ee-68de-40a7-88d7-0b118d22ad56": Phase="Pending", Reason="", readiness=false. Elapsed: 16.762554ms
Jan 21 00:31:55.647: INFO: Pod "pod-configmaps-b7db09ee-68de-40a7-88d7-0b118d22ad56": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029661175s
Jan 21 00:31:57.656: INFO: Pod "pod-configmaps-b7db09ee-68de-40a7-88d7-0b118d22ad56": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03842987s
Jan 21 00:31:59.663: INFO: Pod "pod-configmaps-b7db09ee-68de-40a7-88d7-0b118d22ad56": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045742687s
Jan 21 00:32:01.671: INFO: Pod "pod-configmaps-b7db09ee-68de-40a7-88d7-0b118d22ad56": Phase="Pending", Reason="", readiness=false. Elapsed: 8.053033495s
Jan 21 00:32:03.683: INFO: Pod "pod-configmaps-b7db09ee-68de-40a7-88d7-0b118d22ad56": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065029556s
STEP: Saw pod success
Jan 21 00:32:03.683: INFO: Pod "pod-configmaps-b7db09ee-68de-40a7-88d7-0b118d22ad56" satisfied condition "success or failure"
Jan 21 00:32:03.688: INFO: Trying to get logs from node jerma-node pod pod-configmaps-b7db09ee-68de-40a7-88d7-0b118d22ad56 container configmap-volume-test: 
STEP: delete the pod
Jan 21 00:32:03.730: INFO: Waiting for pod pod-configmaps-b7db09ee-68de-40a7-88d7-0b118d22ad56 to disappear
Jan 21 00:32:03.746: INFO: Pod pod-configmaps-b7db09ee-68de-40a7-88d7-0b118d22ad56 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:32:03.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-8669" for this suite.

• [SLOW TEST:10.284 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":104,"skipped":1987,"failed":0}
S
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:32:03.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-1312
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-1312
STEP: Creating statefulset with conflicting port in namespace statefulset-1312
STEP: Waiting until pod test-pod starts running in namespace statefulset-1312
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-1312
Jan 21 00:32:14.193: INFO: Observed stateful pod in namespace: statefulset-1312, name: ss-0, uid: 5a6bda94-9405-4e54-9a02-608283ecc675, status phase: Pending. Waiting for the statefulset controller to delete it.
Jan 21 00:32:22.310: INFO: Observed stateful pod in namespace: statefulset-1312, name: ss-0, uid: 5a6bda94-9405-4e54-9a02-608283ecc675, status phase: Failed. Waiting for the statefulset controller to delete it.
Jan 21 00:32:22.419: INFO: Observed stateful pod in namespace: statefulset-1312, name: ss-0, uid: 5a6bda94-9405-4e54-9a02-608283ecc675, status phase: Failed. Waiting for the statefulset controller to delete it.
Jan 21 00:32:22.428: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-1312
STEP: Removing pod with conflicting port in namespace statefulset-1312
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-1312 and reaches the Running state
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 21 00:32:32.664: INFO: Deleting all statefulset in ns statefulset-1312
Jan 21 00:32:32.673: INFO: Scaling statefulset ss to 0
Jan 21 00:32:42.754: INFO: Waiting for statefulset status.replicas updated to 0
Jan 21 00:32:42.757: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:32:42.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-1312" for this suite.

• [SLOW TEST:39.040 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    Should recreate evicted statefulset [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":105,"skipped":1988,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:32:42.802: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-dca82620-b5d1-443a-8187-40823eecc86b
STEP: Creating a pod to test consume configMaps
Jan 21 00:32:42.973: INFO: Waiting up to 5m0s for pod "pod-configmaps-7bddff34-edd2-4831-b0b2-d4eac4b83c94" in namespace "configmap-9329" to be "success or failure"
Jan 21 00:32:42.981: INFO: Pod "pod-configmaps-7bddff34-edd2-4831-b0b2-d4eac4b83c94": Phase="Pending", Reason="", readiness=false. Elapsed: 7.458658ms
Jan 21 00:32:44.989: INFO: Pod "pod-configmaps-7bddff34-edd2-4831-b0b2-d4eac4b83c94": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015868016s
Jan 21 00:32:46.998: INFO: Pod "pod-configmaps-7bddff34-edd2-4831-b0b2-d4eac4b83c94": Phase="Pending", Reason="", readiness=false. Elapsed: 4.024796122s
Jan 21 00:32:49.073: INFO: Pod "pod-configmaps-7bddff34-edd2-4831-b0b2-d4eac4b83c94": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100276139s
Jan 21 00:32:51.097: INFO: Pod "pod-configmaps-7bddff34-edd2-4831-b0b2-d4eac4b83c94": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123803045s
Jan 21 00:32:53.103: INFO: Pod "pod-configmaps-7bddff34-edd2-4831-b0b2-d4eac4b83c94": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.129574969s
STEP: Saw pod success
Jan 21 00:32:53.103: INFO: Pod "pod-configmaps-7bddff34-edd2-4831-b0b2-d4eac4b83c94" satisfied condition "success or failure"
Jan 21 00:32:53.106: INFO: Trying to get logs from node jerma-node pod pod-configmaps-7bddff34-edd2-4831-b0b2-d4eac4b83c94 container configmap-volume-test: 
STEP: delete the pod
Jan 21 00:32:53.142: INFO: Waiting for pod pod-configmaps-7bddff34-edd2-4831-b0b2-d4eac4b83c94 to disappear
Jan 21 00:32:53.158: INFO: Pod pod-configmaps-7bddff34-edd2-4831-b0b2-d4eac4b83c94 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:32:53.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9329" for this suite.

• [SLOW TEST:10.377 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":2032,"failed":0}
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:32:53.179: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-6e1d62f3-d847-4a5a-8040-0845e6462b37
STEP: Creating a pod to test consume configMaps
Jan 21 00:32:53.324: INFO: Waiting up to 5m0s for pod "pod-configmaps-b3d92713-51ff-4da9-816c-71cf520c95bd" in namespace "configmap-6120" to be "success or failure"
Jan 21 00:32:53.347: INFO: Pod "pod-configmaps-b3d92713-51ff-4da9-816c-71cf520c95bd": Phase="Pending", Reason="", readiness=false. Elapsed: 22.576651ms
Jan 21 00:32:55.355: INFO: Pod "pod-configmaps-b3d92713-51ff-4da9-816c-71cf520c95bd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031116455s
Jan 21 00:32:57.370: INFO: Pod "pod-configmaps-b3d92713-51ff-4da9-816c-71cf520c95bd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045620332s
Jan 21 00:32:59.377: INFO: Pod "pod-configmaps-b3d92713-51ff-4da9-816c-71cf520c95bd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053046592s
Jan 21 00:33:01.389: INFO: Pod "pod-configmaps-b3d92713-51ff-4da9-816c-71cf520c95bd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.064245363s
STEP: Saw pod success
Jan 21 00:33:01.389: INFO: Pod "pod-configmaps-b3d92713-51ff-4da9-816c-71cf520c95bd" satisfied condition "success or failure"
Jan 21 00:33:01.394: INFO: Trying to get logs from node jerma-node pod pod-configmaps-b3d92713-51ff-4da9-816c-71cf520c95bd container configmap-volume-test: 
STEP: delete the pod
Jan 21 00:33:01.434: INFO: Waiting for pod pod-configmaps-b3d92713-51ff-4da9-816c-71cf520c95bd to disappear
Jan 21 00:33:01.438: INFO: Pod pod-configmaps-b3d92713-51ff-4da9-816c-71cf520c95bd no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:33:01.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6120" for this suite.

• [SLOW TEST:8.302 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":2032,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:33:01.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:33:01.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-3211" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":108,"skipped":2050,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:33:01.827: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:33:10.055: INFO: Waiting up to 5m0s for pod "client-envvars-b7bc9976-0993-4053-a011-c8a7623338de" in namespace "pods-5734" to be "success or failure"
Jan 21 00:33:10.073: INFO: Pod "client-envvars-b7bc9976-0993-4053-a011-c8a7623338de": Phase="Pending", Reason="", readiness=false. Elapsed: 17.135909ms
Jan 21 00:33:12.078: INFO: Pod "client-envvars-b7bc9976-0993-4053-a011-c8a7623338de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022466058s
Jan 21 00:33:14.086: INFO: Pod "client-envvars-b7bc9976-0993-4053-a011-c8a7623338de": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030975264s
Jan 21 00:33:16.094: INFO: Pod "client-envvars-b7bc9976-0993-4053-a011-c8a7623338de": Phase="Pending", Reason="", readiness=false. Elapsed: 6.038980076s
Jan 21 00:33:18.104: INFO: Pod "client-envvars-b7bc9976-0993-4053-a011-c8a7623338de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04848533s
STEP: Saw pod success
Jan 21 00:33:18.104: INFO: Pod "client-envvars-b7bc9976-0993-4053-a011-c8a7623338de" satisfied condition "success or failure"
Jan 21 00:33:18.106: INFO: Trying to get logs from node jerma-node pod client-envvars-b7bc9976-0993-4053-a011-c8a7623338de container env3cont: 
STEP: delete the pod
Jan 21 00:33:18.299: INFO: Waiting for pod client-envvars-b7bc9976-0993-4053-a011-c8a7623338de to disappear
Jan 21 00:33:18.331: INFO: Pod client-envvars-b7bc9976-0993-4053-a011-c8a7623338de no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:33:18.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5734" for this suite.

• [SLOW TEST:16.557 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":2059,"failed":0}
SSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:33:18.385: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:73
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:33:18.471: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Jan 21 00:33:18.529: INFO: Pod name sample-pod: Found 0 pods out of 1
Jan 21 00:33:23.554: INFO: Pod name sample-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 21 00:33:27.578: INFO: Creating deployment "test-rolling-update-deployment"
Jan 21 00:33:27.583: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Jan 21 00:33:28.488: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Jan 21 00:33:30.639: INFO: Ensuring status for deployment "test-rolling-update-deployment" matches the expected status
Jan 21 00:33:30.676: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163608, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163608, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163608, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163608, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:33:32.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163608, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163608, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163608, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163608, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:33:34.691: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163608, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163608, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163608, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163608, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:33:36.686: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:67
Jan 21 00:33:36.700: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-5396 /apis/apps/v1/namespaces/deployment-5396/deployments/test-rolling-update-deployment a9901acf-14c1-4ed9-ad57-6b79c34d0920 3296882 1 2020-01-21 00:33:27 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000dcd998  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-21 00:33:28 +0000 UTC,LastTransitionTime:2020-01-21 00:33:28 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-01-21 00:33:34 +0000 UTC,LastTransitionTime:2020-01-21 00:33:28 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 21 00:33:36.706: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444  deployment-5396 /apis/apps/v1/namespaces/deployment-5396/replicasets/test-rolling-update-deployment-67cf4f6444 d096d7ac-c8a6-409e-b90e-16abd7caa2d9 3296868 1 2020-01-21 00:33:28 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment a9901acf-14c1-4ed9-ad57-6b79c34d0920 0xc001092307 0xc001092308}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001092378  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 21 00:33:36.706: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Jan 21 00:33:36.707: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-5396 /apis/apps/v1/namespaces/deployment-5396/replicasets/test-rolling-update-controller 2eed7030-bb4a-4e2f-b03a-3373ce59c442 3296880 2 2020-01-21 00:33:18 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment a9901acf-14c1-4ed9-ad57-6b79c34d0920 0xc001092237 0xc001092238}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001092298  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 21 00:33:36.712: INFO: Pod "test-rolling-update-deployment-67cf4f6444-sb4tc" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-sb4tc test-rolling-update-deployment-67cf4f6444- deployment-5396 /api/v1/namespaces/deployment-5396/pods/test-rolling-update-deployment-67cf4f6444-sb4tc 8bcf11a0-382a-43e9-8b5f-365f1284f16f 3296867 0 2020-01-21 00:33:28 +0000 UTC   map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 d096d7ac-c8a6-409e-b90e-16abd7caa2d9 0xc0010927c7 0xc0010927c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-hgsdm,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-hgsdm,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-hgsdm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 00:33:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 00:33:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 00:33:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 00:33:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-21 00:33:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-21 00:33:34 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://a96c62b6a88da368ce7e7b79fe12ba65210ae9a5f88be9e7a73f8eda97d5d7e2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:33:36.712: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-5396" for this suite.

• [SLOW TEST:18.339 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":110,"skipped":2063,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:33:36.726: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 21 00:33:37.838: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 21 00:33:39.870: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163617, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163617, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163617, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163617, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:33:41.899: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163617, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163617, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163617, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163617, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:33:43.890: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163617, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163617, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163617, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163617, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:33:45.880: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163617, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163617, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163617, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715163617, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 21 00:33:48.951: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:33:49.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6691" for this suite.
STEP: Destroying namespace "webhook-6691-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.710 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":111,"skipped":2078,"failed":0}
S
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:33:49.436: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-bfc0948d-100c-4a79-92a7-959f6c9e2405 in namespace container-probe-112
Jan 21 00:33:59.696: INFO: Started pod liveness-bfc0948d-100c-4a79-92a7-959f6c9e2405 in namespace container-probe-112
STEP: checking the pod's current state and verifying that restartCount is present
Jan 21 00:33:59.711: INFO: Initial restart count of pod liveness-bfc0948d-100c-4a79-92a7-959f6c9e2405 is 0
Jan 21 00:34:21.824: INFO: Restart count of pod container-probe-112/liveness-bfc0948d-100c-4a79-92a7-959f6c9e2405 is now 1 (22.112695422s elapsed)
Jan 21 00:34:41.931: INFO: Restart count of pod container-probe-112/liveness-bfc0948d-100c-4a79-92a7-959f6c9e2405 is now 2 (42.2192702s elapsed)
Jan 21 00:35:02.003: INFO: Restart count of pod container-probe-112/liveness-bfc0948d-100c-4a79-92a7-959f6c9e2405 is now 3 (1m2.291123788s elapsed)
Jan 21 00:35:22.104: INFO: Restart count of pod container-probe-112/liveness-bfc0948d-100c-4a79-92a7-959f6c9e2405 is now 4 (1m22.393022083s elapsed)
Jan 21 00:36:22.482: INFO: Restart count of pod container-probe-112/liveness-bfc0948d-100c-4a79-92a7-959f6c9e2405 is now 5 (2m22.771014167s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:36:22.523: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-112" for this suite.

• [SLOW TEST:153.133 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":2079,"failed":0}
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:36:22.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 21 00:36:22.884: INFO: Waiting up to 5m0s for pod "pod-449a9262-abe5-43a8-aa02-799b82e9dd39" in namespace "emptydir-9484" to be "success or failure"
Jan 21 00:36:22.893: INFO: Pod "pod-449a9262-abe5-43a8-aa02-799b82e9dd39": Phase="Pending", Reason="", readiness=false. Elapsed: 8.703217ms
Jan 21 00:36:25.058: INFO: Pod "pod-449a9262-abe5-43a8-aa02-799b82e9dd39": Phase="Pending", Reason="", readiness=false. Elapsed: 2.173549963s
Jan 21 00:36:27.073: INFO: Pod "pod-449a9262-abe5-43a8-aa02-799b82e9dd39": Phase="Pending", Reason="", readiness=false. Elapsed: 4.188571564s
Jan 21 00:36:29.089: INFO: Pod "pod-449a9262-abe5-43a8-aa02-799b82e9dd39": Phase="Pending", Reason="", readiness=false. Elapsed: 6.204840537s
Jan 21 00:36:31.097: INFO: Pod "pod-449a9262-abe5-43a8-aa02-799b82e9dd39": Phase="Pending", Reason="", readiness=false. Elapsed: 8.212130971s
Jan 21 00:36:33.109: INFO: Pod "pod-449a9262-abe5-43a8-aa02-799b82e9dd39": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.224255724s
STEP: Saw pod success
Jan 21 00:36:33.109: INFO: Pod "pod-449a9262-abe5-43a8-aa02-799b82e9dd39" satisfied condition "success or failure"
Jan 21 00:36:33.114: INFO: Trying to get logs from node jerma-node pod pod-449a9262-abe5-43a8-aa02-799b82e9dd39 container test-container: 
STEP: delete the pod
Jan 21 00:36:33.174: INFO: Waiting for pod pod-449a9262-abe5-43a8-aa02-799b82e9dd39 to disappear
Jan 21 00:36:33.230: INFO: Pod pod-449a9262-abe5-43a8-aa02-799b82e9dd39 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:36:33.231: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9484" for this suite.

• [SLOW TEST:10.778 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":2091,"failed":0}
SSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:36:33.350: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:36:33.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Jan 21 00:36:37.629: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8763 create -f -'
Jan 21 00:36:40.336: INFO: stderr: ""
Jan 21 00:36:40.336: INFO: stdout: "e2e-test-crd-publish-openapi-1415-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 21 00:36:40.337: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8763 delete e2e-test-crd-publish-openapi-1415-crds test-foo'
Jan 21 00:36:40.530: INFO: stderr: ""
Jan 21 00:36:40.530: INFO: stdout: "e2e-test-crd-publish-openapi-1415-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Jan 21 00:36:40.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8763 apply -f -'
Jan 21 00:36:40.917: INFO: stderr: ""
Jan 21 00:36:40.917: INFO: stdout: "e2e-test-crd-publish-openapi-1415-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Jan 21 00:36:40.917: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8763 delete e2e-test-crd-publish-openapi-1415-crds test-foo'
Jan 21 00:36:41.120: INFO: stderr: ""
Jan 21 00:36:41.120: INFO: stdout: "e2e-test-crd-publish-openapi-1415-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Jan 21 00:36:41.120: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8763 create -f -'
Jan 21 00:36:41.467: INFO: rc: 1
Jan 21 00:36:41.468: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8763 apply -f -'
Jan 21 00:36:41.861: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Jan 21 00:36:41.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8763 create -f -'
Jan 21 00:36:42.192: INFO: rc: 1
Jan 21 00:36:42.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-8763 apply -f -'
Jan 21 00:36:42.586: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Jan 21 00:36:42.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1415-crds'
Jan 21 00:36:42.928: INFO: stderr: ""
Jan 21 00:36:42.928: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1415-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Foo\n\n   status\t\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Jan 21 00:36:42.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1415-crds.metadata'
Jan 21 00:36:43.314: INFO: stderr: ""
Jan 21 00:36:43.314: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1415-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed. Finalizers may be processed and removed in\n     any order. Order is NOT enforced because it introduces significant risk of\n     stuck finalizers. finalizers is a shared field, any actor with permission\n     can reorder it. If the finalizer list is processed in order, then this can\n     lead to a situation in which the component responsible for the first\n     finalizer in the list is waiting for a signal (field value, external\n     system, or other) produced by a component responsible for a finalizer later\n     in the list, resulting in a deadlock. Without enforced ordering finalizers\n     are free to order amongst themselves and are not vulnerable to ordering\n     changes in the list.\n\n   generateName\t\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Jan 21 00:36:43.316: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1415-crds.spec'
Jan 21 00:36:43.670: INFO: stderr: ""
Jan 21 00:36:43.670: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1415-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Jan 21 00:36:43.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1415-crds.spec.bars'
Jan 21 00:36:44.145: INFO: stderr: ""
Jan 21 00:36:44.146: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1415-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Jan 21 00:36:44.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1415-crds.spec.bars2'
Jan 21 00:36:44.433: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:36:46.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8763" for this suite.

• [SLOW TEST:13.435 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":114,"skipped":2094,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:36:46.786: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-d89c5616-6b8f-456f-98c1-2a9089a5d24c
STEP: Creating a pod to test consume secrets
Jan 21 00:36:46.934: INFO: Waiting up to 5m0s for pod "pod-secrets-5402c1c8-dbc3-439d-8ac4-2fb33afcb958" in namespace "secrets-542" to be "success or failure"
Jan 21 00:36:46.943: INFO: Pod "pod-secrets-5402c1c8-dbc3-439d-8ac4-2fb33afcb958": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122396ms
Jan 21 00:36:48.964: INFO: Pod "pod-secrets-5402c1c8-dbc3-439d-8ac4-2fb33afcb958": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029133212s
Jan 21 00:36:50.973: INFO: Pod "pod-secrets-5402c1c8-dbc3-439d-8ac4-2fb33afcb958": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038420097s
Jan 21 00:36:52.979: INFO: Pod "pod-secrets-5402c1c8-dbc3-439d-8ac4-2fb33afcb958": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044001082s
Jan 21 00:36:54.986: INFO: Pod "pod-secrets-5402c1c8-dbc3-439d-8ac4-2fb33afcb958": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051045737s
STEP: Saw pod success
Jan 21 00:36:54.986: INFO: Pod "pod-secrets-5402c1c8-dbc3-439d-8ac4-2fb33afcb958" satisfied condition "success or failure"
Jan 21 00:36:54.989: INFO: Trying to get logs from node jerma-node pod pod-secrets-5402c1c8-dbc3-439d-8ac4-2fb33afcb958 container secret-env-test: 
STEP: delete the pod
Jan 21 00:36:55.085: INFO: Waiting for pod pod-secrets-5402c1c8-dbc3-439d-8ac4-2fb33afcb958 to disappear
Jan 21 00:36:55.094: INFO: Pod pod-secrets-5402c1c8-dbc3-439d-8ac4-2fb33afcb958 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:36:55.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-542" for this suite.

• [SLOW TEST:8.328 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":2101,"failed":0}
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:36:55.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
Jan 21 00:36:56.513: INFO: created pod pod-service-account-defaultsa
Jan 21 00:36:56.513: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Jan 21 00:36:56.590: INFO: created pod pod-service-account-mountsa
Jan 21 00:36:56.590: INFO: pod pod-service-account-mountsa service account token volume mount: true
Jan 21 00:36:56.612: INFO: created pod pod-service-account-nomountsa
Jan 21 00:36:56.612: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Jan 21 00:36:56.676: INFO: created pod pod-service-account-defaultsa-mountspec
Jan 21 00:36:56.676: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Jan 21 00:36:56.682: INFO: created pod pod-service-account-mountsa-mountspec
Jan 21 00:36:56.682: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Jan 21 00:36:56.893: INFO: created pod pod-service-account-nomountsa-mountspec
Jan 21 00:36:56.893: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Jan 21 00:36:56.923: INFO: created pod pod-service-account-defaultsa-nomountspec
Jan 21 00:36:56.923: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Jan 21 00:36:57.055: INFO: created pod pod-service-account-mountsa-nomountspec
Jan 21 00:36:57.055: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Jan 21 00:36:57.097: INFO: created pod pod-service-account-nomountsa-nomountspec
Jan 21 00:36:57.097: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:36:57.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-9257" for this suite.
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount  [Conformance]","total":278,"completed":116,"skipped":2104,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:36:57.315: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:37:14.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-104" for this suite.

• [SLOW TEST:18.295 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":117,"skipped":2110,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:37:15.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 21 00:37:21.656: INFO: Number of nodes with available pods: 0
Jan 21 00:37:21.656: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:37:23.487: INFO: Number of nodes with available pods: 0
Jan 21 00:37:23.487: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:37:23.707: INFO: Number of nodes with available pods: 0
Jan 21 00:37:23.707: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:37:24.691: INFO: Number of nodes with available pods: 0
Jan 21 00:37:24.692: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:37:25.683: INFO: Number of nodes with available pods: 0
Jan 21 00:37:25.683: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:37:28.877: INFO: Number of nodes with available pods: 0
Jan 21 00:37:28.877: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:37:29.672: INFO: Number of nodes with available pods: 0
Jan 21 00:37:29.673: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:37:30.698: INFO: Number of nodes with available pods: 0
Jan 21 00:37:30.699: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:37:31.676: INFO: Number of nodes with available pods: 1
Jan 21 00:37:31.676: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:37:32.685: INFO: Number of nodes with available pods: 1
Jan 21 00:37:32.685: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:37:33.699: INFO: Number of nodes with available pods: 1
Jan 21 00:37:33.699: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:37:34.671: INFO: Number of nodes with available pods: 1
Jan 21 00:37:34.671: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:37:35.719: INFO: Number of nodes with available pods: 2
Jan 21 00:37:35.720: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Jan 21 00:37:35.785: INFO: Number of nodes with available pods: 2
Jan 21 00:37:35.785: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5851, will wait for the garbage collector to delete the pods
Jan 21 00:37:37.427: INFO: Deleting DaemonSet.extensions daemon-set took: 48.106208ms
Jan 21 00:37:38.429: INFO: Terminating DaemonSet.extensions daemon-set pods took: 1.001640954s
Jan 21 00:37:52.438: INFO: Number of nodes with available pods: 0
Jan 21 00:37:52.438: INFO: Number of running nodes: 0, number of available pods: 0
Jan 21 00:37:52.443: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5851/daemonsets","resourceVersion":"3297833"},"items":null}

Jan 21 00:37:52.448: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5851/pods","resourceVersion":"3297833"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:37:52.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5851" for this suite.

• [SLOW TEST:36.863 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":118,"skipped":2123,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:37:52.477: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:37:52.599: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:37:53.666: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6254" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works  [Conformance]","total":278,"completed":119,"skipped":2127,"failed":0}

------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:37:53.689: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W0121 00:38:08.371908       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 21 00:38:08.372: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:38:08.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7981" for this suite.

• [SLOW TEST:14.698 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":120,"skipped":2127,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:38:08.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:331
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Jan 21 00:38:14.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5415'
Jan 21 00:38:14.847: INFO: stderr: ""
Jan 21 00:38:14.847: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 21 00:38:14.848: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5415'
Jan 21 00:38:15.948: INFO: stderr: ""
Jan 21 00:38:15.948: INFO: stdout: "update-demo-nautilus-9jvw4 update-demo-nautilus-jbt5h "
Jan 21 00:38:15.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jvw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5415'
Jan 21 00:38:16.307: INFO: stderr: ""
Jan 21 00:38:16.307: INFO: stdout: ""
Jan 21 00:38:16.307: INFO: update-demo-nautilus-9jvw4 is created but not running
Jan 21 00:38:21.308: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5415'
Jan 21 00:38:21.643: INFO: stderr: ""
Jan 21 00:38:21.643: INFO: stdout: "update-demo-nautilus-9jvw4 update-demo-nautilus-jbt5h "
Jan 21 00:38:21.644: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jvw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5415'
Jan 21 00:38:21.808: INFO: stderr: ""
Jan 21 00:38:21.809: INFO: stdout: ""
Jan 21 00:38:21.809: INFO: update-demo-nautilus-9jvw4 is created but not running
Jan 21 00:38:26.809: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5415'
Jan 21 00:38:27.088: INFO: stderr: ""
Jan 21 00:38:27.088: INFO: stdout: "update-demo-nautilus-9jvw4 update-demo-nautilus-jbt5h "
Jan 21 00:38:27.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jvw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5415'
Jan 21 00:38:27.383: INFO: stderr: ""
Jan 21 00:38:27.383: INFO: stdout: ""
Jan 21 00:38:27.383: INFO: update-demo-nautilus-9jvw4 is created but not running
Jan 21 00:38:32.384: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5415'
Jan 21 00:38:32.678: INFO: stderr: ""
Jan 21 00:38:32.678: INFO: stdout: "update-demo-nautilus-9jvw4 update-demo-nautilus-jbt5h "
Jan 21 00:38:32.679: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jvw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5415'
Jan 21 00:38:32.783: INFO: stderr: ""
Jan 21 00:38:32.784: INFO: stdout: "true"
Jan 21 00:38:32.784: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jvw4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5415'
Jan 21 00:38:32.900: INFO: stderr: ""
Jan 21 00:38:32.900: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 21 00:38:32.900: INFO: validating pod update-demo-nautilus-9jvw4
Jan 21 00:38:32.908: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 21 00:38:32.908: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 21 00:38:32.908: INFO: update-demo-nautilus-9jvw4 is verified up and running
Jan 21 00:38:32.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jbt5h -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5415'
Jan 21 00:38:33.026: INFO: stderr: ""
Jan 21 00:38:33.027: INFO: stdout: "true"
Jan 21 00:38:33.027: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jbt5h -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5415'
Jan 21 00:38:33.141: INFO: stderr: ""
Jan 21 00:38:33.141: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 21 00:38:33.141: INFO: validating pod update-demo-nautilus-jbt5h
Jan 21 00:38:33.147: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 21 00:38:33.147: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 21 00:38:33.147: INFO: update-demo-nautilus-jbt5h is verified up and running
STEP: scaling down the replication controller
Jan 21 00:38:33.150: INFO: scanned /root for discovery docs: 
Jan 21 00:38:33.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-5415'
Jan 21 00:38:34.274: INFO: stderr: ""
Jan 21 00:38:34.274: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 21 00:38:34.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5415'
Jan 21 00:38:34.446: INFO: stderr: ""
Jan 21 00:38:34.446: INFO: stdout: "update-demo-nautilus-9jvw4 update-demo-nautilus-jbt5h "
STEP: Replicas for name=update-demo: expected=1 actual=2
Jan 21 00:38:39.447: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5415'
Jan 21 00:38:39.619: INFO: stderr: ""
Jan 21 00:38:39.619: INFO: stdout: "update-demo-nautilus-9jvw4 "
Jan 21 00:38:39.620: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jvw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5415'
Jan 21 00:38:39.763: INFO: stderr: ""
Jan 21 00:38:39.763: INFO: stdout: "true"
Jan 21 00:38:39.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jvw4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5415'
Jan 21 00:38:39.935: INFO: stderr: ""
Jan 21 00:38:39.936: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 21 00:38:39.936: INFO: validating pod update-demo-nautilus-9jvw4
Jan 21 00:38:39.943: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 21 00:38:39.943: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 21 00:38:39.943: INFO: update-demo-nautilus-9jvw4 is verified up and running
STEP: scaling up the replication controller
Jan 21 00:38:39.948: INFO: scanned /root for discovery docs: 
Jan 21 00:38:39.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-5415'
Jan 21 00:38:41.136: INFO: stderr: ""
Jan 21 00:38:41.136: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 21 00:38:41.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5415'
Jan 21 00:38:41.348: INFO: stderr: ""
Jan 21 00:38:41.348: INFO: stdout: "update-demo-nautilus-9jvw4 update-demo-nautilus-vxqjq "
Jan 21 00:38:41.348: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jvw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5415'
Jan 21 00:38:41.441: INFO: stderr: ""
Jan 21 00:38:41.442: INFO: stdout: "true"
Jan 21 00:38:41.442: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jvw4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5415'
Jan 21 00:38:41.544: INFO: stderr: ""
Jan 21 00:38:41.544: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 21 00:38:41.544: INFO: validating pod update-demo-nautilus-9jvw4
Jan 21 00:38:41.549: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 21 00:38:41.549: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 21 00:38:41.549: INFO: update-demo-nautilus-9jvw4 is verified up and running
Jan 21 00:38:41.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vxqjq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5415'
Jan 21 00:38:41.665: INFO: stderr: ""
Jan 21 00:38:41.665: INFO: stdout: ""
Jan 21 00:38:41.665: INFO: update-demo-nautilus-vxqjq is created but not running
Jan 21 00:38:46.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-5415'
Jan 21 00:38:46.806: INFO: stderr: ""
Jan 21 00:38:46.807: INFO: stdout: "update-demo-nautilus-9jvw4 update-demo-nautilus-vxqjq "
Jan 21 00:38:46.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jvw4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5415'
Jan 21 00:38:46.949: INFO: stderr: ""
Jan 21 00:38:46.950: INFO: stdout: "true"
Jan 21 00:38:46.950: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-9jvw4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5415'
Jan 21 00:38:47.077: INFO: stderr: ""
Jan 21 00:38:47.077: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 21 00:38:47.077: INFO: validating pod update-demo-nautilus-9jvw4
Jan 21 00:38:47.084: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 21 00:38:47.084: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 21 00:38:47.084: INFO: update-demo-nautilus-9jvw4 is verified up and running
Jan 21 00:38:47.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vxqjq -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5415'
Jan 21 00:38:47.370: INFO: stderr: ""
Jan 21 00:38:47.371: INFO: stdout: "true"
Jan 21 00:38:47.371: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-vxqjq -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5415'
Jan 21 00:38:47.532: INFO: stderr: ""
Jan 21 00:38:47.533: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 21 00:38:47.533: INFO: validating pod update-demo-nautilus-vxqjq
Jan 21 00:38:47.541: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 21 00:38:47.541: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Jan 21 00:38:47.541: INFO: update-demo-nautilus-vxqjq is verified up and running
STEP: using delete to clean up resources
Jan 21 00:38:47.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5415'
Jan 21 00:38:47.709: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 21 00:38:47.709: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 21 00:38:47.709: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-5415'
Jan 21 00:38:47.850: INFO: stderr: "No resources found in kubectl-5415 namespace.\n"
Jan 21 00:38:47.851: INFO: stdout: ""
Jan 21 00:38:47.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-5415 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 21 00:38:48.035: INFO: stderr: ""
Jan 21 00:38:48.035: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:38:48.035: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5415" for this suite.

• [SLOW TEST:39.659 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller  [Conformance]","total":278,"completed":121,"skipped":2132,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:38:48.048: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 21 00:38:48.244: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3b881de6-b264-479d-b15c-2bf2d6650080" in namespace "projected-9243" to be "success or failure"
Jan 21 00:38:48.258: INFO: Pod "downwardapi-volume-3b881de6-b264-479d-b15c-2bf2d6650080": Phase="Pending", Reason="", readiness=false. Elapsed: 13.881074ms
Jan 21 00:38:50.278: INFO: Pod "downwardapi-volume-3b881de6-b264-479d-b15c-2bf2d6650080": Phase="Pending", Reason="", readiness=false. Elapsed: 2.033519083s
Jan 21 00:38:52.286: INFO: Pod "downwardapi-volume-3b881de6-b264-479d-b15c-2bf2d6650080": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041529786s
Jan 21 00:38:54.293: INFO: Pod "downwardapi-volume-3b881de6-b264-479d-b15c-2bf2d6650080": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048790762s
Jan 21 00:38:56.301: INFO: Pod "downwardapi-volume-3b881de6-b264-479d-b15c-2bf2d6650080": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05695527s
Jan 21 00:38:58.310: INFO: Pod "downwardapi-volume-3b881de6-b264-479d-b15c-2bf2d6650080": Phase="Pending", Reason="", readiness=false. Elapsed: 10.065843373s
Jan 21 00:39:00.320: INFO: Pod "downwardapi-volume-3b881de6-b264-479d-b15c-2bf2d6650080": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.076166537s
STEP: Saw pod success
Jan 21 00:39:00.321: INFO: Pod "downwardapi-volume-3b881de6-b264-479d-b15c-2bf2d6650080" satisfied condition "success or failure"
Jan 21 00:39:00.327: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-3b881de6-b264-479d-b15c-2bf2d6650080 container client-container: 
STEP: delete the pod
Jan 21 00:39:00.403: INFO: Waiting for pod downwardapi-volume-3b881de6-b264-479d-b15c-2bf2d6650080 to disappear
Jan 21 00:39:00.408: INFO: Pod downwardapi-volume-3b881de6-b264-479d-b15c-2bf2d6650080 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:39:00.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9243" for this suite.

• [SLOW TEST:12.379 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2158,"failed":0}
SSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:39:00.429: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating all guestbook components
Jan 21 00:39:00.557: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

Jan 21 00:39:00.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6108'
Jan 21 00:39:01.170: INFO: stderr: ""
Jan 21 00:39:01.170: INFO: stdout: "service/agnhost-slave created\n"
Jan 21 00:39:01.171: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

Jan 21 00:39:01.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6108'
Jan 21 00:39:01.795: INFO: stderr: ""
Jan 21 00:39:01.795: INFO: stdout: "service/agnhost-master created\n"
Jan 21 00:39:01.798: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Jan 21 00:39:01.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6108'
Jan 21 00:39:02.358: INFO: stderr: ""
Jan 21 00:39:02.358: INFO: stdout: "service/frontend created\n"
Jan 21 00:39:02.359: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

Jan 21 00:39:02.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6108'
Jan 21 00:39:02.989: INFO: stderr: ""
Jan 21 00:39:02.989: INFO: stdout: "deployment.apps/frontend created\n"
Jan 21 00:39:02.990: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 21 00:39:02.990: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6108'
Jan 21 00:39:03.562: INFO: stderr: ""
Jan 21 00:39:03.562: INFO: stdout: "deployment.apps/agnhost-master created\n"
Jan 21 00:39:03.563: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Jan 21 00:39:03.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6108'
Jan 21 00:39:05.030: INFO: stderr: ""
Jan 21 00:39:05.030: INFO: stdout: "deployment.apps/agnhost-slave created\n"
STEP: validating guestbook app
Jan 21 00:39:05.030: INFO: Waiting for all frontend pods to be Running.
Jan 21 00:39:25.084: INFO: Waiting for frontend to serve content.
Jan 21 00:39:25.117: INFO: Trying to add a new entry to the guestbook.
Jan 21 00:39:25.136: INFO: Verifying that added entry can be retrieved.
Jan 21 00:39:25.154: INFO: Failed to get response from guestbook. err: , response: {"data":""}
STEP: using delete to clean up resources
Jan 21 00:39:30.173: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6108'
Jan 21 00:39:30.438: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 21 00:39:30.439: INFO: stdout: "service \"agnhost-slave\" force deleted\n"
STEP: using delete to clean up resources
Jan 21 00:39:30.440: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6108'
Jan 21 00:39:30.956: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 21 00:39:30.957: INFO: stdout: "service \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 21 00:39:30.957: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6108'
Jan 21 00:39:31.114: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 21 00:39:31.115: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 21 00:39:31.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6108'
Jan 21 00:39:31.288: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 21 00:39:31.289: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Jan 21 00:39:31.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6108'
Jan 21 00:39:31.396: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 21 00:39:31.397: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n"
STEP: using delete to clean up resources
Jan 21 00:39:31.397: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6108'
Jan 21 00:39:31.549: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 21 00:39:31.549: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:39:31.549: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6108" for this suite.

• [SLOW TEST:31.249 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:387
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application  [Conformance]","total":278,"completed":123,"skipped":2164,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:39:31.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:39:32.036: INFO: (0) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 46.542399ms)
Jan 21 00:39:33.801: INFO: (1) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 1.765282386s)
Jan 21 00:39:33.816: INFO: (2) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 14.103956ms)
Jan 21 00:39:33.865: INFO: (3) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 49.234873ms)
Jan 21 00:39:34.175: INFO: (4) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 308.62379ms)
Jan 21 00:39:34.199: INFO: (5) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 23.618078ms)
Jan 21 00:39:34.245: INFO: (6) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 45.713509ms)
Jan 21 00:39:34.251: INFO: (7) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.395828ms)
Jan 21 00:39:34.259: INFO: (8) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 7.502923ms)
Jan 21 00:39:34.313: INFO: (9) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 53.424601ms)
Jan 21 00:39:34.319: INFO: (10) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.779812ms)
Jan 21 00:39:34.324: INFO: (11) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.900494ms)
Jan 21 00:39:34.329: INFO: (12) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.356296ms)
Jan 21 00:39:34.335: INFO: (13) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.751549ms)
Jan 21 00:39:34.340: INFO: (14) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.650911ms)
Jan 21 00:39:34.345: INFO: (15) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.610922ms)
Jan 21 00:39:34.350: INFO: (16) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 5.224436ms)
Jan 21 00:39:34.355: INFO: (17) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.317626ms)
Jan 21 00:39:34.359: INFO: (18) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 4.516565ms)
Jan 21 00:39:34.365: INFO: (19) /api/v1/nodes/jerma-server-mvvl6gufaqub:10250/proxy/logs/: 
alternatives.log
apt/
... (200; 6.126291ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:39:34.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-5990" for this suite.
•{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]","total":278,"completed":124,"skipped":2177,"failed":0}
SS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:39:34.377: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-7734
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a new StatefulSet
Jan 21 00:39:34.734: INFO: Found 0 stateful pods, waiting for 3
Jan 21 00:39:44.742: INFO: Found 1 stateful pods, waiting for 3
Jan 21 00:39:54.743: INFO: Found 2 stateful pods, waiting for 3
Jan 21 00:40:04.747: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 21 00:40:04.747: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 21 00:40:04.747: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Jan 21 00:40:04.780: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Jan 21 00:40:14.923: INFO: Updating stateful set ss2
Jan 21 00:40:14.972: INFO: Waiting for Pod statefulset-7734/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Jan 21 00:40:26.729: INFO: Found 2 stateful pods, waiting for 3
Jan 21 00:40:36.739: INFO: Found 2 stateful pods, waiting for 3
Jan 21 00:40:46.766: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Jan 21 00:40:46.767: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Jan 21 00:40:46.768: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Jan 21 00:40:46.818: INFO: Updating stateful set ss2
Jan 21 00:40:46.917: INFO: Waiting for Pod statefulset-7734/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 21 00:40:56.942: INFO: Waiting for Pod statefulset-7734/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 21 00:41:06.948: INFO: Updating stateful set ss2
Jan 21 00:41:06.982: INFO: Waiting for StatefulSet statefulset-7734/ss2 to complete update
Jan 21 00:41:06.983: INFO: Waiting for Pod statefulset-7734/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 21 00:41:17.007: INFO: Waiting for StatefulSet statefulset-7734/ss2 to complete update
Jan 21 00:41:17.008: INFO: Waiting for Pod statefulset-7734/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Jan 21 00:41:26.998: INFO: Waiting for StatefulSet statefulset-7734/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 21 00:41:36.995: INFO: Deleting all statefulset in ns statefulset-7734
Jan 21 00:41:36.998: INFO: Scaling statefulset ss2 to 0
Jan 21 00:42:07.133: INFO: Waiting for statefulset status.replicas updated to 0
Jan 21 00:42:07.137: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:42:07.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-7734" for this suite.

• [SLOW TEST:152.799 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":125,"skipped":2179,"failed":0}
SS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:42:07.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 21 00:42:08.118: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 21 00:42:10.140: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164128, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164128, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164128, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164128, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:42:12.188: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164128, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164128, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164128, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164128, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:42:14.207: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164128, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164128, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164128, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164128, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 21 00:42:17.216: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:42:17.225: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:42:18.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-139" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:11.783 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":126,"skipped":2181,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:42:18.962: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Jan 21 00:42:20.156: INFO: Pod name wrapped-volume-race-68b4593f-a09f-4cf2-bd4a-0c6e3c0b6f9e: Found 0 pods out of 5
Jan 21 00:42:25.166: INFO: Pod name wrapped-volume-race-68b4593f-a09f-4cf2-bd4a-0c6e3c0b6f9e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-68b4593f-a09f-4cf2-bd4a-0c6e3c0b6f9e in namespace emptydir-wrapper-931, will wait for the garbage collector to delete the pods
Jan 21 00:42:53.255: INFO: Deleting ReplicationController wrapped-volume-race-68b4593f-a09f-4cf2-bd4a-0c6e3c0b6f9e took: 8.804636ms
Jan 21 00:42:53.756: INFO: Terminating ReplicationController wrapped-volume-race-68b4593f-a09f-4cf2-bd4a-0c6e3c0b6f9e pods took: 501.144101ms
STEP: Creating RC which spawns configmap-volume pods
Jan 21 00:43:13.652: INFO: Pod name wrapped-volume-race-4e673690-7fa8-4de3-bf74-3f9e29d2a9f7: Found 0 pods out of 5
Jan 21 00:43:18.663: INFO: Pod name wrapped-volume-race-4e673690-7fa8-4de3-bf74-3f9e29d2a9f7: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-4e673690-7fa8-4de3-bf74-3f9e29d2a9f7 in namespace emptydir-wrapper-931, will wait for the garbage collector to delete the pods
Jan 21 00:43:50.788: INFO: Deleting ReplicationController wrapped-volume-race-4e673690-7fa8-4de3-bf74-3f9e29d2a9f7 took: 30.55725ms
Jan 21 00:43:51.289: INFO: Terminating ReplicationController wrapped-volume-race-4e673690-7fa8-4de3-bf74-3f9e29d2a9f7 pods took: 501.115778ms
STEP: Creating RC which spawns configmap-volume pods
Jan 21 00:44:03.562: INFO: Pod name wrapped-volume-race-a99c5f78-8cb9-4200-841f-d0806131fe58: Found 0 pods out of 5
Jan 21 00:44:08.586: INFO: Pod name wrapped-volume-race-a99c5f78-8cb9-4200-841f-d0806131fe58: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-a99c5f78-8cb9-4200-841f-d0806131fe58 in namespace emptydir-wrapper-931, will wait for the garbage collector to delete the pods
Jan 21 00:44:40.721: INFO: Deleting ReplicationController wrapped-volume-race-a99c5f78-8cb9-4200-841f-d0806131fe58 took: 19.151164ms
Jan 21 00:44:41.221: INFO: Terminating ReplicationController wrapped-volume-race-a99c5f78-8cb9-4200-841f-d0806131fe58 pods took: 500.745233ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:45:04.535: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-931" for this suite.

• [SLOW TEST:165.605 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":127,"skipped":2204,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:45:04.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-bc16dd39-bc06-4693-adff-3fa7d492b323 in namespace container-probe-9919
Jan 21 00:45:12.715: INFO: Started pod busybox-bc16dd39-bc06-4693-adff-3fa7d492b323 in namespace container-probe-9919
STEP: checking the pod's current state and verifying that restartCount is present
Jan 21 00:45:12.735: INFO: Initial restart count of pod busybox-bc16dd39-bc06-4693-adff-3fa7d492b323 is 0
Jan 21 00:46:00.509: INFO: Restart count of pod container-probe-9919/busybox-bc16dd39-bc06-4693-adff-3fa7d492b323 is now 1 (47.773273058s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:46:00.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9919" for this suite.

• [SLOW TEST:56.088 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2209,"failed":0}
S
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:46:00.656: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating server pod server in namespace prestop-1914
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-1914
STEP: Deleting pre-stop pod
Jan 21 00:46:23.844: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:46:23.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-1914" for this suite.

• [SLOW TEST:23.308 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod  [Conformance]","total":278,"completed":129,"skipped":2210,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:46:23.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
Jan 21 00:46:24.096: INFO: Created pod &Pod{ObjectMeta:{dns-6000  dns-6000 /api/v1/namespaces/dns-6000/pods/dns-6000 7cb30809-bdca-4d28-b9c8-da40e9943bf5 3300602 0 2020-01-21 00:46:24 +0000 UTC   map[] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6wmwf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6wmwf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6wmwf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
Jan 21 00:46:34.116: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-6000 PodName:dns-6000 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 21 00:46:34.117: INFO: >>> kubeConfig: /root/.kube/config
I0121 00:46:34.180544       8 log.go:172] (0xc002bde4d0) (0xc00133a0a0) Create stream
I0121 00:46:34.180831       8 log.go:172] (0xc002bde4d0) (0xc00133a0a0) Stream added, broadcasting: 1
I0121 00:46:34.192725       8 log.go:172] (0xc002bde4d0) Reply frame received for 1
I0121 00:46:34.192911       8 log.go:172] (0xc002bde4d0) (0xc001732000) Create stream
I0121 00:46:34.192957       8 log.go:172] (0xc002bde4d0) (0xc001732000) Stream added, broadcasting: 3
I0121 00:46:34.197801       8 log.go:172] (0xc002bde4d0) Reply frame received for 3
I0121 00:46:34.197866       8 log.go:172] (0xc002bde4d0) (0xc001c68280) Create stream
I0121 00:46:34.197886       8 log.go:172] (0xc002bde4d0) (0xc001c68280) Stream added, broadcasting: 5
I0121 00:46:34.199904       8 log.go:172] (0xc002bde4d0) Reply frame received for 5
I0121 00:46:34.331566       8 log.go:172] (0xc002bde4d0) Data frame received for 3
I0121 00:46:34.331811       8 log.go:172] (0xc001732000) (3) Data frame handling
I0121 00:46:34.331898       8 log.go:172] (0xc001732000) (3) Data frame sent
I0121 00:46:34.470794       8 log.go:172] (0xc002bde4d0) Data frame received for 1
I0121 00:46:34.471006       8 log.go:172] (0xc002bde4d0) (0xc001732000) Stream removed, broadcasting: 3
I0121 00:46:34.471105       8 log.go:172] (0xc00133a0a0) (1) Data frame handling
I0121 00:46:34.471146       8 log.go:172] (0xc00133a0a0) (1) Data frame sent
I0121 00:46:34.471175       8 log.go:172] (0xc002bde4d0) (0xc001c68280) Stream removed, broadcasting: 5
I0121 00:46:34.471218       8 log.go:172] (0xc002bde4d0) (0xc00133a0a0) Stream removed, broadcasting: 1
I0121 00:46:34.471268       8 log.go:172] (0xc002bde4d0) Go away received
I0121 00:46:34.471597       8 log.go:172] (0xc002bde4d0) (0xc00133a0a0) Stream removed, broadcasting: 1
I0121 00:46:34.471615       8 log.go:172] (0xc002bde4d0) (0xc001732000) Stream removed, broadcasting: 3
I0121 00:46:34.471632       8 log.go:172] (0xc002bde4d0) (0xc001c68280) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
Jan 21 00:46:34.471: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-6000 PodName:dns-6000 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 21 00:46:34.471: INFO: >>> kubeConfig: /root/.kube/config
I0121 00:46:34.528456       8 log.go:172] (0xc00490a4d0) (0xc0017328c0) Create stream
I0121 00:46:34.528833       8 log.go:172] (0xc00490a4d0) (0xc0017328c0) Stream added, broadcasting: 1
I0121 00:46:34.538498       8 log.go:172] (0xc00490a4d0) Reply frame received for 1
I0121 00:46:34.538614       8 log.go:172] (0xc00490a4d0) (0xc001732aa0) Create stream
I0121 00:46:34.538654       8 log.go:172] (0xc00490a4d0) (0xc001732aa0) Stream added, broadcasting: 3
I0121 00:46:34.540243       8 log.go:172] (0xc00490a4d0) Reply frame received for 3
I0121 00:46:34.540290       8 log.go:172] (0xc00490a4d0) (0xc001c68320) Create stream
I0121 00:46:34.540298       8 log.go:172] (0xc00490a4d0) (0xc001c68320) Stream added, broadcasting: 5
I0121 00:46:34.541776       8 log.go:172] (0xc00490a4d0) Reply frame received for 5
I0121 00:46:34.655212       8 log.go:172] (0xc00490a4d0) Data frame received for 3
I0121 00:46:34.655488       8 log.go:172] (0xc001732aa0) (3) Data frame handling
I0121 00:46:34.655546       8 log.go:172] (0xc001732aa0) (3) Data frame sent
I0121 00:46:34.761536       8 log.go:172] (0xc00490a4d0) Data frame received for 1
I0121 00:46:34.761801       8 log.go:172] (0xc00490a4d0) (0xc001732aa0) Stream removed, broadcasting: 3
I0121 00:46:34.761972       8 log.go:172] (0xc0017328c0) (1) Data frame handling
I0121 00:46:34.762079       8 log.go:172] (0xc0017328c0) (1) Data frame sent
I0121 00:46:34.762305       8 log.go:172] (0xc00490a4d0) (0xc001c68320) Stream removed, broadcasting: 5
I0121 00:46:34.762700       8 log.go:172] (0xc00490a4d0) (0xc0017328c0) Stream removed, broadcasting: 1
I0121 00:46:34.762804       8 log.go:172] (0xc00490a4d0) Go away received
I0121 00:46:34.763608       8 log.go:172] (0xc00490a4d0) (0xc0017328c0) Stream removed, broadcasting: 1
I0121 00:46:34.763634       8 log.go:172] (0xc00490a4d0) (0xc001732aa0) Stream removed, broadcasting: 3
I0121 00:46:34.763659       8 log.go:172] (0xc00490a4d0) (0xc001c68320) Stream removed, broadcasting: 5
Jan 21 00:46:34.763: INFO: Deleting pod dns-6000...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:46:34.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6000" for this suite.

• [SLOW TEST:10.865 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":130,"skipped":2255,"failed":0}
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:46:34.836: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W0121 00:46:46.658036       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 21 00:46:46.658: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:46:46.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5102" for this suite.

• [SLOW TEST:11.861 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":131,"skipped":2255,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:46:46.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on node default medium
Jan 21 00:46:46.867: INFO: Waiting up to 5m0s for pod "pod-77c79965-1d9a-4a06-bbc3-c63008a40f34" in namespace "emptydir-157" to be "success or failure"
Jan 21 00:46:46.876: INFO: Pod "pod-77c79965-1d9a-4a06-bbc3-c63008a40f34": Phase="Pending", Reason="", readiness=false. Elapsed: 7.661298ms
Jan 21 00:46:48.889: INFO: Pod "pod-77c79965-1d9a-4a06-bbc3-c63008a40f34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020902662s
Jan 21 00:46:50.899: INFO: Pod "pod-77c79965-1d9a-4a06-bbc3-c63008a40f34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030789323s
Jan 21 00:46:52.909: INFO: Pod "pod-77c79965-1d9a-4a06-bbc3-c63008a40f34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041278122s
Jan 21 00:46:54.931: INFO: Pod "pod-77c79965-1d9a-4a06-bbc3-c63008a40f34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.063147142s
STEP: Saw pod success
Jan 21 00:46:54.932: INFO: Pod "pod-77c79965-1d9a-4a06-bbc3-c63008a40f34" satisfied condition "success or failure"
Jan 21 00:46:54.937: INFO: Trying to get logs from node jerma-node pod pod-77c79965-1d9a-4a06-bbc3-c63008a40f34 container test-container: 
STEP: delete the pod
Jan 21 00:46:55.079: INFO: Waiting for pod pod-77c79965-1d9a-4a06-bbc3-c63008a40f34 to disappear
Jan 21 00:46:55.084: INFO: Pod pod-77c79965-1d9a-4a06-bbc3-c63008a40f34 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:46:55.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-157" for this suite.

• [SLOW TEST:8.398 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2308,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:46:55.101: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:47:05.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8526" for this suite.

• [SLOW TEST:10.216 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":133,"skipped":2338,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:47:05.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:47:05.448: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:47:09.963: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-175" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works  [Conformance]","total":278,"completed":134,"skipped":2369,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:47:09.972: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79
STEP: Creating service test in namespace statefulset-9936
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating statefulset ss in namespace statefulset-9936
Jan 21 00:47:10.145: INFO: Found 0 stateful pods, waiting for 1
Jan 21 00:47:20.154: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
Jan 21 00:47:20.189: INFO: Deleting all statefulset in ns statefulset-9936
Jan 21 00:47:20.312: INFO: Scaling statefulset ss to 0
Jan 21 00:47:30.490: INFO: Waiting for statefulset status.replicas updated to 0
Jan 21 00:47:30.496: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:47:30.517: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-9936" for this suite.

• [SLOW TEST:20.577 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":135,"skipped":2385,"failed":0}
S
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:47:30.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 21 00:47:31.462: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 21 00:47:33.480: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164451, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164451, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164451, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164451, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:47:35.493: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164451, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164451, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164451, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164451, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:47:37.489: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164451, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164451, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164451, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164451, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 21 00:47:40.529: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:47:41.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2962" for this suite.
STEP: Destroying namespace "webhook-2962-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:11.125 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":136,"skipped":2386,"failed":0}
S
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:47:41.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test substitution in container's command
Jan 21 00:47:41.910: INFO: Waiting up to 5m0s for pod "var-expansion-9e654ecb-1960-4063-a395-1ffefa2d9272" in namespace "var-expansion-3660" to be "success or failure"
Jan 21 00:47:41.995: INFO: Pod "var-expansion-9e654ecb-1960-4063-a395-1ffefa2d9272": Phase="Pending", Reason="", readiness=false. Elapsed: 84.324587ms
Jan 21 00:47:44.003: INFO: Pod "var-expansion-9e654ecb-1960-4063-a395-1ffefa2d9272": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092421082s
Jan 21 00:47:46.013: INFO: Pod "var-expansion-9e654ecb-1960-4063-a395-1ffefa2d9272": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102537712s
Jan 21 00:47:48.023: INFO: Pod "var-expansion-9e654ecb-1960-4063-a395-1ffefa2d9272": Phase="Pending", Reason="", readiness=false. Elapsed: 6.112514597s
Jan 21 00:47:50.033: INFO: Pod "var-expansion-9e654ecb-1960-4063-a395-1ffefa2d9272": Phase="Pending", Reason="", readiness=false. Elapsed: 8.122332686s
Jan 21 00:47:52.040: INFO: Pod "var-expansion-9e654ecb-1960-4063-a395-1ffefa2d9272": Phase="Pending", Reason="", readiness=false. Elapsed: 10.129279219s
Jan 21 00:47:54.049: INFO: Pod "var-expansion-9e654ecb-1960-4063-a395-1ffefa2d9272": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.138355094s
STEP: Saw pod success
Jan 21 00:47:54.049: INFO: Pod "var-expansion-9e654ecb-1960-4063-a395-1ffefa2d9272" satisfied condition "success or failure"
Jan 21 00:47:54.053: INFO: Trying to get logs from node jerma-node pod var-expansion-9e654ecb-1960-4063-a395-1ffefa2d9272 container dapi-container: 
STEP: delete the pod
Jan 21 00:47:55.507: INFO: Waiting for pod var-expansion-9e654ecb-1960-4063-a395-1ffefa2d9272 to disappear
Jan 21 00:47:55.679: INFO: Pod var-expansion-9e654ecb-1960-4063-a395-1ffefa2d9272 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:47:55.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3660" for this suite.

• [SLOW TEST:14.033 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2387,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:47:55.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 21 00:47:56.859: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 21 00:47:58.889: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164476, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164476, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164476, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164476, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:48:00.924: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164476, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164476, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164476, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164476, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:48:02.903: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164476, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164476, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164476, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164476, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 21 00:48:06.032: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:48:06.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1238" for this suite.
STEP: Destroying namespace "webhook-1238-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.616 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":138,"skipped":2424,"failed":0}
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:48:06.329: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test env composition
Jan 21 00:48:06.435: INFO: Waiting up to 5m0s for pod "var-expansion-29cc7808-3405-4b86-8495-356024d35e67" in namespace "var-expansion-3548" to be "success or failure"
Jan 21 00:48:06.441: INFO: Pod "var-expansion-29cc7808-3405-4b86-8495-356024d35e67": Phase="Pending", Reason="", readiness=false. Elapsed: 5.916826ms
Jan 21 00:48:08.452: INFO: Pod "var-expansion-29cc7808-3405-4b86-8495-356024d35e67": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016741983s
Jan 21 00:48:10.464: INFO: Pod "var-expansion-29cc7808-3405-4b86-8495-356024d35e67": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028285737s
Jan 21 00:48:12.487: INFO: Pod "var-expansion-29cc7808-3405-4b86-8495-356024d35e67": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051750874s
Jan 21 00:48:14.500: INFO: Pod "var-expansion-29cc7808-3405-4b86-8495-356024d35e67": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064742614s
Jan 21 00:48:16.511: INFO: Pod "var-expansion-29cc7808-3405-4b86-8495-356024d35e67": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.075904133s
STEP: Saw pod success
Jan 21 00:48:16.512: INFO: Pod "var-expansion-29cc7808-3405-4b86-8495-356024d35e67" satisfied condition "success or failure"
Jan 21 00:48:16.517: INFO: Trying to get logs from node jerma-node pod var-expansion-29cc7808-3405-4b86-8495-356024d35e67 container dapi-container: 
STEP: delete the pod
Jan 21 00:48:16.683: INFO: Waiting for pod var-expansion-29cc7808-3405-4b86-8495-356024d35e67 to disappear
Jan 21 00:48:16.691: INFO: Pod var-expansion-29cc7808-3405-4b86-8495-356024d35e67 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:48:16.692: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-3548" for this suite.

• [SLOW TEST:10.388 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2424,"failed":0}
S
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:48:16.719: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-configmap-9tsn
STEP: Creating a pod to test atomic-volume-subpath
Jan 21 00:48:16.908: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-9tsn" in namespace "subpath-2414" to be "success or failure"
Jan 21 00:48:16.921: INFO: Pod "pod-subpath-test-configmap-9tsn": Phase="Pending", Reason="", readiness=false. Elapsed: 13.253644ms
Jan 21 00:48:18.932: INFO: Pod "pod-subpath-test-configmap-9tsn": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024099393s
Jan 21 00:48:20.953: INFO: Pod "pod-subpath-test-configmap-9tsn": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045227092s
Jan 21 00:48:22.966: INFO: Pod "pod-subpath-test-configmap-9tsn": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058038897s
Jan 21 00:48:24.976: INFO: Pod "pod-subpath-test-configmap-9tsn": Phase="Running", Reason="", readiness=true. Elapsed: 8.068069451s
Jan 21 00:48:26.985: INFO: Pod "pod-subpath-test-configmap-9tsn": Phase="Running", Reason="", readiness=true. Elapsed: 10.076334526s
Jan 21 00:48:28.991: INFO: Pod "pod-subpath-test-configmap-9tsn": Phase="Running", Reason="", readiness=true. Elapsed: 12.08280781s
Jan 21 00:48:31.014: INFO: Pod "pod-subpath-test-configmap-9tsn": Phase="Running", Reason="", readiness=true. Elapsed: 14.10564041s
Jan 21 00:48:33.028: INFO: Pod "pod-subpath-test-configmap-9tsn": Phase="Running", Reason="", readiness=true. Elapsed: 16.1197134s
Jan 21 00:48:35.034: INFO: Pod "pod-subpath-test-configmap-9tsn": Phase="Running", Reason="", readiness=true. Elapsed: 18.125627225s
Jan 21 00:48:37.040: INFO: Pod "pod-subpath-test-configmap-9tsn": Phase="Running", Reason="", readiness=true. Elapsed: 20.132281806s
Jan 21 00:48:39.049: INFO: Pod "pod-subpath-test-configmap-9tsn": Phase="Running", Reason="", readiness=true. Elapsed: 22.140667911s
Jan 21 00:48:41.058: INFO: Pod "pod-subpath-test-configmap-9tsn": Phase="Running", Reason="", readiness=true. Elapsed: 24.149374005s
Jan 21 00:48:43.082: INFO: Pod "pod-subpath-test-configmap-9tsn": Phase="Running", Reason="", readiness=true. Elapsed: 26.173619165s
Jan 21 00:48:45.095: INFO: Pod "pod-subpath-test-configmap-9tsn": Phase="Running", Reason="", readiness=true. Elapsed: 28.186638085s
Jan 21 00:48:47.102: INFO: Pod "pod-subpath-test-configmap-9tsn": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.193439294s
STEP: Saw pod success
Jan 21 00:48:47.102: INFO: Pod "pod-subpath-test-configmap-9tsn" satisfied condition "success or failure"
Jan 21 00:48:47.107: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-9tsn container test-container-subpath-configmap-9tsn: 
STEP: delete the pod
Jan 21 00:48:47.211: INFO: Waiting for pod pod-subpath-test-configmap-9tsn to disappear
Jan 21 00:48:47.313: INFO: Pod pod-subpath-test-configmap-9tsn no longer exists
STEP: Deleting pod pod-subpath-test-configmap-9tsn
Jan 21 00:48:47.314: INFO: Deleting pod "pod-subpath-test-configmap-9tsn" in namespace "subpath-2414"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:48:47.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2414" for this suite.

• [SLOW TEST:30.660 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":140,"skipped":2425,"failed":0}
SSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:48:47.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:49:03.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6765" for this suite.

• [SLOW TEST:16.302 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":141,"skipped":2428,"failed":0}
SSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:49:03.685: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name projected-secret-test-98b6826c-e8a8-4cc9-85ba-9f4ab062b52b
STEP: Creating a pod to test consume secrets
Jan 21 00:49:03.868: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-37974b84-357f-4c31-9419-32abd25c4598" in namespace "projected-8844" to be "success or failure"
Jan 21 00:49:03.934: INFO: Pod "pod-projected-secrets-37974b84-357f-4c31-9419-32abd25c4598": Phase="Pending", Reason="", readiness=false. Elapsed: 65.595621ms
Jan 21 00:49:05.942: INFO: Pod "pod-projected-secrets-37974b84-357f-4c31-9419-32abd25c4598": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073477409s
Jan 21 00:49:07.950: INFO: Pod "pod-projected-secrets-37974b84-357f-4c31-9419-32abd25c4598": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082175446s
Jan 21 00:49:10.012: INFO: Pod "pod-projected-secrets-37974b84-357f-4c31-9419-32abd25c4598": Phase="Pending", Reason="", readiness=false. Elapsed: 6.144147441s
Jan 21 00:49:12.019: INFO: Pod "pod-projected-secrets-37974b84-357f-4c31-9419-32abd25c4598": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.151085556s
STEP: Saw pod success
Jan 21 00:49:12.019: INFO: Pod "pod-projected-secrets-37974b84-357f-4c31-9419-32abd25c4598" satisfied condition "success or failure"
Jan 21 00:49:12.024: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-37974b84-357f-4c31-9419-32abd25c4598 container secret-volume-test: 
STEP: delete the pod
Jan 21 00:49:12.067: INFO: Waiting for pod pod-projected-secrets-37974b84-357f-4c31-9419-32abd25c4598 to disappear
Jan 21 00:49:12.181: INFO: Pod pod-projected-secrets-37974b84-357f-4c31-9419-32abd25c4598 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:49:12.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8844" for this suite.

• [SLOW TEST:8.540 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2442,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:49:12.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W0121 00:49:53.286844       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 21 00:49:53.286: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:49:53.287: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8578" for this suite.

• [SLOW TEST:41.075 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":143,"skipped":2452,"failed":0}
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:49:53.302: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 21 00:50:11.395: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:50:11.430: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8074" for this suite.

• [SLOW TEST:18.155 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":144,"skipped":2455,"failed":0}
SSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:50:11.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:50:39.763: INFO: Container started at 2020-01-21 00:50:19 +0000 UTC, pod became ready at 2020-01-21 00:50:37 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:50:39.764: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4671" for this suite.

• [SLOW TEST:28.317 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2460,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:50:39.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 21 00:50:50.592: INFO: Successfully updated pod "labelsupdated80d0f42-5af5-4a1a-8d26-8c97d5853856"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:50:52.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2913" for this suite.

• [SLOW TEST:12.987 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2462,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:50:52.764: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label, pod-adoption, is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:51:03.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-5361" for this suite.

• [SLOW TEST:11.242 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":147,"skipped":2483,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:51:04.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on node default medium
Jan 21 00:51:04.092: INFO: Waiting up to 5m0s for pod "pod-84cdaab0-9583-446c-87ed-71dc0cee024a" in namespace "emptydir-2692" to be "success or failure"
Jan 21 00:51:04.114: INFO: Pod "pod-84cdaab0-9583-446c-87ed-71dc0cee024a": Phase="Pending", Reason="", readiness=false. Elapsed: 22.086099ms
Jan 21 00:51:06.122: INFO: Pod "pod-84cdaab0-9583-446c-87ed-71dc0cee024a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029437567s
Jan 21 00:51:08.130: INFO: Pod "pod-84cdaab0-9583-446c-87ed-71dc0cee024a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037538767s
Jan 21 00:51:10.138: INFO: Pod "pod-84cdaab0-9583-446c-87ed-71dc0cee024a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045791328s
Jan 21 00:51:12.148: INFO: Pod "pod-84cdaab0-9583-446c-87ed-71dc0cee024a": Phase="Pending", Reason="", readiness=false. Elapsed: 8.055922771s
Jan 21 00:51:14.154: INFO: Pod "pod-84cdaab0-9583-446c-87ed-71dc0cee024a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.061603829s
STEP: Saw pod success
Jan 21 00:51:14.154: INFO: Pod "pod-84cdaab0-9583-446c-87ed-71dc0cee024a" satisfied condition "success or failure"
Jan 21 00:51:14.158: INFO: Trying to get logs from node jerma-node pod pod-84cdaab0-9583-446c-87ed-71dc0cee024a container test-container: 
STEP: delete the pod
Jan 21 00:51:14.270: INFO: Waiting for pod pod-84cdaab0-9583-446c-87ed-71dc0cee024a to disappear
Jan 21 00:51:14.286: INFO: Pod pod-84cdaab0-9583-446c-87ed-71dc0cee024a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:51:14.286: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2692" for this suite.

• [SLOW TEST:10.314 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":148,"skipped":2496,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:51:14.323: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-b57c518a-f812-4370-b448-7b9712bd511a
STEP: Creating a pod to test consume configMaps
Jan 21 00:51:14.479: INFO: Waiting up to 5m0s for pod "pod-configmaps-e438b9b8-ca30-4981-ad06-e9007e0a49b8" in namespace "configmap-3716" to be "success or failure"
Jan 21 00:51:14.502: INFO: Pod "pod-configmaps-e438b9b8-ca30-4981-ad06-e9007e0a49b8": Phase="Pending", Reason="", readiness=false. Elapsed: 21.922215ms
Jan 21 00:51:16.577: INFO: Pod "pod-configmaps-e438b9b8-ca30-4981-ad06-e9007e0a49b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096978975s
Jan 21 00:51:18.850: INFO: Pod "pod-configmaps-e438b9b8-ca30-4981-ad06-e9007e0a49b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.370585329s
Jan 21 00:51:20.864: INFO: Pod "pod-configmaps-e438b9b8-ca30-4981-ad06-e9007e0a49b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.384290114s
Jan 21 00:51:22.880: INFO: Pod "pod-configmaps-e438b9b8-ca30-4981-ad06-e9007e0a49b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.399933802s
STEP: Saw pod success
Jan 21 00:51:22.880: INFO: Pod "pod-configmaps-e438b9b8-ca30-4981-ad06-e9007e0a49b8" satisfied condition "success or failure"
Jan 21 00:51:22.886: INFO: Trying to get logs from node jerma-node pod pod-configmaps-e438b9b8-ca30-4981-ad06-e9007e0a49b8 container configmap-volume-test: 
STEP: delete the pod
Jan 21 00:51:22.932: INFO: Waiting for pod pod-configmaps-e438b9b8-ca30-4981-ad06-e9007e0a49b8 to disappear
Jan 21 00:51:22.938: INFO: Pod pod-configmaps-e438b9b8-ca30-4981-ad06-e9007e0a49b8 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:51:22.938: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3716" for this suite.

• [SLOW TEST:8.635 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":149,"skipped":2537,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:51:22.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
Jan 21 00:51:23.087: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 21 00:51:23.158: INFO: Waiting for terminating namespaces to be deleted...
Jan 21 00:51:23.161: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 21 00:51:23.169: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 21 00:51:23.169: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 21 00:51:23.169: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 21 00:51:23.169: INFO: 	Container weave ready: true, restart count 1
Jan 21 00:51:23.169: INFO: 	Container weave-npc ready: true, restart count 0
Jan 21 00:51:23.169: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 21 00:51:23.186: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 21 00:51:23.186: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 21 00:51:23.186: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 21 00:51:23.186: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 21 00:51:23.186: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 21 00:51:23.186: INFO: 	Container weave ready: true, restart count 0
Jan 21 00:51:23.186: INFO: 	Container weave-npc ready: true, restart count 0
Jan 21 00:51:23.186: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 21 00:51:23.186: INFO: 	Container kube-scheduler ready: true, restart count 3
Jan 21 00:51:23.186: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 21 00:51:23.186: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 21 00:51:23.186: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 21 00:51:23.186: INFO: 	Container etcd ready: true, restart count 1
Jan 21 00:51:23.186: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 21 00:51:23.186: INFO: 	Container coredns ready: true, restart count 0
Jan 21 00:51:23.186: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 21 00:51:23.186: INFO: 	Container coredns ready: true, restart count 0
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ebc09ef1575458], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15ebc09ef82ad108], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:51:24.216: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9304" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78
•{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching  [Conformance]","total":278,"completed":150,"skipped":2572,"failed":0}

------------------------------
[sig-storage] EmptyDir volumes 
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:51:24.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on tmpfs
Jan 21 00:51:24.439: INFO: Waiting up to 5m0s for pod "pod-d9c67bea-391a-4802-b077-c03f32863f0c" in namespace "emptydir-1975" to be "success or failure"
Jan 21 00:51:24.455: INFO: Pod "pod-d9c67bea-391a-4802-b077-c03f32863f0c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.893861ms
Jan 21 00:51:26.470: INFO: Pod "pod-d9c67bea-391a-4802-b077-c03f32863f0c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031147317s
Jan 21 00:51:28.480: INFO: Pod "pod-d9c67bea-391a-4802-b077-c03f32863f0c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041357212s
Jan 21 00:51:30.492: INFO: Pod "pod-d9c67bea-391a-4802-b077-c03f32863f0c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.053698622s
Jan 21 00:51:32.501: INFO: Pod "pod-d9c67bea-391a-4802-b077-c03f32863f0c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.062658338s
Jan 21 00:51:34.511: INFO: Pod "pod-d9c67bea-391a-4802-b077-c03f32863f0c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.072356347s
STEP: Saw pod success
Jan 21 00:51:34.511: INFO: Pod "pod-d9c67bea-391a-4802-b077-c03f32863f0c" satisfied condition "success or failure"
Jan 21 00:51:34.516: INFO: Trying to get logs from node jerma-node pod pod-d9c67bea-391a-4802-b077-c03f32863f0c container test-container: 
STEP: delete the pod
Jan 21 00:51:34.559: INFO: Waiting for pod pod-d9c67bea-391a-4802-b077-c03f32863f0c to disappear
Jan 21 00:51:34.565: INFO: Pod pod-d9c67bea-391a-4802-b077-c03f32863f0c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:51:34.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1975" for this suite.

• [SLOW TEST:10.350 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2572,"failed":0}
SSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:51:34.579: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 21 00:51:41.473: INFO: Successfully updated pod "labelsupdateab17ebfe-37ad-4c41-b867-52a400413a57"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:51:43.545: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3156" for this suite.

• [SLOW TEST:8.982 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2576,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:51:43.562: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 21 00:51:43.782: INFO: Waiting up to 5m0s for pod "downwardapi-volume-84c4ad75-aa02-451e-9f29-2bfe207a9ad3" in namespace "projected-6791" to be "success or failure"
Jan 21 00:51:43.822: INFO: Pod "downwardapi-volume-84c4ad75-aa02-451e-9f29-2bfe207a9ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 40.199947ms
Jan 21 00:51:45.837: INFO: Pod "downwardapi-volume-84c4ad75-aa02-451e-9f29-2bfe207a9ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05427156s
Jan 21 00:51:47.845: INFO: Pod "downwardapi-volume-84c4ad75-aa02-451e-9f29-2bfe207a9ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.062406122s
Jan 21 00:51:49.855: INFO: Pod "downwardapi-volume-84c4ad75-aa02-451e-9f29-2bfe207a9ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.072606352s
Jan 21 00:51:51.866: INFO: Pod "downwardapi-volume-84c4ad75-aa02-451e-9f29-2bfe207a9ad3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.0833931s
Jan 21 00:51:53.884: INFO: Pod "downwardapi-volume-84c4ad75-aa02-451e-9f29-2bfe207a9ad3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.101322726s
STEP: Saw pod success
Jan 21 00:51:53.884: INFO: Pod "downwardapi-volume-84c4ad75-aa02-451e-9f29-2bfe207a9ad3" satisfied condition "success or failure"
Jan 21 00:51:53.901: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-84c4ad75-aa02-451e-9f29-2bfe207a9ad3 container client-container: 
STEP: delete the pod
Jan 21 00:51:53.973: INFO: Waiting for pod downwardapi-volume-84c4ad75-aa02-451e-9f29-2bfe207a9ad3 to disappear
Jan 21 00:51:53.985: INFO: Pod downwardapi-volume-84c4ad75-aa02-451e-9f29-2bfe207a9ad3 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:51:53.985: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6791" for this suite.

• [SLOW TEST:10.508 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":153,"skipped":2605,"failed":0}
SSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:51:54.071: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test override arguments
Jan 21 00:51:54.234: INFO: Waiting up to 5m0s for pod "client-containers-e6548a42-cd36-4f1a-bfd3-5d4e32a9621c" in namespace "containers-2283" to be "success or failure"
Jan 21 00:51:54.262: INFO: Pod "client-containers-e6548a42-cd36-4f1a-bfd3-5d4e32a9621c": Phase="Pending", Reason="", readiness=false. Elapsed: 27.705792ms
Jan 21 00:51:56.273: INFO: Pod "client-containers-e6548a42-cd36-4f1a-bfd3-5d4e32a9621c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039496456s
Jan 21 00:51:58.281: INFO: Pod "client-containers-e6548a42-cd36-4f1a-bfd3-5d4e32a9621c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047380774s
Jan 21 00:52:00.309: INFO: Pod "client-containers-e6548a42-cd36-4f1a-bfd3-5d4e32a9621c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07566996s
Jan 21 00:52:02.317: INFO: Pod "client-containers-e6548a42-cd36-4f1a-bfd3-5d4e32a9621c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.083564159s
STEP: Saw pod success
Jan 21 00:52:02.318: INFO: Pod "client-containers-e6548a42-cd36-4f1a-bfd3-5d4e32a9621c" satisfied condition "success or failure"
Jan 21 00:52:02.320: INFO: Trying to get logs from node jerma-node pod client-containers-e6548a42-cd36-4f1a-bfd3-5d4e32a9621c container test-container: 
STEP: delete the pod
Jan 21 00:52:02.405: INFO: Waiting for pod client-containers-e6548a42-cd36-4f1a-bfd3-5d4e32a9621c to disappear
Jan 21 00:52:02.426: INFO: Pod client-containers-e6548a42-cd36-4f1a-bfd3-5d4e32a9621c no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:52:02.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-2283" for this suite.

• [SLOW TEST:8.363 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2612,"failed":0}
SSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:52:02.435: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have an terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:52:14.639: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-5116" for this suite.

• [SLOW TEST:12.219 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have an terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":155,"skipped":2618,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:52:14.657: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-0b3387a8-5851-4ad5-a66d-cee49d042c10
STEP: Creating a pod to test consume configMaps
Jan 21 00:52:14.796: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-219aecc8-e6b3-4247-8501-8f0978b9ecd1" in namespace "projected-8217" to be "success or failure"
Jan 21 00:52:14.817: INFO: Pod "pod-projected-configmaps-219aecc8-e6b3-4247-8501-8f0978b9ecd1": Phase="Pending", Reason="", readiness=false. Elapsed: 20.369117ms
Jan 21 00:52:16.826: INFO: Pod "pod-projected-configmaps-219aecc8-e6b3-4247-8501-8f0978b9ecd1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029330915s
Jan 21 00:52:18.844: INFO: Pod "pod-projected-configmaps-219aecc8-e6b3-4247-8501-8f0978b9ecd1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047012062s
Jan 21 00:52:20.862: INFO: Pod "pod-projected-configmaps-219aecc8-e6b3-4247-8501-8f0978b9ecd1": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065537317s
Jan 21 00:52:22.876: INFO: Pod "pod-projected-configmaps-219aecc8-e6b3-4247-8501-8f0978b9ecd1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.079240908s
STEP: Saw pod success
Jan 21 00:52:22.877: INFO: Pod "pod-projected-configmaps-219aecc8-e6b3-4247-8501-8f0978b9ecd1" satisfied condition "success or failure"
Jan 21 00:52:22.883: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-219aecc8-e6b3-4247-8501-8f0978b9ecd1 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 21 00:52:23.059: INFO: Waiting for pod pod-projected-configmaps-219aecc8-e6b3-4247-8501-8f0978b9ecd1 to disappear
Jan 21 00:52:23.065: INFO: Pod pod-projected-configmaps-219aecc8-e6b3-4247-8501-8f0978b9ecd1 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:52:23.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8217" for this suite.

• [SLOW TEST:8.421 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":156,"skipped":2689,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:52:23.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Jan 21 00:52:23.384: INFO: Number of nodes with available pods: 0
Jan 21 00:52:23.384: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:25.270: INFO: Number of nodes with available pods: 0
Jan 21 00:52:25.270: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:25.520: INFO: Number of nodes with available pods: 0
Jan 21 00:52:25.521: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:26.399: INFO: Number of nodes with available pods: 0
Jan 21 00:52:26.399: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:27.444: INFO: Number of nodes with available pods: 0
Jan 21 00:52:27.444: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:29.257: INFO: Number of nodes with available pods: 0
Jan 21 00:52:29.257: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:29.746: INFO: Number of nodes with available pods: 0
Jan 21 00:52:29.746: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:30.517: INFO: Number of nodes with available pods: 0
Jan 21 00:52:30.517: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:31.594: INFO: Number of nodes with available pods: 0
Jan 21 00:52:31.594: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:32.422: INFO: Number of nodes with available pods: 0
Jan 21 00:52:32.422: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:33.433: INFO: Number of nodes with available pods: 2
Jan 21 00:52:33.433: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Jan 21 00:52:33.465: INFO: Number of nodes with available pods: 1
Jan 21 00:52:33.465: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:34.555: INFO: Number of nodes with available pods: 1
Jan 21 00:52:34.556: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:35.481: INFO: Number of nodes with available pods: 1
Jan 21 00:52:35.481: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:36.482: INFO: Number of nodes with available pods: 1
Jan 21 00:52:36.482: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:37.481: INFO: Number of nodes with available pods: 1
Jan 21 00:52:37.481: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:38.488: INFO: Number of nodes with available pods: 1
Jan 21 00:52:38.488: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:39.478: INFO: Number of nodes with available pods: 1
Jan 21 00:52:39.478: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:40.541: INFO: Number of nodes with available pods: 1
Jan 21 00:52:40.541: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:41.483: INFO: Number of nodes with available pods: 1
Jan 21 00:52:41.483: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:42.483: INFO: Number of nodes with available pods: 1
Jan 21 00:52:42.484: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:43.485: INFO: Number of nodes with available pods: 1
Jan 21 00:52:43.485: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:44.485: INFO: Number of nodes with available pods: 1
Jan 21 00:52:44.486: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:45.483: INFO: Number of nodes with available pods: 1
Jan 21 00:52:45.483: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:46.477: INFO: Number of nodes with available pods: 1
Jan 21 00:52:46.477: INFO: Node jerma-node is running more than one daemon pod
Jan 21 00:52:47.482: INFO: Number of nodes with available pods: 2
Jan 21 00:52:47.482: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3801, will wait for the garbage collector to delete the pods
Jan 21 00:52:47.567: INFO: Deleting DaemonSet.extensions daemon-set took: 24.661449ms
Jan 21 00:52:47.968: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.681748ms
Jan 21 00:53:02.479: INFO: Number of nodes with available pods: 0
Jan 21 00:53:02.480: INFO: Number of running nodes: 0, number of available pods: 0
Jan 21 00:53:02.485: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3801/daemonsets","resourceVersion":"3302588"},"items":null}

Jan 21 00:53:02.489: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3801/pods","resourceVersion":"3302588"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:53:02.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-3801" for this suite.

• [SLOW TEST:39.554 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":157,"skipped":2719,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:53:02.633: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating Agnhost RC
Jan 21 00:53:02.751: INFO: namespace kubectl-462
Jan 21 00:53:02.752: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-462'
Jan 21 00:53:05.622: INFO: stderr: ""
Jan 21 00:53:05.622: INFO: stdout: "replicationcontroller/agnhost-master created\n"
STEP: Waiting for Agnhost master to start.
Jan 21 00:53:06.637: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:53:06.638: INFO: Found 0 / 1
Jan 21 00:53:07.700: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:53:07.700: INFO: Found 0 / 1
Jan 21 00:53:08.646: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:53:08.647: INFO: Found 0 / 1
Jan 21 00:53:09.637: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:53:09.637: INFO: Found 0 / 1
Jan 21 00:53:10.634: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:53:10.634: INFO: Found 0 / 1
Jan 21 00:53:11.629: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:53:11.629: INFO: Found 0 / 1
Jan 21 00:53:12.627: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:53:12.628: INFO: Found 0 / 1
Jan 21 00:53:13.633: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:53:13.633: INFO: Found 0 / 1
Jan 21 00:53:14.636: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:53:14.637: INFO: Found 1 / 1
Jan 21 00:53:14.637: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Jan 21 00:53:14.649: INFO: Selector matched 1 pods for map[app:agnhost]
Jan 21 00:53:14.650: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Jan 21 00:53:14.650: INFO: wait on agnhost-master startup in kubectl-462 
Jan 21 00:53:14.650: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-rzb4q agnhost-master --namespace=kubectl-462'
Jan 21 00:53:14.822: INFO: stderr: ""
Jan 21 00:53:14.823: INFO: stdout: "Paused\n"
STEP: exposing RC
Jan 21 00:53:14.823: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-462'
Jan 21 00:53:15.027: INFO: stderr: ""
Jan 21 00:53:15.027: INFO: stdout: "service/rm2 exposed\n"
Jan 21 00:53:15.032: INFO: Service rm2 in namespace kubectl-462 found.
STEP: exposing service
Jan 21 00:53:17.045: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-462'
Jan 21 00:53:17.278: INFO: stderr: ""
Jan 21 00:53:17.278: INFO: stdout: "service/rm3 exposed\n"
Jan 21 00:53:17.384: INFO: Service rm3 in namespace kubectl-462 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:53:19.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-462" for this suite.

• [SLOW TEST:16.776 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1296
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc  [Conformance]","total":278,"completed":158,"skipped":2729,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:53:19.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 21 00:53:19.523: INFO: PodSpec: initContainers in spec.initContainers
Jan 21 00:54:24.351: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-31391c78-d1a0-475f-a2d0-9d5e3985fe9a", GenerateName:"", Namespace:"init-container-2044", SelfLink:"/api/v1/namespaces/init-container-2044/pods/pod-init-31391c78-d1a0-475f-a2d0-9d5e3985fe9a", UID:"951ddbd4-19a8-4ac1-9ea4-f15010676637", ResourceVersion:"3302872", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63715164799, loc:(*time.Location)(0x7d7cf00)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"523437471"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-67mdd", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0053e1640), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-67mdd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-67mdd", ReadOnly:true, 
MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-67mdd", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc004a85868), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0044e7aa0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004a858f0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc004a85910)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc004a85918), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc004a8591c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164799, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164799, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164799, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164799, loc:(*time.Location)(0x7d7cf00)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.250", PodIP:"10.44.0.2", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.2"}}, StartTime:(*v1.Time)(0xc005168000), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0029d6690)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0029d6700)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://6dc95b85c0c8d774bef66fc037c5e027562904a20e856557ebdeb2d320d81fbe", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc005168040), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc005168020), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc004a8599f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:54:24.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2044" for this suite.

• [SLOW TEST:64.984 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":159,"skipped":2738,"failed":0}
SSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:54:24.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Jan 21 00:54:32.546: INFO: &Pod{ObjectMeta:{send-events-76691388-f041-415a-9835-46f09ce30aef  events-6918 /api/v1/namespaces/events-6918/pods/send-events-76691388-f041-415a-9835-46f09ce30aef b6dcf9e3-5817-46c8-a0dd-d034ce7ca75d 3302912 0 2020-01-21 00:54:24 +0000 UTC   map[name:foo time:488881213] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-zvbns,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-zvbns,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-zvbns,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 00:54:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 00:54:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 00:54:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 00:54:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-21 00:54:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-21 00:54:30 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://7bf74c209716b89ed087c728d460619adac72e1224c5034ba1c5f267e574e3fb,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Jan 21 00:54:34.560: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Jan 21 00:54:36.578: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:54:36.596: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6918" for this suite.

• [SLOW TEST:12.235 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]","total":278,"completed":160,"skipped":2744,"failed":0}
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:54:36.631: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-7af2b6d9-3984-4c99-9063-158a91941843
STEP: Creating a pod to test consume configMaps
Jan 21 00:54:36.753: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bb84ba81-91b2-44b3-b170-87ab2aeb2fa7" in namespace "projected-4601" to be "success or failure"
Jan 21 00:54:36.772: INFO: Pod "pod-projected-configmaps-bb84ba81-91b2-44b3-b170-87ab2aeb2fa7": Phase="Pending", Reason="", readiness=false. Elapsed: 18.58165ms
Jan 21 00:54:38.780: INFO: Pod "pod-projected-configmaps-bb84ba81-91b2-44b3-b170-87ab2aeb2fa7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.026960053s
Jan 21 00:54:40.787: INFO: Pod "pod-projected-configmaps-bb84ba81-91b2-44b3-b170-87ab2aeb2fa7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033701192s
Jan 21 00:54:42.799: INFO: Pod "pod-projected-configmaps-bb84ba81-91b2-44b3-b170-87ab2aeb2fa7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045290473s
Jan 21 00:54:44.809: INFO: Pod "pod-projected-configmaps-bb84ba81-91b2-44b3-b170-87ab2aeb2fa7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.055337918s
STEP: Saw pod success
Jan 21 00:54:44.809: INFO: Pod "pod-projected-configmaps-bb84ba81-91b2-44b3-b170-87ab2aeb2fa7" satisfied condition "success or failure"
Jan 21 00:54:44.824: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-bb84ba81-91b2-44b3-b170-87ab2aeb2fa7 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 21 00:54:44.901: INFO: Waiting for pod pod-projected-configmaps-bb84ba81-91b2-44b3-b170-87ab2aeb2fa7 to disappear
Jan 21 00:54:44.913: INFO: Pod pod-projected-configmaps-bb84ba81-91b2-44b3-b170-87ab2aeb2fa7 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:54:44.914: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4601" for this suite.

• [SLOW TEST:8.292 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2748,"failed":0}
SSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:54:44.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-9988
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-9988
I0121 00:54:45.543917       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-9988, replica count: 2
I0121 00:54:48.596762       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 00:54:51.598004       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 00:54:54.600255       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 00:54:57.600820       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 21 00:54:57.600: INFO: Creating new exec pod
Jan 21 00:55:06.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9988 execpoddndgb -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 21 00:55:08.134: INFO: stderr: "I0121 00:55:06.923235    3415 log.go:172] (0xc000970630) (0xc0009ec000) Create stream\nI0121 00:55:06.923356    3415 log.go:172] (0xc000970630) (0xc0009ec000) Stream added, broadcasting: 1\nI0121 00:55:06.928191    3415 log.go:172] (0xc000970630) Reply frame received for 1\nI0121 00:55:06.928231    3415 log.go:172] (0xc000970630) (0xc0008e0000) Create stream\nI0121 00:55:06.928238    3415 log.go:172] (0xc000970630) (0xc0008e0000) Stream added, broadcasting: 3\nI0121 00:55:06.929494    3415 log.go:172] (0xc000970630) Reply frame received for 3\nI0121 00:55:06.929518    3415 log.go:172] (0xc000970630) (0xc0006cba40) Create stream\nI0121 00:55:06.929526    3415 log.go:172] (0xc000970630) (0xc0006cba40) Stream added, broadcasting: 5\nI0121 00:55:06.930835    3415 log.go:172] (0xc000970630) Reply frame received for 5\nI0121 00:55:07.984182    3415 log.go:172] (0xc000970630) Data frame received for 5\nI0121 00:55:07.984259    3415 log.go:172] (0xc0006cba40) (5) Data frame handling\nI0121 00:55:07.984339    3415 log.go:172] (0xc0006cba40) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0121 00:55:07.996072    3415 log.go:172] (0xc000970630) Data frame received for 5\nI0121 00:55:07.996697    3415 log.go:172] (0xc0006cba40) (5) Data frame handling\nI0121 00:55:07.996799    3415 log.go:172] (0xc0006cba40) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0121 00:55:08.123678    3415 log.go:172] (0xc000970630) Data frame received for 1\nI0121 00:55:08.123786    3415 log.go:172] (0xc000970630) (0xc0008e0000) Stream removed, broadcasting: 3\nI0121 00:55:08.123872    3415 log.go:172] (0xc0009ec000) (1) Data frame handling\nI0121 00:55:08.123889    3415 log.go:172] (0xc0009ec000) (1) Data frame sent\nI0121 00:55:08.123896    3415 log.go:172] (0xc000970630) (0xc0009ec000) Stream removed, broadcasting: 1\nI0121 00:55:08.124416    3415 log.go:172] (0xc000970630) (0xc0006cba40) Stream removed, broadcasting: 5\nI0121 00:55:08.124450    3415 log.go:172] (0xc000970630) (0xc0009ec000) Stream removed, broadcasting: 1\nI0121 00:55:08.124459    3415 log.go:172] (0xc000970630) (0xc0008e0000) Stream removed, broadcasting: 3\nI0121 00:55:08.124467    3415 log.go:172] (0xc000970630) (0xc0006cba40) Stream removed, broadcasting: 5\nI0121 00:55:08.124987    3415 log.go:172] (0xc000970630) Go away received\n"
Jan 21 00:55:08.134: INFO: stdout: ""
Jan 21 00:55:08.135: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9988 execpoddndgb -- /bin/sh -x -c nc -zv -t -w 2 10.96.80.216 80'
Jan 21 00:55:08.535: INFO: stderr: "I0121 00:55:08.333809    3435 log.go:172] (0xc000a3a000) (0xc000964000) Create stream\nI0121 00:55:08.334249    3435 log.go:172] (0xc000a3a000) (0xc000964000) Stream added, broadcasting: 1\nI0121 00:55:08.341410    3435 log.go:172] (0xc000a3a000) Reply frame received for 1\nI0121 00:55:08.341662    3435 log.go:172] (0xc000a3a000) (0xc0009640a0) Create stream\nI0121 00:55:08.341685    3435 log.go:172] (0xc000a3a000) (0xc0009640a0) Stream added, broadcasting: 3\nI0121 00:55:08.343986    3435 log.go:172] (0xc000a3a000) Reply frame received for 3\nI0121 00:55:08.344045    3435 log.go:172] (0xc000a3a000) (0xc0006c9b80) Create stream\nI0121 00:55:08.344076    3435 log.go:172] (0xc000a3a000) (0xc0006c9b80) Stream added, broadcasting: 5\nI0121 00:55:08.346255    3435 log.go:172] (0xc000a3a000) Reply frame received for 5\nI0121 00:55:08.422787    3435 log.go:172] (0xc000a3a000) Data frame received for 5\nI0121 00:55:08.422927    3435 log.go:172] (0xc0006c9b80) (5) Data frame handling\nI0121 00:55:08.422982    3435 log.go:172] (0xc0006c9b80) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.80.216 80\nI0121 00:55:08.424582    3435 log.go:172] (0xc000a3a000) Data frame received for 5\nI0121 00:55:08.424601    3435 log.go:172] (0xc0006c9b80) (5) Data frame handling\nI0121 00:55:08.424616    3435 log.go:172] (0xc0006c9b80) (5) Data frame sent\nConnection to 10.96.80.216 80 port [tcp/http] succeeded!\nI0121 00:55:08.512834    3435 log.go:172] (0xc000a3a000) (0xc0006c9b80) Stream removed, broadcasting: 5\nI0121 00:55:08.513107    3435 log.go:172] (0xc000a3a000) Data frame received for 1\nI0121 00:55:08.513207    3435 log.go:172] (0xc000a3a000) (0xc0009640a0) Stream removed, broadcasting: 3\nI0121 00:55:08.513289    3435 log.go:172] (0xc000964000) (1) Data frame handling\nI0121 00:55:08.513411    3435 log.go:172] (0xc000964000) (1) Data frame sent\nI0121 00:55:08.513456    3435 log.go:172] (0xc000a3a000) (0xc000964000) Stream removed, broadcasting: 1\nI0121 00:55:08.513513    3435 log.go:172] (0xc000a3a000) Go away received\nI0121 00:55:08.515112    3435 log.go:172] (0xc000a3a000) (0xc000964000) Stream removed, broadcasting: 1\nI0121 00:55:08.515126    3435 log.go:172] (0xc000a3a000) (0xc0009640a0) Stream removed, broadcasting: 3\nI0121 00:55:08.515137    3435 log.go:172] (0xc000a3a000) (0xc0006c9b80) Stream removed, broadcasting: 5\n"
Jan 21 00:55:08.536: INFO: stdout: ""
Jan 21 00:55:08.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9988 execpoddndgb -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32612'
Jan 21 00:55:09.474: INFO: stderr: "I0121 00:55:08.706917    3455 log.go:172] (0xc0000f5600) (0xc00097e8c0) Create stream\nI0121 00:55:08.707090    3455 log.go:172] (0xc0000f5600) (0xc00097e8c0) Stream added, broadcasting: 1\nI0121 00:55:08.713266    3455 log.go:172] (0xc0000f5600) Reply frame received for 1\nI0121 00:55:08.713355    3455 log.go:172] (0xc0000f5600) (0xc00065fd60) Create stream\nI0121 00:55:08.713372    3455 log.go:172] (0xc0000f5600) (0xc00065fd60) Stream added, broadcasting: 3\nI0121 00:55:08.714789    3455 log.go:172] (0xc0000f5600) Reply frame received for 3\nI0121 00:55:08.714892    3455 log.go:172] (0xc0000f5600) (0xc0005d6960) Create stream\nI0121 00:55:08.714915    3455 log.go:172] (0xc0000f5600) (0xc0005d6960) Stream added, broadcasting: 5\nI0121 00:55:08.717526    3455 log.go:172] (0xc0000f5600) Reply frame received for 5\nI0121 00:55:09.372236    3455 log.go:172] (0xc0000f5600) Data frame received for 5\nI0121 00:55:09.372506    3455 log.go:172] (0xc0005d6960) (5) Data frame handling\nI0121 00:55:09.372612    3455 log.go:172] (0xc0005d6960) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32612\nConnection to 10.96.2.250 32612 port [tcp/32612] succeeded!\nI0121 00:55:09.462702    3455 log.go:172] (0xc0000f5600) Data frame received for 1\nI0121 00:55:09.462820    3455 log.go:172] (0xc0000f5600) (0xc0005d6960) Stream removed, broadcasting: 5\nI0121 00:55:09.462968    3455 log.go:172] (0xc0000f5600) (0xc00065fd60) Stream removed, broadcasting: 3\nI0121 00:55:09.463128    3455 log.go:172] (0xc00097e8c0) (1) Data frame handling\nI0121 00:55:09.463320    3455 log.go:172] (0xc00097e8c0) (1) Data frame sent\nI0121 00:55:09.463364    3455 log.go:172] (0xc0000f5600) (0xc00097e8c0) Stream removed, broadcasting: 1\nI0121 00:55:09.463486    3455 log.go:172] (0xc0000f5600) Go away received\nI0121 00:55:09.464977    3455 log.go:172] (0xc0000f5600) (0xc00097e8c0) Stream removed, broadcasting: 1\nI0121 00:55:09.465086    3455 log.go:172] (0xc0000f5600) (0xc00065fd60) Stream removed, broadcasting: 3\nI0121 00:55:09.465098    3455 log.go:172] (0xc0000f5600) (0xc0005d6960) Stream removed, broadcasting: 5\n"
Jan 21 00:55:09.474: INFO: stdout: ""
Jan 21 00:55:09.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9988 execpoddndgb -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32612'
Jan 21 00:55:09.858: INFO: stderr: "I0121 00:55:09.669057    3473 log.go:172] (0xc00061a9a0) (0xc00097e0a0) Create stream\nI0121 00:55:09.669321    3473 log.go:172] (0xc00061a9a0) (0xc00097e0a0) Stream added, broadcasting: 1\nI0121 00:55:09.673164    3473 log.go:172] (0xc00061a9a0) Reply frame received for 1\nI0121 00:55:09.673241    3473 log.go:172] (0xc00061a9a0) (0xc00065bb80) Create stream\nI0121 00:55:09.673254    3473 log.go:172] (0xc00061a9a0) (0xc00065bb80) Stream added, broadcasting: 3\nI0121 00:55:09.674402    3473 log.go:172] (0xc00061a9a0) Reply frame received for 3\nI0121 00:55:09.674479    3473 log.go:172] (0xc00061a9a0) (0xc00097e140) Create stream\nI0121 00:55:09.674491    3473 log.go:172] (0xc00061a9a0) (0xc00097e140) Stream added, broadcasting: 5\nI0121 00:55:09.677479    3473 log.go:172] (0xc00061a9a0) Reply frame received for 5\nI0121 00:55:09.738355    3473 log.go:172] (0xc00061a9a0) Data frame received for 5\nI0121 00:55:09.738453    3473 log.go:172] (0xc00097e140) (5) Data frame handling\nI0121 00:55:09.738537    3473 log.go:172] (0xc00097e140) (5) Data frame sent\n+ nc -zvI0121 00:55:09.740611    3473 log.go:172] (0xc00061a9a0) Data frame received for 5\nI0121 00:55:09.740696    3473 log.go:172] (0xc00097e140) (5) Data frame handling\nI0121 00:55:09.740720    3473 log.go:172] (0xc00097e140) (5) Data frame sent\n -t -w 2 10.96.1.234 32612\nI0121 00:55:09.745293    3473 log.go:172] (0xc00061a9a0) Data frame received for 5\nI0121 00:55:09.745349    3473 log.go:172] (0xc00097e140) (5) Data frame handling\nI0121 00:55:09.745368    3473 log.go:172] (0xc00097e140) (5) Data frame sent\nConnection to 10.96.1.234 32612 port [tcp/32612] succeeded!\nI0121 00:55:09.843328    3473 log.go:172] (0xc00061a9a0) Data frame received for 1\nI0121 00:55:09.843515    3473 log.go:172] (0xc00061a9a0) (0xc00065bb80) Stream removed, broadcasting: 3\nI0121 00:55:09.843609    3473 log.go:172] (0xc00097e0a0) (1) Data frame handling\nI0121 00:55:09.843628    3473 log.go:172] (0xc00097e0a0) (1) Data frame sent\nI0121 00:55:09.843672    3473 log.go:172] (0xc00061a9a0) (0xc00097e140) Stream removed, broadcasting: 5\nI0121 00:55:09.843718    3473 log.go:172] (0xc00061a9a0) (0xc00097e0a0) Stream removed, broadcasting: 1\nI0121 00:55:09.843737    3473 log.go:172] (0xc00061a9a0) Go away received\nI0121 00:55:09.845486    3473 log.go:172] (0xc00061a9a0) (0xc00097e0a0) Stream removed, broadcasting: 1\nI0121 00:55:09.845531    3473 log.go:172] (0xc00061a9a0) (0xc00065bb80) Stream removed, broadcasting: 3\nI0121 00:55:09.845547    3473 log.go:172] (0xc00061a9a0) (0xc00097e140) Stream removed, broadcasting: 5\n"
Jan 21 00:55:09.858: INFO: stdout: ""
Jan 21 00:55:09.858: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:55:09.902: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-9988" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691

• [SLOW TEST:24.993 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":162,"skipped":2752,"failed":0}
S
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:55:09.917: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:73
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:55:10.013: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Jan 21 00:55:15.036: INFO: Pod name cleanup-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 21 00:55:21.405: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:67
Jan 21 00:55:29.743: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-6610 /apis/apps/v1/namespaces/deployment-6610/deployments/test-cleanup-deployment 6e90c3c2-e6fc-4d93-97c8-f988409a98f5 3303204 1 2020-01-21 00:55:21 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002bc7438  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-21 00:55:21 +0000 UTC,LastTransitionTime:2020-01-21 00:55:21 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-55ffc6b7b6" has successfully progressed.,LastUpdateTime:2020-01-21 00:55:29 +0000 UTC,LastTransitionTime:2020-01-21 00:55:21 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 21 00:55:29.747: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6  deployment-6610 /apis/apps/v1/namespaces/deployment-6610/replicasets/test-cleanup-deployment-55ffc6b7b6 6aa29543-cb20-453c-bf2a-a7d522f08061 3303193 1 2020-01-21 00:55:21 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 6e90c3c2-e6fc-4d93-97c8-f988409a98f5 0xc0008902f7 0xc0008902f8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0008904d8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 21 00:55:29.752: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-lwpbq" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-lwpbq test-cleanup-deployment-55ffc6b7b6- deployment-6610 /api/v1/namespaces/deployment-6610/pods/test-cleanup-deployment-55ffc6b7b6-lwpbq 3ec2ea40-66db-4661-b28e-1d8f2d3b7bb6 3303192 0 2020-01-21 00:55:21 +0000 UTC   map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 6aa29543-cb20-453c-bf2a-a7d522f08061 0xc000b012f7 0xc000b012f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-w88k7,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-w88k7,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-w88k7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 00:55:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 00:55:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 00:55:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 00:55:21 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-21 00:55:22 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-21 00:55:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://7329ca975b826b9a4a5e26f51adbcb61fbbfdc3bb9e9e5104780b9fe5b5d4ef5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:55:29.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6610" for this suite.

• [SLOW TEST:19.852 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":163,"skipped":2753,"failed":0}
SS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:55:29.769: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 21 00:55:29.992: INFO: Waiting up to 5m0s for pod "downwardapi-volume-df29ed0b-7b40-4b95-bfbb-4a3ca8c2bfaa" in namespace "downward-api-2350" to be "success or failure"
Jan 21 00:55:30.192: INFO: Pod "downwardapi-volume-df29ed0b-7b40-4b95-bfbb-4a3ca8c2bfaa": Phase="Pending", Reason="", readiness=false. Elapsed: 199.444627ms
Jan 21 00:55:32.206: INFO: Pod "downwardapi-volume-df29ed0b-7b40-4b95-bfbb-4a3ca8c2bfaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.213128461s
Jan 21 00:55:34.213: INFO: Pod "downwardapi-volume-df29ed0b-7b40-4b95-bfbb-4a3ca8c2bfaa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.220736615s
Jan 21 00:55:36.223: INFO: Pod "downwardapi-volume-df29ed0b-7b40-4b95-bfbb-4a3ca8c2bfaa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.23086197s
Jan 21 00:55:38.231: INFO: Pod "downwardapi-volume-df29ed0b-7b40-4b95-bfbb-4a3ca8c2bfaa": Phase="Pending", Reason="", readiness=false. Elapsed: 8.238388478s
Jan 21 00:55:40.244: INFO: Pod "downwardapi-volume-df29ed0b-7b40-4b95-bfbb-4a3ca8c2bfaa": Phase="Pending", Reason="", readiness=false. Elapsed: 10.25146257s
Jan 21 00:55:42.250: INFO: Pod "downwardapi-volume-df29ed0b-7b40-4b95-bfbb-4a3ca8c2bfaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.257091889s
STEP: Saw pod success
Jan 21 00:55:42.250: INFO: Pod "downwardapi-volume-df29ed0b-7b40-4b95-bfbb-4a3ca8c2bfaa" satisfied condition "success or failure"
Jan 21 00:55:42.254: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-df29ed0b-7b40-4b95-bfbb-4a3ca8c2bfaa container client-container: 
STEP: delete the pod
Jan 21 00:55:42.556: INFO: Waiting for pod downwardapi-volume-df29ed0b-7b40-4b95-bfbb-4a3ca8c2bfaa to disappear
Jan 21 00:55:42.568: INFO: Pod downwardapi-volume-df29ed0b-7b40-4b95-bfbb-4a3ca8c2bfaa no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:55:42.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-2350" for this suite.

• [SLOW TEST:12.834 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":164,"skipped":2755,"failed":0}
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:55:42.604: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 21 00:55:43.020: INFO: Waiting up to 5m0s for pod "downwardapi-volume-06ff50bf-4517-4692-8a8a-53a827027453" in namespace "downward-api-3855" to be "success or failure"
Jan 21 00:55:43.028: INFO: Pod "downwardapi-volume-06ff50bf-4517-4692-8a8a-53a827027453": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064849ms
Jan 21 00:55:45.033: INFO: Pod "downwardapi-volume-06ff50bf-4517-4692-8a8a-53a827027453": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013353625s
Jan 21 00:55:47.040: INFO: Pod "downwardapi-volume-06ff50bf-4517-4692-8a8a-53a827027453": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020388846s
Jan 21 00:55:49.064: INFO: Pod "downwardapi-volume-06ff50bf-4517-4692-8a8a-53a827027453": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044195001s
Jan 21 00:55:51.077: INFO: Pod "downwardapi-volume-06ff50bf-4517-4692-8a8a-53a827027453": Phase="Pending", Reason="", readiness=false. Elapsed: 8.057468544s
Jan 21 00:55:53.085: INFO: Pod "downwardapi-volume-06ff50bf-4517-4692-8a8a-53a827027453": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.065257447s
STEP: Saw pod success
Jan 21 00:55:53.085: INFO: Pod "downwardapi-volume-06ff50bf-4517-4692-8a8a-53a827027453" satisfied condition "success or failure"
Jan 21 00:55:53.088: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-06ff50bf-4517-4692-8a8a-53a827027453 container client-container: 
STEP: delete the pod
Jan 21 00:55:53.129: INFO: Waiting for pod downwardapi-volume-06ff50bf-4517-4692-8a8a-53a827027453 to disappear
Jan 21 00:55:53.167: INFO: Pod downwardapi-volume-06ff50bf-4517-4692-8a8a-53a827027453 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:55:53.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3855" for this suite.

• [SLOW TEST:10.574 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2759,"failed":0}
S
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:55:53.178: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Jan 21 00:56:01.973: INFO: Successfully updated pod "pod-update-activedeadlineseconds-585f8367-c77f-4cf8-8dea-db4cb7c47d1d"
Jan 21 00:56:01.973: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-585f8367-c77f-4cf8-8dea-db4cb7c47d1d" in namespace "pods-3134" to be "terminated due to deadline exceeded"
Jan 21 00:56:01.984: INFO: Pod "pod-update-activedeadlineseconds-585f8367-c77f-4cf8-8dea-db4cb7c47d1d": Phase="Running", Reason="", readiness=true. Elapsed: 10.953276ms
Jan 21 00:56:03.991: INFO: Pod "pod-update-activedeadlineseconds-585f8367-c77f-4cf8-8dea-db4cb7c47d1d": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.017467736s
Jan 21 00:56:03.991: INFO: Pod "pod-update-activedeadlineseconds-585f8367-c77f-4cf8-8dea-db4cb7c47d1d" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:56:03.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3134" for this suite.

• [SLOW TEST:10.827 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2760,"failed":0}
SS
------------------------------
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:56:04.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1734
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 21 00:56:04.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-1118'
Jan 21 00:56:04.274: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 21 00:56:04.274: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1739
Jan 21 00:56:08.430: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-1118'
Jan 21 00:56:08.767: INFO: stderr: ""
Jan 21 00:56:08.768: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:56:08.768: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1118" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image  [Conformance]","total":278,"completed":167,"skipped":2762,"failed":0}
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:56:08.791: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-0ed36a15-f5a7-4424-8234-25661679011c
STEP: Creating a pod to test consume secrets
Jan 21 00:56:09.112: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-d25856c4-4073-4eda-aef7-4148333a5aae" in namespace "projected-8639" to be "success or failure"
Jan 21 00:56:09.170: INFO: Pod "pod-projected-secrets-d25856c4-4073-4eda-aef7-4148333a5aae": Phase="Pending", Reason="", readiness=false. Elapsed: 58.236598ms
Jan 21 00:56:11.178: INFO: Pod "pod-projected-secrets-d25856c4-4073-4eda-aef7-4148333a5aae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066427795s
Jan 21 00:56:13.189: INFO: Pod "pod-projected-secrets-d25856c4-4073-4eda-aef7-4148333a5aae": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076648879s
Jan 21 00:56:15.199: INFO: Pod "pod-projected-secrets-d25856c4-4073-4eda-aef7-4148333a5aae": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087075314s
Jan 21 00:56:17.205: INFO: Pod "pod-projected-secrets-d25856c4-4073-4eda-aef7-4148333a5aae": Phase="Pending", Reason="", readiness=false. Elapsed: 8.093301273s
Jan 21 00:56:19.215: INFO: Pod "pod-projected-secrets-d25856c4-4073-4eda-aef7-4148333a5aae": Phase="Pending", Reason="", readiness=false. Elapsed: 10.103380713s
Jan 21 00:56:21.222: INFO: Pod "pod-projected-secrets-d25856c4-4073-4eda-aef7-4148333a5aae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.110479505s
STEP: Saw pod success
Jan 21 00:56:21.223: INFO: Pod "pod-projected-secrets-d25856c4-4073-4eda-aef7-4148333a5aae" satisfied condition "success or failure"
Jan 21 00:56:21.233: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-d25856c4-4073-4eda-aef7-4148333a5aae container projected-secret-volume-test: 
STEP: delete the pod
Jan 21 00:56:21.406: INFO: Waiting for pod pod-projected-secrets-d25856c4-4073-4eda-aef7-4148333a5aae to disappear
Jan 21 00:56:21.413: INFO: Pod pod-projected-secrets-d25856c4-4073-4eda-aef7-4148333a5aae no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:56:21.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8639" for this suite.

• [SLOW TEST:12.637 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2770,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:56:21.430: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 21 00:56:22.568: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 21 00:56:24.593: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164982, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164982, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164982, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164982, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:56:26.620: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164982, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164982, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164982, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164982, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:56:28.607: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164982, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164982, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164982, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164982, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 21 00:56:31.687: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:56:31.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8582" for this suite.
STEP: Destroying namespace "webhook-8582-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.544 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":169,"skipped":2811,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:56:31.975: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Jan 21 00:56:32.780: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Jan 21 00:56:34.800: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:56:36.807: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:56:38.879: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:56:40.892: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:56:42.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715164992, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 21 00:56:45.896: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:56:45.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:56:47.413: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-4877" for this suite.
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136

• [SLOW TEST:15.615 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":170,"skipped":2817,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:56:47.592: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 21 00:56:48.221: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 21 00:56:50.246: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165008, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165008, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165008, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165008, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:56:52.272: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165008, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165008, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165008, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165008, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:56:54.254: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165008, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165008, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165008, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165008, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:56:56.252: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165008, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165008, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165008, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165008, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 21 00:56:59.291: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:57:11.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8743" for this suite.
STEP: Destroying namespace "webhook-8743-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:24.245 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":171,"skipped":2837,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:57:11.838: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test substitution in container's args
Jan 21 00:57:11.922: INFO: Waiting up to 5m0s for pod "var-expansion-14b82d4e-85db-47ff-83e5-306ee2af6391" in namespace "var-expansion-8270" to be "success or failure"
Jan 21 00:57:11.979: INFO: Pod "var-expansion-14b82d4e-85db-47ff-83e5-306ee2af6391": Phase="Pending", Reason="", readiness=false. Elapsed: 56.825807ms
Jan 21 00:57:13.989: INFO: Pod "var-expansion-14b82d4e-85db-47ff-83e5-306ee2af6391": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066851619s
Jan 21 00:57:15.996: INFO: Pod "var-expansion-14b82d4e-85db-47ff-83e5-306ee2af6391": Phase="Pending", Reason="", readiness=false. Elapsed: 4.073958043s
Jan 21 00:57:18.033: INFO: Pod "var-expansion-14b82d4e-85db-47ff-83e5-306ee2af6391": Phase="Pending", Reason="", readiness=false. Elapsed: 6.110945284s
Jan 21 00:57:20.039: INFO: Pod "var-expansion-14b82d4e-85db-47ff-83e5-306ee2af6391": Phase="Pending", Reason="", readiness=false. Elapsed: 8.11710083s
Jan 21 00:57:22.046: INFO: Pod "var-expansion-14b82d4e-85db-47ff-83e5-306ee2af6391": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.12358218s
STEP: Saw pod success
Jan 21 00:57:22.046: INFO: Pod "var-expansion-14b82d4e-85db-47ff-83e5-306ee2af6391" satisfied condition "success or failure"
Jan 21 00:57:22.049: INFO: Trying to get logs from node jerma-node pod var-expansion-14b82d4e-85db-47ff-83e5-306ee2af6391 container dapi-container: 
STEP: delete the pod
Jan 21 00:57:22.100: INFO: Waiting for pod var-expansion-14b82d4e-85db-47ff-83e5-306ee2af6391 to disappear
Jan 21 00:57:22.110: INFO: Pod var-expansion-14b82d4e-85db-47ff-83e5-306ee2af6391 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:57:22.111: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-8270" for this suite.

• [SLOW TEST:10.288 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":172,"skipped":2854,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run pod 
  should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:57:22.127: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1862
[It] should create a pod from an image when restart is Never  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 21 00:57:22.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-420'
Jan 21 00:57:22.411: INFO: stderr: ""
Jan 21 00:57:22.411: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1867
Jan 21 00:57:22.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-420'
Jan 21 00:57:28.864: INFO: stderr: ""
Jan 21 00:57:28.865: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:57:28.865: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-420" for this suite.

• [SLOW TEST:6.751 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1858
    should create a pod from an image when restart is Never  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never  [Conformance]","total":278,"completed":173,"skipped":2859,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:57:28.880: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 21 00:57:29.798: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 21 00:57:31.817: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165049, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165049, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165049, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165049, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:57:33.827: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165049, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165049, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165049, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165049, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 00:57:35.840: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165049, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165049, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165049, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165049, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 21 00:57:38.899: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:57:39.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3646" for this suite.
STEP: Destroying namespace "webhook-3646-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.394 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":174,"skipped":2880,"failed":0}
SSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:57:39.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W0121 00:57:40.309785       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 21 00:57:40.310: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:57:40.310: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3924" for this suite.
•{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":175,"skipped":2884,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:57:40.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Starting the proxy
Jan 21 00:57:42.118: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix266330047/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:57:42.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3889" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path  [Conformance]","total":278,"completed":176,"skipped":2912,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:57:42.812: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:57:43.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-7398" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":177,"skipped":2919,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:57:43.227: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-upd-0fe22043-fbf1-4b8b-88e4-84a1152689ab
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:57:57.705: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1982" for this suite.

• [SLOW TEST:14.489 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2943,"failed":0}
S
------------------------------
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:57:57.717: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:57:57.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-3486" for this suite.
•{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":179,"skipped":2944,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:57:57.977: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] custom resource defaulting for requests and from storage works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:57:58.037: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:57:59.409: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-16" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works  [Conformance]","total":278,"completed":180,"skipped":2963,"failed":0}
SSS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:57:59.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 21 00:58:10.239: INFO: Successfully updated pod "annotationupdatef5df63f5-ffb4-44e3-bd37-6e31693f54e7"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:58:12.292: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3500" for this suite.

• [SLOW TEST:12.877 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2966,"failed":0}
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:58:12.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Jan 21 00:58:12.507: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Jan 21 00:58:27.657: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:58:27.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3678" for this suite.

• [SLOW TEST:15.366 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2976,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:58:27.680: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:58:34.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-4886" for this suite.
STEP: Destroying namespace "nsdeletetest-6994" for this suite.
Jan 21 00:58:34.071: INFO: Namespace nsdeletetest-6994 was already deleted
STEP: Destroying namespace "nsdeletetest-379" for this suite.

• [SLOW TEST:6.397 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":183,"skipped":3043,"failed":0}
SSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:58:34.078: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:58:34.146: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-4762dab3-a57f-4e8d-8344-8532fbb251f4" in namespace "security-context-test-9304" to be "success or failure"
Jan 21 00:58:34.193: INFO: Pod "alpine-nnp-false-4762dab3-a57f-4e8d-8344-8532fbb251f4": Phase="Pending", Reason="", readiness=false. Elapsed: 47.379811ms
Jan 21 00:58:36.207: INFO: Pod "alpine-nnp-false-4762dab3-a57f-4e8d-8344-8532fbb251f4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060926056s
Jan 21 00:58:38.215: INFO: Pod "alpine-nnp-false-4762dab3-a57f-4e8d-8344-8532fbb251f4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069614365s
Jan 21 00:58:40.222: INFO: Pod "alpine-nnp-false-4762dab3-a57f-4e8d-8344-8532fbb251f4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076353896s
Jan 21 00:58:42.228: INFO: Pod "alpine-nnp-false-4762dab3-a57f-4e8d-8344-8532fbb251f4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08273997s
Jan 21 00:58:42.229: INFO: Pod "alpine-nnp-false-4762dab3-a57f-4e8d-8344-8532fbb251f4" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:58:42.245: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9304" for this suite.

• [SLOW TEST:8.181 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":184,"skipped":3052,"failed":0}
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:58:42.260: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 21 00:58:48.468: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:58:48.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-1027" for this suite.

• [SLOW TEST:6.354 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3052,"failed":0}
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:58:48.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6849.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-6849.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6849.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-6849.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-6849.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6849.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 21 00:58:58.982: INFO: DNS probes using dns-6849/dns-test-ffa3144d-5495-4f9a-a937-34b66861306f succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:58:59.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6849" for this suite.

• [SLOW TEST:10.595 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":186,"skipped":3074,"failed":0}
SSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:58:59.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0666 on tmpfs
Jan 21 00:58:59.392: INFO: Waiting up to 5m0s for pod "pod-af781577-b506-4b90-9336-de2941c384da" in namespace "emptydir-755" to be "success or failure"
Jan 21 00:58:59.396: INFO: Pod "pod-af781577-b506-4b90-9336-de2941c384da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.528461ms
Jan 21 00:59:01.402: INFO: Pod "pod-af781577-b506-4b90-9336-de2941c384da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010301685s
Jan 21 00:59:03.416: INFO: Pod "pod-af781577-b506-4b90-9336-de2941c384da": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023801362s
Jan 21 00:59:05.425: INFO: Pod "pod-af781577-b506-4b90-9336-de2941c384da": Phase="Pending", Reason="", readiness=false. Elapsed: 6.033343159s
Jan 21 00:59:07.433: INFO: Pod "pod-af781577-b506-4b90-9336-de2941c384da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.041579871s
STEP: Saw pod success
Jan 21 00:59:07.434: INFO: Pod "pod-af781577-b506-4b90-9336-de2941c384da" satisfied condition "success or failure"
Jan 21 00:59:07.437: INFO: Trying to get logs from node jerma-node pod pod-af781577-b506-4b90-9336-de2941c384da container test-container: 
STEP: delete the pod
Jan 21 00:59:07.495: INFO: Waiting for pod pod-af781577-b506-4b90-9336-de2941c384da to disappear
Jan 21 00:59:07.518: INFO: Pod pod-af781577-b506-4b90-9336-de2941c384da no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:59:07.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-755" for this suite.

• [SLOW TEST:8.354 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3077,"failed":0}
SSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:59:07.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-c2813acd-f85c-435c-ab21-1c0f718e3520
STEP: Creating a pod to test consume secrets
Jan 21 00:59:07.904: INFO: Waiting up to 5m0s for pod "pod-secrets-38233067-37e2-4d05-8b84-ad259e6a28e2" in namespace "secrets-2481" to be "success or failure"
Jan 21 00:59:07.928: INFO: Pod "pod-secrets-38233067-37e2-4d05-8b84-ad259e6a28e2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.829231ms
Jan 21 00:59:09.936: INFO: Pod "pod-secrets-38233067-37e2-4d05-8b84-ad259e6a28e2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032131856s
Jan 21 00:59:11.945: INFO: Pod "pod-secrets-38233067-37e2-4d05-8b84-ad259e6a28e2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041028204s
Jan 21 00:59:13.962: INFO: Pod "pod-secrets-38233067-37e2-4d05-8b84-ad259e6a28e2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.057788979s
STEP: Saw pod success
Jan 21 00:59:13.962: INFO: Pod "pod-secrets-38233067-37e2-4d05-8b84-ad259e6a28e2" satisfied condition "success or failure"
Jan 21 00:59:13.966: INFO: Trying to get logs from node jerma-node pod pod-secrets-38233067-37e2-4d05-8b84-ad259e6a28e2 container secret-volume-test: 
STEP: delete the pod
Jan 21 00:59:14.050: INFO: Waiting for pod pod-secrets-38233067-37e2-4d05-8b84-ad259e6a28e2 to disappear
Jan 21 00:59:14.094: INFO: Pod pod-secrets-38233067-37e2-4d05-8b84-ad259e6a28e2 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:59:14.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2481" for this suite.
STEP: Destroying namespace "secret-namespace-40" for this suite.

• [SLOW TEST:6.552 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3081,"failed":0}
S
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:59:14.118: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 00:59:14.268: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-b4881089-8290-4c20-941b-fefb0ddf55db" in namespace "security-context-test-2637" to be "success or failure"
Jan 21 00:59:14.291: INFO: Pod "busybox-readonly-false-b4881089-8290-4c20-941b-fefb0ddf55db": Phase="Pending", Reason="", readiness=false. Elapsed: 21.900698ms
Jan 21 00:59:16.303: INFO: Pod "busybox-readonly-false-b4881089-8290-4c20-941b-fefb0ddf55db": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034533287s
Jan 21 00:59:18.312: INFO: Pod "busybox-readonly-false-b4881089-8290-4c20-941b-fefb0ddf55db": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042733653s
Jan 21 00:59:20.318: INFO: Pod "busybox-readonly-false-b4881089-8290-4c20-941b-fefb0ddf55db": Phase="Pending", Reason="", readiness=false. Elapsed: 6.049622229s
Jan 21 00:59:22.327: INFO: Pod "busybox-readonly-false-b4881089-8290-4c20-941b-fefb0ddf55db": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.058622063s
Jan 21 00:59:22.328: INFO: Pod "busybox-readonly-false-b4881089-8290-4c20-941b-fefb0ddf55db" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:59:22.328: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2637" for this suite.

• [SLOW TEST:8.224 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3082,"failed":0}
SSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:59:22.343: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:331
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a replication controller
Jan 21 00:59:22.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2076'
Jan 21 00:59:22.936: INFO: stderr: ""
Jan 21 00:59:22.936: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Jan 21 00:59:22.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2076'
Jan 21 00:59:23.124: INFO: stderr: ""
Jan 21 00:59:23.124: INFO: stdout: "update-demo-nautilus-4n7gh update-demo-nautilus-g9rl4 "
Jan 21 00:59:23.124: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4n7gh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2076'
Jan 21 00:59:23.313: INFO: stderr: ""
Jan 21 00:59:23.313: INFO: stdout: ""
Jan 21 00:59:23.313: INFO: update-demo-nautilus-4n7gh is created but not running
Jan 21 00:59:28.314: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2076'
Jan 21 00:59:28.861: INFO: stderr: ""
Jan 21 00:59:28.861: INFO: stdout: "update-demo-nautilus-4n7gh update-demo-nautilus-g9rl4 "
Jan 21 00:59:28.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4n7gh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2076'
Jan 21 00:59:29.505: INFO: stderr: ""
Jan 21 00:59:29.505: INFO: stdout: ""
Jan 21 00:59:29.505: INFO: update-demo-nautilus-4n7gh is created but not running
Jan 21 00:59:34.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-2076'
Jan 21 00:59:34.697: INFO: stderr: ""
Jan 21 00:59:34.697: INFO: stdout: "update-demo-nautilus-4n7gh update-demo-nautilus-g9rl4 "
Jan 21 00:59:34.698: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4n7gh -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2076'
Jan 21 00:59:34.835: INFO: stderr: ""
Jan 21 00:59:34.835: INFO: stdout: "true"
Jan 21 00:59:34.835: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4n7gh -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2076'
Jan 21 00:59:34.992: INFO: stderr: ""
Jan 21 00:59:34.992: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 21 00:59:34.992: INFO: validating pod update-demo-nautilus-4n7gh
Jan 21 00:59:34.997: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 21 00:59:34.997: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 21 00:59:34.997: INFO: update-demo-nautilus-4n7gh is verified up and running
Jan 21 00:59:34.997: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g9rl4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2076'
Jan 21 00:59:35.084: INFO: stderr: ""
Jan 21 00:59:35.084: INFO: stdout: "true"
Jan 21 00:59:35.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-g9rl4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2076'
Jan 21 00:59:35.227: INFO: stderr: ""
Jan 21 00:59:35.227: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Jan 21 00:59:35.227: INFO: validating pod update-demo-nautilus-g9rl4
Jan 21 00:59:35.236: INFO: got data: {
  "image": "nautilus.jpg"
}

Jan 21 00:59:35.237: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Jan 21 00:59:35.237: INFO: update-demo-nautilus-g9rl4 is verified up and running
STEP: using delete to clean up resources
Jan 21 00:59:35.237: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2076'
Jan 21 00:59:35.356: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Jan 21 00:59:35.357: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Jan 21 00:59:35.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2076'
Jan 21 00:59:35.482: INFO: stderr: "No resources found in kubectl-2076 namespace.\n"
Jan 21 00:59:35.482: INFO: stdout: ""
Jan 21 00:59:35.482: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2076 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 21 00:59:35.594: INFO: stderr: ""
Jan 21 00:59:35.594: INFO: stdout: "update-demo-nautilus-4n7gh\nupdate-demo-nautilus-g9rl4\n"
Jan 21 00:59:36.096: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-2076'
Jan 21 00:59:36.952: INFO: stderr: "No resources found in kubectl-2076 namespace.\n"
Jan 21 00:59:36.952: INFO: stdout: ""
Jan 21 00:59:36.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-2076 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Jan 21 00:59:37.187: INFO: stderr: ""
Jan 21 00:59:37.187: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:59:37.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2076" for this suite.

• [SLOW TEST:14.861 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:329
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller  [Conformance]","total":278,"completed":190,"skipped":3087,"failed":0}
SSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:59:37.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-28ef71e6-2476-4958-aeba-60ab699dc083
STEP: Creating a pod to test consume configMaps
Jan 21 00:59:37.867: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-8ce369ac-d35f-41f1-b022-df8e682d6e7c" in namespace "projected-5140" to be "success or failure"
Jan 21 00:59:37.977: INFO: Pod "pod-projected-configmaps-8ce369ac-d35f-41f1-b022-df8e682d6e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 109.794529ms
Jan 21 00:59:40.068: INFO: Pod "pod-projected-configmaps-8ce369ac-d35f-41f1-b022-df8e682d6e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.200774135s
Jan 21 00:59:42.076: INFO: Pod "pod-projected-configmaps-8ce369ac-d35f-41f1-b022-df8e682d6e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.208735367s
Jan 21 00:59:44.084: INFO: Pod "pod-projected-configmaps-8ce369ac-d35f-41f1-b022-df8e682d6e7c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.216670009s
Jan 21 00:59:46.090: INFO: Pod "pod-projected-configmaps-8ce369ac-d35f-41f1-b022-df8e682d6e7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.222952071s
STEP: Saw pod success
Jan 21 00:59:46.090: INFO: Pod "pod-projected-configmaps-8ce369ac-d35f-41f1-b022-df8e682d6e7c" satisfied condition "success or failure"
Jan 21 00:59:46.094: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-8ce369ac-d35f-41f1-b022-df8e682d6e7c container projected-configmap-volume-test: 
STEP: delete the pod
Jan 21 00:59:46.196: INFO: Waiting for pod pod-projected-configmaps-8ce369ac-d35f-41f1-b022-df8e682d6e7c to disappear
Jan 21 00:59:46.209: INFO: Pod pod-projected-configmaps-8ce369ac-d35f-41f1-b022-df8e682d6e7c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 00:59:46.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5140" for this suite.

• [SLOW TEST:9.014 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3090,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 00:59:46.222: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-9966
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 21 00:59:46.286: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 21 01:00:20.753: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9966 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 21 01:00:20.754: INFO: >>> kubeConfig: /root/.kube/config
I0121 01:00:20.820913       8 log.go:172] (0xc00490a420) (0xc001523ae0) Create stream
I0121 01:00:20.821485       8 log.go:172] (0xc00490a420) (0xc001523ae0) Stream added, broadcasting: 1
I0121 01:00:20.831496       8 log.go:172] (0xc00490a420) Reply frame received for 1
I0121 01:00:20.831557       8 log.go:172] (0xc00490a420) (0xc000f0ec80) Create stream
I0121 01:00:20.831572       8 log.go:172] (0xc00490a420) (0xc000f0ec80) Stream added, broadcasting: 3
I0121 01:00:20.833131       8 log.go:172] (0xc00490a420) Reply frame received for 3
I0121 01:00:20.833170       8 log.go:172] (0xc00490a420) (0xc000f0efa0) Create stream
I0121 01:00:20.833192       8 log.go:172] (0xc00490a420) (0xc000f0efa0) Stream added, broadcasting: 5
I0121 01:00:20.835374       8 log.go:172] (0xc00490a420) Reply frame received for 5
I0121 01:00:21.948690       8 log.go:172] (0xc00490a420) Data frame received for 3
I0121 01:00:21.949064       8 log.go:172] (0xc000f0ec80) (3) Data frame handling
I0121 01:00:21.949229       8 log.go:172] (0xc000f0ec80) (3) Data frame sent
I0121 01:00:22.032076       8 log.go:172] (0xc00490a420) (0xc000f0ec80) Stream removed, broadcasting: 3
I0121 01:00:22.032178       8 log.go:172] (0xc00490a420) Data frame received for 1
I0121 01:00:22.032244       8 log.go:172] (0xc00490a420) (0xc000f0efa0) Stream removed, broadcasting: 5
I0121 01:00:22.032337       8 log.go:172] (0xc001523ae0) (1) Data frame handling
I0121 01:00:22.032392       8 log.go:172] (0xc001523ae0) (1) Data frame sent
I0121 01:00:22.032415       8 log.go:172] (0xc00490a420) (0xc001523ae0) Stream removed, broadcasting: 1
I0121 01:00:22.032643       8 log.go:172] (0xc00490a420) Go away received
I0121 01:00:22.032843       8 log.go:172] (0xc00490a420) (0xc001523ae0) Stream removed, broadcasting: 1
I0121 01:00:22.032898       8 log.go:172] (0xc00490a420) (0xc000f0ec80) Stream removed, broadcasting: 3
I0121 01:00:22.032949       8 log.go:172] (0xc00490a420) (0xc000f0efa0) Stream removed, broadcasting: 5
Jan 21 01:00:22.033: INFO: Found all expected endpoints: [netserver-0]
Jan 21 01:00:22.041: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-9966 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 21 01:00:22.041: INFO: >>> kubeConfig: /root/.kube/config
I0121 01:00:22.107901       8 log.go:172] (0xc002c373f0) (0xc000495680) Create stream
I0121 01:00:22.108216       8 log.go:172] (0xc002c373f0) (0xc000495680) Stream added, broadcasting: 1
I0121 01:00:22.118403       8 log.go:172] (0xc002c373f0) Reply frame received for 1
I0121 01:00:22.118647       8 log.go:172] (0xc002c373f0) (0xc000f0f4a0) Create stream
I0121 01:00:22.118672       8 log.go:172] (0xc002c373f0) (0xc000f0f4a0) Stream added, broadcasting: 3
I0121 01:00:22.122127       8 log.go:172] (0xc002c373f0) Reply frame received for 3
I0121 01:00:22.122156       8 log.go:172] (0xc002c373f0) (0xc0004957c0) Create stream
I0121 01:00:22.122169       8 log.go:172] (0xc002c373f0) (0xc0004957c0) Stream added, broadcasting: 5
I0121 01:00:22.123743       8 log.go:172] (0xc002c373f0) Reply frame received for 5
I0121 01:00:23.218177       8 log.go:172] (0xc002c373f0) Data frame received for 3
I0121 01:00:23.218483       8 log.go:172] (0xc000f0f4a0) (3) Data frame handling
I0121 01:00:23.218633       8 log.go:172] (0xc000f0f4a0) (3) Data frame sent
I0121 01:00:23.327421       8 log.go:172] (0xc002c373f0) (0xc000f0f4a0) Stream removed, broadcasting: 3
I0121 01:00:23.327604       8 log.go:172] (0xc002c373f0) Data frame received for 1
I0121 01:00:23.327645       8 log.go:172] (0xc000495680) (1) Data frame handling
I0121 01:00:23.327671       8 log.go:172] (0xc000495680) (1) Data frame sent
I0121 01:00:23.327692       8 log.go:172] (0xc002c373f0) (0xc000495680) Stream removed, broadcasting: 1
I0121 01:00:23.328152       8 log.go:172] (0xc002c373f0) (0xc0004957c0) Stream removed, broadcasting: 5
I0121 01:00:23.328378       8 log.go:172] (0xc002c373f0) Go away received
I0121 01:00:23.328601       8 log.go:172] (0xc002c373f0) (0xc000495680) Stream removed, broadcasting: 1
I0121 01:00:23.328732       8 log.go:172] (0xc002c373f0) (0xc000f0f4a0) Stream removed, broadcasting: 3
I0121 01:00:23.328746       8 log.go:172] (0xc002c373f0) (0xc0004957c0) Stream removed, broadcasting: 5
Jan 21 01:00:23.328: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:00:23.329: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9966" for this suite.

• [SLOW TEST:37.125 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3164,"failed":0}
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:00:23.348: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 21 01:00:24.649: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 21 01:00:26.669: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:00:28.724: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:00:30.787: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:00:32.674: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:00:34.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165224, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 21 01:00:37.733: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:00:37.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4523" for this suite.
STEP: Destroying namespace "webhook-4523-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:14.875 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":193,"skipped":3173,"failed":0}
SSSSSSSSSSSS
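
Note on the webhook test above: "Registering the mutating pod webhook via the AdmissionRegistration API" amounts to creating a MutatingWebhookConfiguration that points the API server at the test's webhook service for pod CREATEs. A minimal sketch of that object, assuming the admissionregistration.k8s.io/v1beta1 types served by the v1.17 cluster under test (the webhook name, service path, and failure policy here are illustrative, not read from this log):

    package main

    import (
        "fmt"

        admissionv1beta1 "k8s.io/api/admissionregistration/v1beta1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // mutatingPodWebhook builds a configuration that asks the API server to
    // call the test's webhook service for every pod CREATE.
    func mutatingPodWebhook(caBundle []byte) *admissionv1beta1.MutatingWebhookConfiguration {
        path := "/mutating-pods"                 // illustrative service path
        failurePolicy := admissionv1beta1.Ignore // don't block pods if the hook is down
        return &admissionv1beta1.MutatingWebhookConfiguration{
            ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-webhook"},
            Webhooks: []admissionv1beta1.MutatingWebhook{{
                Name: "pod-defaulter.example.com", // illustrative
                Rules: []admissionv1beta1.RuleWithOperations{{
                    Operations: []admissionv1beta1.OperationType{admissionv1beta1.Create},
                    Rule: admissionv1beta1.Rule{
                        APIGroups:   []string{""},
                        APIVersions: []string{"v1"},
                        Resources:   []string{"pods"},
                    },
                }},
                ClientConfig: admissionv1beta1.WebhookClientConfig{
                    Service: &admissionv1beta1.ServiceReference{
                        Namespace: "webhook-4523", // namespace from the log above
                        Name:      "e2e-test-webhook",
                        Path:      &path,
                    },
                    CABundle: caBundle, // cert created in "Setting up server cert"
                },
                FailurePolicy: &failurePolicy,
            }},
        }
    }

    func main() {
        fmt.Println(mutatingPodWebhook(nil).Name)
    }
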
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:00:38.224: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-814b24c8-61fc-4b03-ad4f-51f5cffef85e
STEP: Creating a pod to test consume configMaps
Jan 21 01:00:38.408: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b383b64b-ee62-4a5d-a3f5-4393c7926ee0" in namespace "projected-1578" to be "success or failure"
Jan 21 01:00:38.430: INFO: Pod "pod-projected-configmaps-b383b64b-ee62-4a5d-a3f5-4393c7926ee0": Phase="Pending", Reason="", readiness=false. Elapsed: 22.655129ms
Jan 21 01:00:40.443: INFO: Pod "pod-projected-configmaps-b383b64b-ee62-4a5d-a3f5-4393c7926ee0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03499851s
Jan 21 01:00:42.451: INFO: Pod "pod-projected-configmaps-b383b64b-ee62-4a5d-a3f5-4393c7926ee0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043602158s
Jan 21 01:00:44.464: INFO: Pod "pod-projected-configmaps-b383b64b-ee62-4a5d-a3f5-4393c7926ee0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056582751s
Jan 21 01:00:46.476: INFO: Pod "pod-projected-configmaps-b383b64b-ee62-4a5d-a3f5-4393c7926ee0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067834548s
Jan 21 01:00:48.487: INFO: Pod "pod-projected-configmaps-b383b64b-ee62-4a5d-a3f5-4393c7926ee0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.078877285s
Jan 21 01:00:50.498: INFO: Pod "pod-projected-configmaps-b383b64b-ee62-4a5d-a3f5-4393c7926ee0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.089944157s
STEP: Saw pod success
Jan 21 01:00:50.498: INFO: Pod "pod-projected-configmaps-b383b64b-ee62-4a5d-a3f5-4393c7926ee0" satisfied condition "success or failure"
Jan 21 01:00:50.503: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-b383b64b-ee62-4a5d-a3f5-4393c7926ee0 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 21 01:00:50.574: INFO: Waiting for pod pod-projected-configmaps-b383b64b-ee62-4a5d-a3f5-4393c7926ee0 to disappear
Jan 21 01:00:50.580: INFO: Pod pod-projected-configmaps-b383b64b-ee62-4a5d-a3f5-4393c7926ee0 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:00:50.580: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1578" for this suite.

• [SLOW TEST:12.370 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":194,"skipped":3185,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
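
Note on the test above: "consumable in multiple volumes in the same pod" means one ConfigMap is mounted through two projected volumes and read from both mount paths. A minimal sketch of such a pod spec using the core/v1 types (volume names, mount paths, image, and command are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    // twoVolumePod mounts the same ConfigMap through two projected volumes.
    func twoVolumePod(configMapName string) *corev1.Pod {
        projected := func() corev1.VolumeSource {
            return corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        ConfigMap: &corev1.ConfigMapProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
                        },
                    }},
                },
            }
        }
        return &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes: []corev1.Volume{
                    {Name: "projected-configmap-volume-1", VolumeSource: projected()},
                    {Name: "projected-configmap-volume-2", VolumeSource: projected()},
                },
                Containers: []corev1.Container{{
                    Name:    "projected-configmap-volume-test",
                    Image:   "busybox", // illustrative image
                    Command: []string{"cat", "/etc/projected-configmap-volume-1/data-1"},
                    VolumeMounts: []corev1.VolumeMount{
                        {Name: "projected-configmap-volume-1", MountPath: "/etc/projected-configmap-volume-1", ReadOnly: true},
                        {Name: "projected-configmap-volume-2", MountPath: "/etc/projected-configmap-volume-2", ReadOnly: true},
                    },
                }},
            },
        }
    }

    func main() {
        fmt.Println(twoVolumePod("projected-configmap-test-volume-814b24c8-61fc-4b03-ad4f-51f5cffef85e").Name)
    }
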
------------------------------
[sig-network] DNS 
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:00:50.598: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy:
for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5062 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5062;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5062 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5062;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5062.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-5062.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5062.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-5062.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5062.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5062.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5062.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-5062.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5062.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-5062.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5062.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 10.195.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.195.10_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 10.195.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.195.10_tcp@PTR;
  sleep 1;
done

STEP: Running these commands on jessie:
for i in `seq 1 600`; do
  check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;
  check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5062 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5062;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5062 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5062;
  check="$$(dig +notcp +noall +answer +search dns-test-service.dns-5062.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-5062.svc;
  check="$$(dig +tcp +noall +answer +search dns-test-service.dns-5062.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-5062.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-5062.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-5062.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-5062.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc;
  check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-5062.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-5062.svc;
  check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-5062.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-5062.svc;
  podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5062.pod.cluster.local"}');
  check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;
  check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;
  check="$$(dig +notcp +noall +answer +search 10.195.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.195.10_udp@PTR;
  check="$$(dig +tcp +noall +answer +search 10.195.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.195.10_tcp@PTR;
  sleep 1;
done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 21 01:00:58.973: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:00:58.978: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:00:58.984: INFO: Unable to read wheezy_udp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:00:58.989: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:00:58.993: INFO: Unable to read wheezy_udp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:00:59.000: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:00:59.006: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:00:59.018: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:00:59.071: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:00:59.074: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:00:59.077: INFO: Unable to read jessie_udp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:00:59.081: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:00:59.087: INFO: Unable to read jessie_udp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:00:59.091: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:00:59.095: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:00:59.099: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:00:59.120: INFO: Lookups using dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5062 wheezy_tcp@dns-test-service.dns-5062 wheezy_udp@dns-test-service.dns-5062.svc wheezy_tcp@dns-test-service.dns-5062.svc wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5062 jessie_tcp@dns-test-service.dns-5062 jessie_udp@dns-test-service.dns-5062.svc jessie_tcp@dns-test-service.dns-5062.svc jessie_udp@_http._tcp.dns-test-service.dns-5062.svc jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc]

Jan 21 01:01:04.137: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:04.144: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:04.149: INFO: Unable to read wheezy_udp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:04.161: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:04.167: INFO: Unable to read wheezy_udp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:04.175: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:04.178: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:04.187: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:04.220: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:04.224: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:04.230: INFO: Unable to read jessie_udp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:04.236: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:04.241: INFO: Unable to read jessie_udp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:04.245: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:04.249: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:04.254: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:04.352: INFO: Lookups using dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5062 wheezy_tcp@dns-test-service.dns-5062 wheezy_udp@dns-test-service.dns-5062.svc wheezy_tcp@dns-test-service.dns-5062.svc wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5062 jessie_tcp@dns-test-service.dns-5062 jessie_udp@dns-test-service.dns-5062.svc jessie_tcp@dns-test-service.dns-5062.svc jessie_udp@_http._tcp.dns-test-service.dns-5062.svc jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc]

Jan 21 01:01:09.132: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:09.140: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:09.149: INFO: Unable to read wheezy_udp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:09.156: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:09.165: INFO: Unable to read wheezy_udp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:09.169: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:09.173: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:09.177: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:09.229: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:09.234: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:09.239: INFO: Unable to read jessie_udp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:09.243: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:09.248: INFO: Unable to read jessie_udp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:09.251: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:09.256: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:09.260: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:09.295: INFO: Lookups using dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5062 wheezy_tcp@dns-test-service.dns-5062 wheezy_udp@dns-test-service.dns-5062.svc wheezy_tcp@dns-test-service.dns-5062.svc wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5062 jessie_tcp@dns-test-service.dns-5062 jessie_udp@dns-test-service.dns-5062.svc jessie_tcp@dns-test-service.dns-5062.svc jessie_udp@_http._tcp.dns-test-service.dns-5062.svc jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc]

Jan 21 01:01:14.131: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:14.137: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:14.142: INFO: Unable to read wheezy_udp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:14.150: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:14.156: INFO: Unable to read wheezy_udp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:14.161: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:14.166: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:14.171: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:14.240: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:14.248: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:14.253: INFO: Unable to read jessie_udp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:14.257: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:14.261: INFO: Unable to read jessie_udp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:14.266: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:14.269: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:14.276: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:14.305: INFO: Lookups using dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5062 wheezy_tcp@dns-test-service.dns-5062 wheezy_udp@dns-test-service.dns-5062.svc wheezy_tcp@dns-test-service.dns-5062.svc wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5062 jessie_tcp@dns-test-service.dns-5062 jessie_udp@dns-test-service.dns-5062.svc jessie_tcp@dns-test-service.dns-5062.svc jessie_udp@_http._tcp.dns-test-service.dns-5062.svc jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc]

Jan 21 01:01:19.136: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:19.143: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:19.151: INFO: Unable to read wheezy_udp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:19.157: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:19.161: INFO: Unable to read wheezy_udp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:19.166: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:19.171: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:19.176: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:19.230: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:19.237: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:19.241: INFO: Unable to read jessie_udp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:19.244: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:19.247: INFO: Unable to read jessie_udp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:19.252: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:19.257: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:19.261: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:19.297: INFO: Lookups using dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5062 wheezy_tcp@dns-test-service.dns-5062 wheezy_udp@dns-test-service.dns-5062.svc wheezy_tcp@dns-test-service.dns-5062.svc wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5062 jessie_tcp@dns-test-service.dns-5062 jessie_udp@dns-test-service.dns-5062.svc jessie_tcp@dns-test-service.dns-5062.svc jessie_udp@_http._tcp.dns-test-service.dns-5062.svc jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc]

Jan 21 01:01:24.133: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:24.138: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:24.141: INFO: Unable to read wheezy_udp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:24.144: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:24.147: INFO: Unable to read wheezy_udp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:24.151: INFO: Unable to read wheezy_tcp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:24.179: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:24.183: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:24.206: INFO: Unable to read jessie_udp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:24.209: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:24.213: INFO: Unable to read jessie_udp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:24.216: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062 from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:24.218: INFO: Unable to read jessie_udp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:24.221: INFO: Unable to read jessie_tcp@dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:24.224: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:24.227: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:24.242: INFO: Lookups using dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-5062 wheezy_tcp@dns-test-service.dns-5062 wheezy_udp@dns-test-service.dns-5062.svc wheezy_tcp@dns-test-service.dns-5062.svc wheezy_udp@_http._tcp.dns-test-service.dns-5062.svc wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-5062 jessie_tcp@dns-test-service.dns-5062 jessie_udp@dns-test-service.dns-5062.svc jessie_tcp@dns-test-service.dns-5062.svc jessie_udp@_http._tcp.dns-test-service.dns-5062.svc jessie_tcp@_http._tcp.dns-test-service.dns-5062.svc]

Jan 21 01:01:29.190: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:29.299: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-5062.svc from pod dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3: the server could not find the requested resource (get pods dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3)
Jan 21 01:01:29.348: INFO: Lookups using dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3 failed for: [wheezy_tcp@_http._tcp.dns-test-service.dns-5062.svc jessie_udp@_http._tcp.dns-test-service.dns-5062.svc]

Jan 21 01:01:34.298: INFO: DNS probes using dns-5062/dns-test-7d33a741-57d8-4855-b25c-cbfc29a22bf3 succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:01:34.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-5062" for this suite.

• [SLOW TEST:44.066 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":195,"skipped":3245,"failed":0}
SSSSSSSSS
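
Note on the DNS test above: partially qualified names like "dns-test-service" and "dns-test-service.dns-5062.svc" resolve because the probe pod's resolv.conf carries a namespace-scoped search path, and the dig commands pass +search to use it. A minimal illustration of the expansion order (the search domains shown are the conventional ones for a pod in namespace dns-5062 with cluster domain cluster.local, inferred rather than read from this log):

    package main

    import "fmt"

    func main() {
        // Typical pod search domains for namespace dns-5062.
        search := []string{"dns-5062.svc.cluster.local", "svc.cluster.local", "cluster.local"}
        partial := "dns-test-service" // the partially qualified name under test
        for _, d := range search {
            fmt.Printf("%s.%s\n", partial, d) // candidate FQDNs tried, in order
        }
    }

The first candidate, dns-test-service.dns-5062.svc.cluster.local, is the headless service created by the test, which is why the lookups succeed once its endpoints are published.
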
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:01:34.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-ab80a87c-8130-4440-b213-7a5066f82426
STEP: Creating a pod to test consume configMaps
Jan 21 01:01:34.969: INFO: Waiting up to 5m0s for pod "pod-configmaps-c2bbc7e4-cb1d-4911-89f4-f0fb584f2e55" in namespace "configmap-2653" to be "success or failure"
Jan 21 01:01:34.990: INFO: Pod "pod-configmaps-c2bbc7e4-cb1d-4911-89f4-f0fb584f2e55": Phase="Pending", Reason="", readiness=false. Elapsed: 20.528696ms
Jan 21 01:01:36.999: INFO: Pod "pod-configmaps-c2bbc7e4-cb1d-4911-89f4-f0fb584f2e55": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03004431s
Jan 21 01:01:39.009: INFO: Pod "pod-configmaps-c2bbc7e4-cb1d-4911-89f4-f0fb584f2e55": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03996096s
Jan 21 01:01:41.015: INFO: Pod "pod-configmaps-c2bbc7e4-cb1d-4911-89f4-f0fb584f2e55": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046064918s
Jan 21 01:01:43.022: INFO: Pod "pod-configmaps-c2bbc7e4-cb1d-4911-89f4-f0fb584f2e55": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05269132s
Jan 21 01:01:45.032: INFO: Pod "pod-configmaps-c2bbc7e4-cb1d-4911-89f4-f0fb584f2e55": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.062448605s
STEP: Saw pod success
Jan 21 01:01:45.032: INFO: Pod "pod-configmaps-c2bbc7e4-cb1d-4911-89f4-f0fb584f2e55" satisfied condition "success or failure"
Jan 21 01:01:45.037: INFO: Trying to get logs from node jerma-node pod pod-configmaps-c2bbc7e4-cb1d-4911-89f4-f0fb584f2e55 container configmap-volume-test: 
STEP: delete the pod
Jan 21 01:01:45.185: INFO: Waiting for pod pod-configmaps-c2bbc7e4-cb1d-4911-89f4-f0fb584f2e55 to disappear
Jan 21 01:01:45.192: INFO: Pod pod-configmaps-c2bbc7e4-cb1d-4911-89f4-f0fb584f2e55 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:01:45.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2653" for this suite.

• [SLOW TEST:10.539 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":196,"skipped":3254,"failed":0}
SSSSSSSSSSSS
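
Note on the test above: "volume with mappings" means the ConfigMap volume uses items to project a specific key to a custom relative path instead of the default one-file-per-key layout. A minimal sketch of such a volume using the core/v1 types (the key and path names are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // mappedConfigMapVolume projects a single ConfigMap key to a custom path
    // inside the volume.
    func mappedConfigMapVolume(configMapName string) corev1.Volume {
        return corev1.Volume{
            Name: "configmap-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
                    Items: []corev1.KeyToPath{{
                        Key:  "data-2",           // illustrative key
                        Path: "path/to/data-2",   // consumed at <mountPath>/path/to/data-2
                    }},
                },
            },
        }
    }

    func main() {
        fmt.Println(mappedConfigMapVolume("configmap-test-volume-map-ab80a87c-8130-4440-b213-7a5066f82426").Name)
    }
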
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:01:45.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-189/configmap-test-a0246ddc-86ed-4db6-9352-456624d8ef91
STEP: Creating a pod to test consume configMaps
Jan 21 01:01:45.377: INFO: Waiting up to 5m0s for pod "pod-configmaps-d05ac054-29ea-488f-a3be-a80cd1fcb321" in namespace "configmap-189" to be "success or failure"
Jan 21 01:01:45.388: INFO: Pod "pod-configmaps-d05ac054-29ea-488f-a3be-a80cd1fcb321": Phase="Pending", Reason="", readiness=false. Elapsed: 10.394324ms
Jan 21 01:01:47.397: INFO: Pod "pod-configmaps-d05ac054-29ea-488f-a3be-a80cd1fcb321": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019324529s
Jan 21 01:01:49.404: INFO: Pod "pod-configmaps-d05ac054-29ea-488f-a3be-a80cd1fcb321": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026034729s
Jan 21 01:01:51.410: INFO: Pod "pod-configmaps-d05ac054-29ea-488f-a3be-a80cd1fcb321": Phase="Pending", Reason="", readiness=false. Elapsed: 6.03250902s
Jan 21 01:01:53.416: INFO: Pod "pod-configmaps-d05ac054-29ea-488f-a3be-a80cd1fcb321": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.038509652s
STEP: Saw pod success
Jan 21 01:01:53.416: INFO: Pod "pod-configmaps-d05ac054-29ea-488f-a3be-a80cd1fcb321" satisfied condition "success or failure"
Jan 21 01:01:53.420: INFO: Trying to get logs from node jerma-node pod pod-configmaps-d05ac054-29ea-488f-a3be-a80cd1fcb321 container env-test: 
STEP: delete the pod
Jan 21 01:01:53.481: INFO: Waiting for pod pod-configmaps-d05ac054-29ea-488f-a3be-a80cd1fcb321 to disappear
Jan 21 01:01:53.485: INFO: Pod pod-configmaps-d05ac054-29ea-488f-a3be-a80cd1fcb321 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:01:53.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-189" for this suite.

• [SLOW TEST:8.296 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3266,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
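
Note on the test above: consuming a ConfigMap "via environment variable" wires a single key into the container's environment with a ConfigMapKeyRef. A minimal sketch using the core/v1 types (the variable and key names are illustrative):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    // configMapEnvVar exposes one ConfigMap key as a container env var.
    func configMapEnvVar(configMapName string) corev1.EnvVar {
        return corev1.EnvVar{
            Name: "CONFIG_DATA_1", // illustrative variable name
            ValueFrom: &corev1.EnvVarSource{
                ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                    LocalObjectReference: corev1.LocalObjectReference{Name: configMapName},
                    Key:                  "data-1", // illustrative key
                },
            },
        }
    }

    func main() {
        fmt.Println(configMapEnvVar("configmap-test-a0246ddc-86ed-4db6-9352-456624d8ef91").Name)
    }
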
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:01:53.501: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:73
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 01:01:53.600: INFO: Creating deployment "webserver-deployment"
Jan 21 01:01:53.608: INFO: Waiting for observed generation 1
Jan 21 01:01:56.723: INFO: Waiting for all required pods to come up
Jan 21 01:01:57.401: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Jan 21 01:02:19.719: INFO: Waiting for deployment "webserver-deployment" to complete
Jan 21 01:02:19.729: INFO: Updating deployment "webserver-deployment" with a non-existent image
Jan 21 01:02:19.737: INFO: Updating deployment webserver-deployment
Jan 21 01:02:19.737: INFO: Waiting for observed generation 2
Jan 21 01:02:22.267: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Jan 21 01:02:22.654: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Jan 21 01:02:22.823: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 21 01:02:22.871: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Jan 21 01:02:22.871: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Jan 21 01:02:22.876: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Jan 21 01:02:22.881: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Jan 21 01:02:22.881: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Jan 21 01:02:22.888: INFO: Updating deployment webserver-deployment
Jan 21 01:02:22.888: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Jan 21 01:02:23.805: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Jan 21 01:02:27.640: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
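
The two verified replica counts follow from proportional scaling. With maxSurge=3 (see the deployment dump below), the rollout may run 3 pods over the desired count: before the scale-up the old and new replicasets hold 8 + 5 = 13 replicas (10 desired + 3 surge); scaling to 30 raises the cap to 33, so 20 extra replicas are split roughly in proportion to current size, giving 20 and 13. A sketch of that arithmetic (illustrative; the real controller distributes integer leftovers to the replica sets with the most replicas rather than using plain truncation):

    package main

    import "fmt"

    func main() {
        oldRS, newRS := int32(8), int32(5) // .spec.replicas before the scale-up
        total := oldRS + newRS             // 13 = 10 desired + 3 maxSurge
        newCap := int32(30 + 3)            // new desired (30) + maxSurge (3) = 33
        extra := newCap - total            // 20 replicas to hand out

        oldShare := extra * oldRS / total // 20*8/13 = 12 (integer division)
        newShare := extra - oldShare      // remaining 8

        fmt.Println("old replicaset:", oldRS+oldShare) // 20, as verified above
        fmt.Println("new replicaset:", newRS+newShare) // 13, as verified above
    }
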
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:67
Jan 21 01:02:31.980: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-1023 /apis/apps/v1/namespaces/deployment-1023/deployments/webserver-deployment 4c775a9d-2b08-4567-8131-9f86b18cf3ab 3305586 3 2020-01-21 01:01:53 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0038f2ac8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-01-21 01:02:23 +0000 UTC,LastTransitionTime:2020-01-21 01:02:23 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-01-21 01:02:27 +0000 UTC,LastTransitionTime:2020-01-21 01:01:53 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Jan 21 01:02:32.052: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-1023 /apis/apps/v1/namespaces/deployment-1023/replicasets/webserver-deployment-c7997dcc8 6b8c4e7c-0615-4aa8-8a73-7a22449fe860 3305583 3 2020-01-21 01:02:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 4c775a9d-2b08-4567-8131-9f86b18cf3ab 0xc0021dcaa7 0xc0021dcaa8}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0021dcb18  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 21 01:02:32.052: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Jan 21 01:02:32.052: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-1023 /apis/apps/v1/namespaces/deployment-1023/replicasets/webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 3305571 3 2020-01-21 01:01:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 4c775a9d-2b08-4567-8131-9f86b18cf3ab 0xc0021dc9e7 0xc0021dc9e8}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0021dca48  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
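
[Editor's note] Both ReplicaSets carry the deployment.kubernetes.io/desired-replicas:30 and deployment.kubernetes.io/max-replicas:33 annotations that feed the proportional math above. To inspect the same objects outside the suite, a client-go sketch along these lines should work; hedged assumptions: a client-go version whose List takes a context (v0.18+), plus the kubeconfig path, namespace, and label selector taken from this run:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumes the same kubeconfig the suite uses and a reachable cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The test selects the Deployment's ReplicaSets by its label selector.
	rss, err := cs.AppsV1().ReplicaSets("deployment-1023").List(
		context.TODO(), metav1.ListOptions{LabelSelector: "name=httpd"})
	if err != nil {
		panic(err)
	}
	for _, rs := range rss.Items {
		fmt.Printf("%s spec=%d ready=%d available=%d\n",
			rs.Name, *rs.Spec.Replicas, rs.Status.ReadyReplicas, rs.Status.AvailableReplicas)
	}
}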
Jan 21 01:02:33.088: INFO: Pod "webserver-deployment-595b5b9587-29rv7" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-29rv7 webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-29rv7 19216c71-3377-4b24-9073-36f14d7b1798 3305587 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc0038f2f27 0xc0038f2f28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-21 01:02:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.089: INFO: Pod "webserver-deployment-595b5b9587-55n64" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-55n64 webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-55n64 b32424ef-c619-4fda-9391-574ff71ea9d0 3305405 0 2020-01-21 01:01:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc0038f3087 0xc0038f3088}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:01:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-21 01:02:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:01:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.1,StartTime:2020-01-21 01:01:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-21 01:02:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://f2862e1d9184375fd305d718fd8822afd7abdb5dd0744865570d89cc36dcb159,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
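[Editor's note] The per-pod "is available" / "is not available" verdicts in this listing reduce to the pod's Ready condition: with MinReadySeconds:0 on this Deployment, a pod counts as available as soon as Ready is True. A self-contained sketch of that check (illustrative, not the framework's helper):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True -- the check
// behind the "is available" lines here (with minReadySeconds at its default
// of 0, ready and available coincide). Illustrative sketch only.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{
		{Type: corev1.PodReady, Status: corev1.ConditionFalse}, // as in the ContainerCreating pod above
	}}}
	fmt.Println(isPodReady(pod)) // false -> "is not available"
}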
Jan 21 01:02:33.090: INFO: Pod "webserver-deployment-595b5b9587-6ppzk" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-6ppzk webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-6ppzk 2e5c5d89-f50f-42ed-a45a-87bb697240fd 3305430 0 2020-01-21 01:01:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc0038f3200 0xc0038f3201}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:01:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:01:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.8,StartTime:2020-01-21 01:01:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-21 01:02:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://dcc83ea14e2bc2ea08269bf27ef821a9e1480eac14448b5c658f62c5966f86d5,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.090: INFO: Pod "webserver-deployment-595b5b9587-7gbfb" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7gbfb webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-7gbfb 256bd897-bfaa-4693-be0f-9e4c09c56153 3305412 0 2020-01-21 01:01:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc0038f3360 0xc0038f3361}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:01:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-21 01:02:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:01:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.4,StartTime:2020-01-21 01:01:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-21 01:02:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://4d357546a2c26adb5a024bcc2912b020f23649cdb60367681a477ba528e2ef1d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.091: INFO: Pod "webserver-deployment-595b5b9587-7pzj8" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7pzj8 webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-7pzj8 eb03e0b4-8fcd-4b2a-8716-d78f1c23b5c3 3305568 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc0038f34d0 0xc0038f34d1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.092: INFO: Pod "webserver-deployment-595b5b9587-96d4b" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-96d4b webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-96d4b c1b80b90-89be-414a-8f3b-1783126f76f6 3305439 0 2020-01-21 01:01:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc0038f35e7 0xc0038f35e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:01:54 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:01:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.4,StartTime:2020-01-21 01:01:54 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-21 01:02:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://c2ab5fc1c596281e6cb8d6564fb306575f0df42ffe4e1edf157b61374f77cbcf,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.092: INFO: Pod "webserver-deployment-595b5b9587-bb5c4" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bb5c4 webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-bb5c4 6db8a2a3-a0fb-46fb-9074-7b162f4063bc 3305554 0 2020-01-21 01:02:22 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc0038f3750 0xc0038f3751}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:23 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-21 01:02:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.093: INFO: Pod "webserver-deployment-595b5b9587-bls2d" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-bls2d webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-bls2d 91fd0652-cb95-494e-a674-f8f5c3848b20 3305545 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc0038f38a7 0xc0038f38a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.093: INFO: Pod "webserver-deployment-595b5b9587-c6nnh" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-c6nnh webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-c6nnh 107b1293-4000-4443-b774-6f59fa5de45b 3305565 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc0038f39b7 0xc0038f39b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.094: INFO: Pod "webserver-deployment-595b5b9587-chzp5" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-chzp5 webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-chzp5 fc10f60d-d566-4655-8ca3-85330884c2be 3305550 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc0038f3ac7 0xc0038f3ac8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.095: INFO: Pod "webserver-deployment-595b5b9587-fmx52" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-fmx52 webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-fmx52 b9ec4b31-9070-4eaf-94af-192588965a55 3305427 0 2020-01-21 01:01:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc0038f3be7 0xc0038f3be8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:01:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:01:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.5,StartTime:2020-01-21 01:01:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-21 01:02:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://af693d2b51fbc5cedfb081de72d662673578de7d75f56d638d86213907ddb7d2,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.095: INFO: Pod "webserver-deployment-595b5b9587-j2kkc" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-j2kkc webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-j2kkc 39f9e365-8902-4428-a978-c5198080c539 3305408 0 2020-01-21 01:01:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc0038f3d50 0xc0038f3d51}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:01:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2020-01-21 01:02:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:01:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.5,StartTime:2020-01-21 01:01:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-21 01:02:11 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://cf08839310cfef161698647b8b9cfb21f8d3f1ad58986b21e761a8f69e070309,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.096: INFO: Pod "webserver-deployment-595b5b9587-kwt8j" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kwt8j webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-kwt8j b7c0ecbc-5905-411c-b660-20a6586d65df 3305597 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc0038f3ec0 0xc0038f3ec1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-21 01:02:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
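Editor's note: the dump above is logged as "not available" because the pod's Ready condition is False (Reason: ContainersNotReady) while the httpd container is still in the ContainerCreating waiting state. A minimal sketch of that classification, using the k8s.io/api/core/v1 types; the helper names are illustrative, not the e2e framework's own functions:

```go
// Sketch: classify a pod like webserver-deployment-595b5b9587-kwt8j above.
// Assumes only the public k8s.io/api/core/v1 types; isPodReady and
// waitingReason are illustrative names, not framework helpers.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *v1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == v1.PodReady {
			return c.Status == v1.ConditionTrue
		}
	}
	return false
}

// waitingReason returns the Waiting reason of the first unready container,
// e.g. "ContainerCreating" for the pods in this log.
func waitingReason(pod *v1.Pod) string {
	for _, cs := range pod.Status.ContainerStatuses {
		if !cs.Ready && cs.State.Waiting != nil {
			return cs.State.Waiting.Reason
		}
	}
	return ""
}

func main() {
	// Reduced version of the status dumped above.
	pod := &v1.Pod{Status: v1.PodStatus{
		Phase: v1.PodPending,
		Conditions: []v1.PodCondition{
			{Type: v1.PodReady, Status: v1.ConditionFalse, Reason: "ContainersNotReady"},
		},
		ContainerStatuses: []v1.ContainerStatus{
			{Name: "httpd", Ready: false, State: v1.ContainerState{
				Waiting: &v1.ContainerStateWaiting{Reason: "ContainerCreating"},
			}},
		},
	}}
	fmt.Println(isPodReady(pod), waitingReason(pod)) // false ContainerCreating
}
```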
Jan 21 01:02:33.096: INFO: Pod "webserver-deployment-595b5b9587-mvbgt" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-mvbgt webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-mvbgt 99070b0b-e5f3-40fc-afc7-952bd9335fcc 3305592 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc004a84017 0xc004a84018}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-21 01:02:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.097: INFO: Pod "webserver-deployment-595b5b9587-q94jg" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-q94jg webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-q94jg 8806966b-1f09-4fcd-9ab4-c5671c6f3a58 3305548 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc004a84347 0xc004a84348}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
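Editor's note: pods q94jg (above) and r7z7x (below) carry only a PodScheduled condition, with empty HostIP and StartTime: the scheduler has placed them, but the kubelet has not reported any status yet. A small, illustrative check for that state (the function name is an assumption, not framework code):

```go
// Sketch: detect "scheduled but not yet started" pods, i.e. pods whose only
// condition is PodScheduled and whose HostIP is still unset, as in this log.
package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func scheduledButNotStarted(pod *v1.Pod) bool {
	scheduled := false
	for _, c := range pod.Status.Conditions {
		switch c.Type {
		case v1.PodScheduled:
			scheduled = c.Status == v1.ConditionTrue
		case v1.PodInitialized, v1.PodReady:
			// Any kubelet-reported condition means the pod has started.
			return false
		}
	}
	return scheduled && pod.Status.HostIP == ""
}

func main() {
	pod := &v1.Pod{Status: v1.PodStatus{
		Phase: v1.PodPending,
		Conditions: []v1.PodCondition{
			{Type: v1.PodScheduled, Status: v1.ConditionTrue},
		},
	}}
	fmt.Println(scheduledButNotStarted(pod)) // true
}
```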
Jan 21 01:02:33.097: INFO: Pod "webserver-deployment-595b5b9587-r7z7x" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-r7z7x webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-r7z7x 28d8bfee-8f07-45a4-9b5d-9c9055d46683 3305546 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc004a84507 0xc004a84508}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.098: INFO: Pod "webserver-deployment-595b5b9587-rstfd" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rstfd webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-rstfd 881f1e71-6428-47a3-b309-dec150e25dda 3305436 0 2020-01-21 01:01:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc004a846b7 0xc004a846b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:01:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:01:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.7,StartTime:2020-01-21 01:01:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-21 01:02:17 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://3b488323944da2b6a2b5f14e949b64e2edb66c5db82ced4214277f96ff789389,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
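Editor's note: in contrast, rstfd is logged as "available": Phase is Running and its Ready condition has been True since 01:02:18. A minimal sketch of deployment-style availability, assuming the usual rule that a pod must stay Ready for at least minReadySeconds; helper names are illustrative:

```go
// Sketch: deployment-style availability. A pod counts as available once its
// Ready condition has been True for at least minReadySeconds.
package main

import (
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func readyCondition(pod *v1.Pod) *v1.PodCondition {
	for i := range pod.Status.Conditions {
		if pod.Status.Conditions[i].Type == v1.PodReady {
			return &pod.Status.Conditions[i]
		}
	}
	return nil
}

func isPodAvailable(pod *v1.Pod, minReadySeconds int32, now metav1.Time) bool {
	c := readyCondition(pod)
	if c == nil || c.Status != v1.ConditionTrue {
		return false
	}
	if minReadySeconds == 0 {
		return true
	}
	readyFor := now.Time.Sub(c.LastTransitionTime.Time)
	return readyFor >= time.Duration(minReadySeconds)*time.Second
}

func main() {
	// Ready transitioned ~30s ago, as with rstfd above.
	transition := metav1.NewTime(time.Now().Add(-30 * time.Second))
	pod := &v1.Pod{Status: v1.PodStatus{
		Phase: v1.PodRunning,
		Conditions: []v1.PodCondition{{
			Type: v1.PodReady, Status: v1.ConditionTrue, LastTransitionTime: transition,
		}},
	}}
	fmt.Println(isPodAvailable(pod, 10, metav1.Now())) // true
}
```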
Jan 21 01:02:33.098: INFO: Pod "webserver-deployment-595b5b9587-tm8kw" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-tm8kw webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-tm8kw 43261da2-54f4-42d2-8711-81fd38d0acbf 3305433 0 2020-01-21 01:01:53 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc004a84830 0xc004a84831}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:01:53 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:18 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:18 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:01:53 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:10.32.0.6,StartTime:2020-01-21 01:01:53 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-21 01:02:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://73100a3586e8538994c920e59beafbf7a457bbdf3bd3b8eabd8eea8e3b13a0f3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.099: INFO: Pod "webserver-deployment-595b5b9587-wlxxj" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wlxxj webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-wlxxj b00c3cf8-327e-464c-a4b1-d0ed4b8c175c 3305584 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc004a849a0 0xc004a849a1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-21 01:02:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.099: INFO: Pod "webserver-deployment-595b5b9587-zl6w8" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-zl6w8 webserver-deployment-595b5b9587- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-595b5b9587-zl6w8 2d34618a-6b34-4e3f-b729-f7f6ebd54859 3305561 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 f43ab17f-24a6-4cd8-8fd3-8a2010758133 0xc004a84b17 0xc004a84b18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.100: INFO: Pod "webserver-deployment-c7997dcc8-6q8dk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6q8dk webserver-deployment-c7997dcc8- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-c7997dcc8-6q8dk e39c3348-bc1a-4b08-ab2b-8c971b013018 3305500 0 2020-01-21 01:02:20 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6b8c4e7c-0615-4aa8-8a73-7a22449fe860 0xc004a84c27 0xc004a84c28}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:21 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:21 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-21 01:02:21 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
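Editor's note: the pods under the second pod-template-hash (c7997dcc8) run Image webserver:404, an intentionally unpullable tag, so they can never become ready and the rollout stalls within its RollingUpdate budget; that is why old (595b5b9587) and new (c7997dcc8) pods coexist in this dump. A small sketch of how that budget resolves, assuming the usual apps/v1 semantics and an illustrative replica count (the log does not show the Deployment's spec values):

```go
// Sketch: RollingUpdate budget math. With N desired replicas, the controller
// may run up to N+maxSurge pods and must keep at least N-maxUnavailable
// available. The 25% values and replica count are assumptions for illustration.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	desired := 10 // illustrative, not taken from this log
	surge := intstr.FromString("25%")
	unavail := intstr.FromString("25%")

	// Percentages resolve with surge rounded up and unavailability rounded
	// down; errors are ignored here only because the inputs are constants.
	maxSurge, _ := intstr.GetValueFromIntOrPercent(&surge, desired, true)
	maxUnavailable, _ := intstr.GetValueFromIntOrPercent(&unavail, desired, false)

	fmt.Printf("max pods: %d, min available: %d\n",
		desired+maxSurge, desired-maxUnavailable)
	// Output: max pods: 13, min available: 8
}
```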
Jan 21 01:02:33.100: INFO: Pod "webserver-deployment-c7997dcc8-6tm9z" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-6tm9z webserver-deployment-c7997dcc8- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-c7997dcc8-6tm9z 25b568be-ad46-4857-83ab-b28430cfe084 3305559 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6b8c4e7c-0615-4aa8-8a73-7a22449fe860 0xc004a84dd7 0xc004a84dd8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.102: INFO: Pod "webserver-deployment-c7997dcc8-8p4h2" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8p4h2 webserver-deployment-c7997dcc8- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-c7997dcc8-8p4h2 53d5cd37-2cac-4b7a-ba68-609eef407065 3305493 0 2020-01-21 01:02:20 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6b8c4e7c-0615-4aa8-8a73-7a22449fe860 0xc004a84f37 0xc004a84f38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-21 01:02:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.103: INFO: Pod "webserver-deployment-c7997dcc8-fk6dg" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fk6dg webserver-deployment-c7997dcc8- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-c7997dcc8-fk6dg 382311fe-27de-4640-bbae-498408ac2844 3305471 0 2020-01-21 01:02:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6b8c4e7c-0615-4aa8-8a73-7a22449fe860 0xc004a850b7 0xc004a850b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-21 01:02:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.103: INFO: Pod "webserver-deployment-c7997dcc8-fwvst" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fwvst webserver-deployment-c7997dcc8- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-c7997dcc8-fwvst 5a47a037-6729-4206-85fa-d09d312937f6 3305472 0 2020-01-21 01:02:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6b8c4e7c-0615-4aa8-8a73-7a22449fe860 0xc004a85247 0xc004a85248}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:20 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-21 01:02:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.103: INFO: Pod "webserver-deployment-c7997dcc8-lsgwz" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-lsgwz webserver-deployment-c7997dcc8- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-c7997dcc8-lsgwz af4a9054-a451-4244-87b7-b3bad7544f69 3305570 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6b8c4e7c-0615-4aa8-8a73-7a22449fe860 0xc004a853c7 0xc004a853c8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-21 01:02:24 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.104: INFO: Pod "webserver-deployment-c7997dcc8-mcgzc" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mcgzc webserver-deployment-c7997dcc8- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-c7997dcc8-mcgzc dd226b2c-3cb5-42fe-b473-6252e0b3cd30 3305549 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6b8c4e7c-0615-4aa8-8a73-7a22449fe860 0xc004a85537 0xc004a85538}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.104: INFO: Pod "webserver-deployment-c7997dcc8-qwbdw" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-qwbdw webserver-deployment-c7997dcc8- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-c7997dcc8-qwbdw 16b408f0-a3cf-40a2-ba0e-a313e5be0c58 3305572 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6b8c4e7c-0615-4aa8-8a73-7a22449fe860 0xc004a85667 0xc004a85668}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.105: INFO: Pod "webserver-deployment-c7997dcc8-rvxsz" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rvxsz webserver-deployment-c7997dcc8- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-c7997dcc8-rvxsz 4d16242c-f0d9-46e5-9714-efb534d3c3a7 3305464 0 2020-01-21 01:02:19 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6b8c4e7c-0615-4aa8-8a73-7a22449fe860 0xc004a85797 0xc004a85798}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:19 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:19 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:19 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:,StartTime:2020-01-21 01:02:19 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.105: INFO: Pod "webserver-deployment-c7997dcc8-s5lnr" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-s5lnr webserver-deployment-c7997dcc8- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-c7997dcc8-s5lnr 184fbe5d-1ecb-4087-835b-fe98b4c42043 3305593 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6b8c4e7c-0615-4aa8-8a73-7a22449fe860 0xc004a85917 0xc004a85918}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:27 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.1.234,PodIP:,StartTime:2020-01-21 01:02:27 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.106: INFO: Pod "webserver-deployment-c7997dcc8-vr6hx" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vr6hx webserver-deployment-c7997dcc8- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-c7997dcc8-vr6hx ba3da8ba-543c-4c71-8142-8eb1fbb1f16d 3305566 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6b8c4e7c-0615-4aa8-8a73-7a22449fe860 0xc004a85a87 0xc004a85a88}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-mvvl6gufaqub,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.106: INFO: Pod "webserver-deployment-c7997dcc8-vxx2b" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vxx2b webserver-deployment-c7997dcc8- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-c7997dcc8-vxx2b 05eb1ed0-97bc-4afa-952d-be066744a706 3305551 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6b8c4e7c-0615-4aa8-8a73-7a22449fe860 0xc004a85ba7 0xc004a85ba8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Jan 21 01:02:33.106: INFO: Pod "webserver-deployment-c7997dcc8-zkncz" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zkncz webserver-deployment-c7997dcc8- deployment-1023 /api/v1/namespaces/deployment-1023/pods/webserver-deployment-c7997dcc8-zkncz 3d244de3-aeb6-49f2-81ba-d7e574518c7c 3305536 0 2020-01-21 01:02:24 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 6b8c4e7c-0615-4aa8-8a73-7a22449fe860 0xc004a85cd7 0xc004a85cd8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-j2k7p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-j2k7p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-j2k7p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:02:24 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:02:33.106: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1023" for this suite.

• [SLOW TEST:42.213 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":198,"skipped":3289,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected combined 
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:02:35.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-projected-all-test-volume-e09ae2cd-8b5e-4b09-84b4-08000191d25b
STEP: Creating secret with name secret-projected-all-test-volume-7b44fa98-be69-4f06-8178-a6711e36297e
STEP: Creating a pod to test Check all projections for projected volume plugin
Jan 21 01:02:39.140: INFO: Waiting up to 5m0s for pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3" in namespace "projected-8887" to be "success or failure"
Jan 21 01:02:39.148: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 7.879424ms
Jan 21 01:02:41.455: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.31436426s
Jan 21 01:02:44.924: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 5.783683023s
Jan 21 01:02:47.703: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 8.562062812s
Jan 21 01:02:50.581: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 11.440076317s
Jan 21 01:02:52.602: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 13.461251053s
Jan 21 01:02:57.933: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 18.792858391s
Jan 21 01:03:00.350: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 21.209204513s
Jan 21 01:03:02.835: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 23.6948194s
Jan 21 01:03:05.086: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 25.94504325s
Jan 21 01:03:08.253: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 29.112763724s
Jan 21 01:03:10.452: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 31.311714878s
Jan 21 01:03:13.706: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 34.565135082s
Jan 21 01:03:16.062: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 36.921013572s
Jan 21 01:03:18.952: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 39.811281129s
Jan 21 01:03:20.962: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3": Phase="Pending", Reason="", readiness=false. Elapsed: 41.821034968s
Jan 21 01:03:22.969: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 43.828223665s
STEP: Saw pod success
Jan 21 01:03:22.969: INFO: Pod "projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3" satisfied condition "success or failure"
Jan 21 01:03:22.972: INFO: Trying to get logs from node jerma-node pod projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3 container projected-all-volume-test: 
STEP: delete the pod
Jan 21 01:03:23.080: INFO: Waiting for pod projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3 to disappear
Jan 21 01:03:23.085: INFO: Pod projected-volume-cee573ad-3029-4bc4-983d-4915318be2c3 no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:03:23.085: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8887" for this suite.

• [SLOW TEST:47.379 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:31
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3319,"failed":0}
SSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:03:23.097: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-805.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-805.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-805.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-805.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-805.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-805.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-805.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-805.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-805.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-805.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Jan 21 01:03:33.360: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:33.368: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:33.373: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:33.379: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:33.395: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:33.400: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:33.405: INFO: Unable to read jessie_udp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:33.409: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:33.418: INFO: Lookups using dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local wheezy_udp@dns-test-service-2.dns-805.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-805.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local jessie_udp@dns-test-service-2.dns-805.svc.cluster.local jessie_tcp@dns-test-service-2.dns-805.svc.cluster.local]

Jan 21 01:03:38.435: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:38.446: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:38.451: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:38.456: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:38.488: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:38.494: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:38.501: INFO: Unable to read jessie_udp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:38.509: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:38.606: INFO: Lookups using dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local wheezy_udp@dns-test-service-2.dns-805.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-805.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local jessie_udp@dns-test-service-2.dns-805.svc.cluster.local jessie_tcp@dns-test-service-2.dns-805.svc.cluster.local]

Jan 21 01:03:43.427: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:43.432: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:43.437: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:43.442: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:43.471: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:43.474: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:43.478: INFO: Unable to read jessie_udp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:43.482: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:43.493: INFO: Lookups using dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local wheezy_udp@dns-test-service-2.dns-805.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-805.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local jessie_udp@dns-test-service-2.dns-805.svc.cluster.local jessie_tcp@dns-test-service-2.dns-805.svc.cluster.local]

Jan 21 01:03:48.430: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:48.438: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:48.444: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:48.451: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:48.469: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:48.477: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:48.483: INFO: Unable to read jessie_udp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:48.488: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:48.503: INFO: Lookups using dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local wheezy_udp@dns-test-service-2.dns-805.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-805.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local jessie_udp@dns-test-service-2.dns-805.svc.cluster.local jessie_tcp@dns-test-service-2.dns-805.svc.cluster.local]

Jan 21 01:03:53.427: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:53.434: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:53.439: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:53.443: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:53.455: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:53.458: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:53.463: INFO: Unable to read jessie_udp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:53.467: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:53.476: INFO: Lookups using dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local wheezy_udp@dns-test-service-2.dns-805.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-805.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local jessie_udp@dns-test-service-2.dns-805.svc.cluster.local jessie_tcp@dns-test-service-2.dns-805.svc.cluster.local]

Jan 21 01:03:58.430: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:58.436: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:58.441: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:58.447: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:58.464: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:58.469: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:58.474: INFO: Unable to read jessie_udp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:58.479: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-805.svc.cluster.local from pod dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1: the server could not find the requested resource (get pods dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1)
Jan 21 01:03:58.490: INFO: Lookups using dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local wheezy_udp@dns-test-service-2.dns-805.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-805.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-805.svc.cluster.local jessie_udp@dns-test-service-2.dns-805.svc.cluster.local jessie_tcp@dns-test-service-2.dns-805.svc.cluster.local]

Jan 21 01:04:03.485: INFO: DNS probes using dns-805/dns-test-e90d4cd4-eb23-4d39-91ad-ac0cfba027d1 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:04:03.769: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-805" for this suite.

• [SLOW TEST:40.732 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":200,"skipped":3328,"failed":0}
SSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:04:03.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 01:04:04.089: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-64a9d6bb-be05-49b7-9af2-b9b2a36dccb0" in namespace "security-context-test-5405" to be "success or failure"
Jan 21 01:04:04.237: INFO: Pod "busybox-privileged-false-64a9d6bb-be05-49b7-9af2-b9b2a36dccb0": Phase="Pending", Reason="", readiness=false. Elapsed: 147.24677ms
Jan 21 01:04:06.245: INFO: Pod "busybox-privileged-false-64a9d6bb-be05-49b7-9af2-b9b2a36dccb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155566588s
Jan 21 01:04:08.253: INFO: Pod "busybox-privileged-false-64a9d6bb-be05-49b7-9af2-b9b2a36dccb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.162727914s
Jan 21 01:04:10.260: INFO: Pod "busybox-privileged-false-64a9d6bb-be05-49b7-9af2-b9b2a36dccb0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.170559083s
Jan 21 01:04:12.269: INFO: Pod "busybox-privileged-false-64a9d6bb-be05-49b7-9af2-b9b2a36dccb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.179230947s
Jan 21 01:04:12.269: INFO: Pod "busybox-privileged-false-64a9d6bb-be05-49b7-9af2-b9b2a36dccb0" satisfied condition "success or failure"
Jan 21 01:04:12.290: INFO: Got logs for pod "busybox-privileged-false-64a9d6bb-be05-49b7-9af2-b9b2a36dccb0": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:04:12.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5405" for this suite.

• [SLOW TEST:8.474 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3336,"failed":0}
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:04:12.308: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod busybox-03159acd-d697-45c6-81e3-d23cd9e9232f in namespace container-probe-3706
Jan 21 01:04:20.492: INFO: Started pod busybox-03159acd-d697-45c6-81e3-d23cd9e9232f in namespace container-probe-3706
STEP: checking the pod's current state and verifying that restartCount is present
Jan 21 01:04:20.498: INFO: Initial restart count of pod busybox-03159acd-d697-45c6-81e3-d23cd9e9232f is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:08:21.775: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-3706" for this suite.

• [SLOW TEST:249.523 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":202,"skipped":3339,"failed":0}
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:08:21.832: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 21 01:08:21.951: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c191d2f7-ffec-4116-b30e-782453e7cad0" in namespace "downward-api-1527" to be "success or failure"
Jan 21 01:08:21.962: INFO: Pod "downwardapi-volume-c191d2f7-ffec-4116-b30e-782453e7cad0": Phase="Pending", Reason="", readiness=false. Elapsed: 10.75509ms
Jan 21 01:08:23.971: INFO: Pod "downwardapi-volume-c191d2f7-ffec-4116-b30e-782453e7cad0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019599593s
Jan 21 01:08:25.978: INFO: Pod "downwardapi-volume-c191d2f7-ffec-4116-b30e-782453e7cad0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026999969s
Jan 21 01:08:27.987: INFO: Pod "downwardapi-volume-c191d2f7-ffec-4116-b30e-782453e7cad0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.035655731s
Jan 21 01:08:29.995: INFO: Pod "downwardapi-volume-c191d2f7-ffec-4116-b30e-782453e7cad0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04312535s
STEP: Saw pod success
Jan 21 01:08:29.995: INFO: Pod "downwardapi-volume-c191d2f7-ffec-4116-b30e-782453e7cad0" satisfied condition "success or failure"
Jan 21 01:08:29.999: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-c191d2f7-ffec-4116-b30e-782453e7cad0 container client-container: 
STEP: delete the pod
Jan 21 01:08:30.077: INFO: Waiting for pod downwardapi-volume-c191d2f7-ffec-4116-b30e-782453e7cad0 to disappear
Jan 21 01:08:30.092: INFO: Pod downwardapi-volume-c191d2f7-ffec-4116-b30e-782453e7cad0 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:08:30.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1527" for this suite.

• [SLOW TEST:8.281 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3345,"failed":0}
SSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:08:30.114: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:08:35.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6409" for this suite.

• [SLOW TEST:5.647 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":204,"skipped":3353,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:08:35.762: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 21 01:08:36.565: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 21 01:08:38.598: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165716, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165716, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165716, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165716, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:08:40.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165716, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165716, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165716, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165716, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:08:42.628: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165716, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165716, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165716, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165716, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 21 01:08:45.625: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Jan 21 01:08:45.826: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:08:45.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3428" for this suite.
STEP: Destroying namespace "webhook-3428-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.414 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":205,"skipped":3371,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:08:46.177: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-28357425-a0a0-4fcd-8a24-e7eda8764a07
STEP: Creating a pod to test consume configMaps
Jan 21 01:08:46.287: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0f6810b6-b521-4a5e-8596-97e5a2437bff" in namespace "projected-7446" to be "success or failure"
Jan 21 01:08:46.294: INFO: Pod "pod-projected-configmaps-0f6810b6-b521-4a5e-8596-97e5a2437bff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.853944ms
Jan 21 01:08:48.302: INFO: Pod "pod-projected-configmaps-0f6810b6-b521-4a5e-8596-97e5a2437bff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014746865s
Jan 21 01:08:50.318: INFO: Pod "pod-projected-configmaps-0f6810b6-b521-4a5e-8596-97e5a2437bff": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030567559s
Jan 21 01:08:52.328: INFO: Pod "pod-projected-configmaps-0f6810b6-b521-4a5e-8596-97e5a2437bff": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040853646s
Jan 21 01:08:54.347: INFO: Pod "pod-projected-configmaps-0f6810b6-b521-4a5e-8596-97e5a2437bff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059759304s
STEP: Saw pod success
Jan 21 01:08:54.347: INFO: Pod "pod-projected-configmaps-0f6810b6-b521-4a5e-8596-97e5a2437bff" satisfied condition "success or failure"
Jan 21 01:08:54.351: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-0f6810b6-b521-4a5e-8596-97e5a2437bff container projected-configmap-volume-test: 
STEP: delete the pod
Jan 21 01:08:54.385: INFO: Waiting for pod pod-projected-configmaps-0f6810b6-b521-4a5e-8596-97e5a2437bff to disappear
Jan 21 01:08:54.397: INFO: Pod pod-projected-configmaps-0f6810b6-b521-4a5e-8596-97e5a2437bff no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:08:54.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7446" for this suite.

• [SLOW TEST:8.231 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3382,"failed":0}
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:08:54.410: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 01:08:54.492: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:08:55.250: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-3093" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works  [Conformance]","total":278,"completed":207,"skipped":3392,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:08:55.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 01:08:55.613: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 21 01:08:59.105: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6807 create -f -'
Jan 21 01:09:02.618: INFO: stderr: ""
Jan 21 01:09:02.618: INFO: stdout: "e2e-test-crd-publish-openapi-9885-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 21 01:09:02.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6807 delete e2e-test-crd-publish-openapi-9885-crds test-cr'
Jan 21 01:09:02.749: INFO: stderr: ""
Jan 21 01:09:02.749: INFO: stdout: "e2e-test-crd-publish-openapi-9885-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Jan 21 01:09:02.749: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6807 apply -f -'
Jan 21 01:09:03.240: INFO: stderr: ""
Jan 21 01:09:03.241: INFO: stdout: "e2e-test-crd-publish-openapi-9885-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Jan 21 01:09:03.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6807 delete e2e-test-crd-publish-openapi-9885-crds test-cr'
Jan 21 01:09:03.400: INFO: stderr: ""
Jan 21 01:09:03.401: INFO: stdout: "e2e-test-crd-publish-openapi-9885-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 21 01:09:03.401: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-9885-crds'
Jan 21 01:09:03.788: INFO: stderr: ""
Jan 21 01:09:03.788: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-9885-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:09:07.679: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6807" for this suite.

• [SLOW TEST:12.315 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":208,"skipped":3416,"failed":0}
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:09:07.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 01:09:07.882: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Jan 21 01:09:08.047: INFO: Number of nodes with available pods: 0
Jan 21 01:09:08.048: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:09:09.059: INFO: Number of nodes with available pods: 0
Jan 21 01:09:09.060: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:09:10.149: INFO: Number of nodes with available pods: 0
Jan 21 01:09:10.149: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:09:11.126: INFO: Number of nodes with available pods: 0
Jan 21 01:09:11.126: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:09:12.062: INFO: Number of nodes with available pods: 0
Jan 21 01:09:12.062: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:09:14.690: INFO: Number of nodes with available pods: 0
Jan 21 01:09:14.690: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:09:15.470: INFO: Number of nodes with available pods: 1
Jan 21 01:09:15.470: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 01:09:16.098: INFO: Number of nodes with available pods: 1
Jan 21 01:09:16.098: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 01:09:17.066: INFO: Number of nodes with available pods: 2
Jan 21 01:09:17.066: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Jan 21 01:09:17.159: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:17.159: INFO: Wrong image for pod: daemon-set-z6kzj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:18.181: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:18.182: INFO: Wrong image for pod: daemon-set-z6kzj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:19.706: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:19.706: INFO: Wrong image for pod: daemon-set-z6kzj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:20.179: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:20.179: INFO: Wrong image for pod: daemon-set-z6kzj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:21.189: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:21.189: INFO: Wrong image for pod: daemon-set-z6kzj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:22.174: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:22.174: INFO: Wrong image for pod: daemon-set-z6kzj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:23.188: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:23.188: INFO: Wrong image for pod: daemon-set-z6kzj. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:23.188: INFO: Pod daemon-set-z6kzj is not available
Jan 21 01:09:24.173: INFO: Pod daemon-set-4k2nk is not available
Jan 21 01:09:24.173: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:25.176: INFO: Pod daemon-set-4k2nk is not available
Jan 21 01:09:25.176: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:26.264: INFO: Pod daemon-set-4k2nk is not available
Jan 21 01:09:26.265: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:27.179: INFO: Pod daemon-set-4k2nk is not available
Jan 21 01:09:27.180: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:28.678: INFO: Pod daemon-set-4k2nk is not available
Jan 21 01:09:28.679: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:29.489: INFO: Pod daemon-set-4k2nk is not available
Jan 21 01:09:29.489: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:30.402: INFO: Pod daemon-set-4k2nk is not available
Jan 21 01:09:30.402: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:31.272: INFO: Pod daemon-set-4k2nk is not available
Jan 21 01:09:31.272: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:32.174: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:33.176: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:34.175: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:35.176: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:36.179: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:36.180: INFO: Pod daemon-set-fc4kl is not available
Jan 21 01:09:37.177: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:37.177: INFO: Pod daemon-set-fc4kl is not available
Jan 21 01:09:38.174: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:38.175: INFO: Pod daemon-set-fc4kl is not available
Jan 21 01:09:39.181: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:39.182: INFO: Pod daemon-set-fc4kl is not available
Jan 21 01:09:40.174: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:40.174: INFO: Pod daemon-set-fc4kl is not available
Jan 21 01:09:41.174: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:41.174: INFO: Pod daemon-set-fc4kl is not available
Jan 21 01:09:42.173: INFO: Wrong image for pod: daemon-set-fc4kl. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine.
Jan 21 01:09:42.173: INFO: Pod daemon-set-fc4kl is not available
Jan 21 01:09:43.175: INFO: Pod daemon-set-htc2q is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Jan 21 01:09:43.191: INFO: Number of nodes with available pods: 1
Jan 21 01:09:43.191: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:09:44.203: INFO: Number of nodes with available pods: 1
Jan 21 01:09:44.204: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:09:45.208: INFO: Number of nodes with available pods: 1
Jan 21 01:09:45.208: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:09:46.206: INFO: Number of nodes with available pods: 1
Jan 21 01:09:46.206: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:09:47.207: INFO: Number of nodes with available pods: 1
Jan 21 01:09:47.207: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:09:48.212: INFO: Number of nodes with available pods: 1
Jan 21 01:09:48.212: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:09:49.201: INFO: Number of nodes with available pods: 2
Jan 21 01:09:49.201: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-1604, will wait for the garbage collector to delete the pods
Jan 21 01:09:49.279: INFO: Deleting DaemonSet.extensions daemon-set took: 8.517814ms
Jan 21 01:09:49.680: INFO: Terminating DaemonSet.extensions daemon-set pods took: 401.06737ms
Jan 21 01:10:02.422: INFO: Number of nodes with available pods: 0
Jan 21 01:10:02.423: INFO: Number of running nodes: 0, number of available pods: 0
Jan 21 01:10:02.431: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-1604/daemonsets","resourceVersion":"3307233"},"items":null}

Jan 21 01:10:02.436: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-1604/pods","resourceVersion":"3307233"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:10:02.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-1604" for this suite.

• [SLOW TEST:54.759 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":209,"skipped":3423,"failed":0}
S
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:10:02.470: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap that has name configmap-test-emptyKey-67bc48b6-afbd-49ea-93e1-27b3e88ff24b
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:10:02.659: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6857" for this suite.
•{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":210,"skipped":3424,"failed":0}
SSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:10:02.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating the pod
Jan 21 01:10:11.479: INFO: Successfully updated pod "annotationupdate9ac7064c-8e82-431e-aa75-b1d1192c691d"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:10:15.592: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8350" for this suite.

• [SLOW TEST:12.926 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3429,"failed":0}
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:10:15.603: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
Jan 21 01:10:15.753: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 21 01:10:15.794: INFO: Waiting for terminating namespaces to be deleted...
Jan 21 01:10:15.797: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 21 01:10:15.805: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container statuses recorded)
Jan 21 01:10:15.805: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 21 01:10:15.805: INFO: annotationupdate9ac7064c-8e82-431e-aa75-b1d1192c691d from projected-8350 started at 2020-01-21 01:10:02 +0000 UTC (1 container statuses recorded)
Jan 21 01:10:15.805: INFO: 	Container client-container ready: true, restart count 0
Jan 21 01:10:15.805: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 21 01:10:15.805: INFO: 	Container weave ready: true, restart count 1
Jan 21 01:10:15.805: INFO: 	Container weave-npc ready: true, restart count 0
Jan 21 01:10:15.805: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 21 01:10:15.821: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 21 01:10:15.821: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 21 01:10:15.821: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 21 01:10:15.821: INFO: 	Container etcd ready: true, restart count 1
Jan 21 01:10:15.821: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 21 01:10:15.821: INFO: 	Container coredns ready: true, restart count 0
Jan 21 01:10:15.821: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container statuses recorded)
Jan 21 01:10:15.821: INFO: 	Container coredns ready: true, restart count 0
Jan 21 01:10:15.821: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 21 01:10:15.821: INFO: 	Container weave ready: true, restart count 0
Jan 21 01:10:15.821: INFO: 	Container weave-npc ready: true, restart count 0
Jan 21 01:10:15.821: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container statuses recorded)
Jan 21 01:10:15.821: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 21 01:10:15.821: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container statuses recorded)
Jan 21 01:10:15.821: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 21 01:10:15.821: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container statuses recorded)
Jan 21 01:10:15.821: INFO: 	Container kube-scheduler ready: true, restart count 3
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-mvvl6gufaqub
Jan 21 01:10:15.982: INFO: Pod coredns-6955765f44-bhnn4 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 21 01:10:15.982: INFO: Pod coredns-6955765f44-bwd85 requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 21 01:10:15.982: INFO: Pod etcd-jerma-server-mvvl6gufaqub requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 21 01:10:15.982: INFO: Pod kube-apiserver-jerma-server-mvvl6gufaqub requesting resource cpu=250m on Node jerma-server-mvvl6gufaqub
Jan 21 01:10:15.982: INFO: Pod kube-controller-manager-jerma-server-mvvl6gufaqub requesting resource cpu=200m on Node jerma-server-mvvl6gufaqub
Jan 21 01:10:15.982: INFO: Pod kube-proxy-chkps requesting resource cpu=0m on Node jerma-server-mvvl6gufaqub
Jan 21 01:10:15.982: INFO: Pod kube-proxy-dsf66 requesting resource cpu=0m on Node jerma-node
Jan 21 01:10:15.982: INFO: Pod kube-scheduler-jerma-server-mvvl6gufaqub requesting resource cpu=100m on Node jerma-server-mvvl6gufaqub
Jan 21 01:10:15.982: INFO: Pod weave-net-kz8lv requesting resource cpu=20m on Node jerma-node
Jan 21 01:10:15.982: INFO: Pod weave-net-z6tjf requesting resource cpu=20m on Node jerma-server-mvvl6gufaqub
Jan 21 01:10:15.982: INFO: Pod annotationupdate9ac7064c-8e82-431e-aa75-b1d1192c691d requesting resource cpu=0m on Node jerma-node
STEP: Starting Pods to consume most of the cluster CPU.
Jan 21 01:10:15.983: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
Jan 21 01:10:15.988: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-mvvl6gufaqub
STEP: Creating another pod that requires unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-18aadff4-bf88-41a1-8f7f-6a3a07504269.15ebc1a6b27ed214], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9560/filler-pod-18aadff4-bf88-41a1-8f7f-6a3a07504269 to jerma-server-mvvl6gufaqub]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-18aadff4-bf88-41a1-8f7f-6a3a07504269.15ebc1a7c5b017b9], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-18aadff4-bf88-41a1-8f7f-6a3a07504269.15ebc1a880faffac], Reason = [Created], Message = [Created container filler-pod-18aadff4-bf88-41a1-8f7f-6a3a07504269]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-18aadff4-bf88-41a1-8f7f-6a3a07504269.15ebc1a89b755bc7], Reason = [Started], Message = [Started container filler-pod-18aadff4-bf88-41a1-8f7f-6a3a07504269]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e30e1786-9a97-4237-9cb5-88acb86a8c45.15ebc1a6b05c0f4a], Reason = [Scheduled], Message = [Successfully assigned sched-pred-9560/filler-pod-e30e1786-9a97-4237-9cb5-88acb86a8c45 to jerma-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e30e1786-9a97-4237-9cb5-88acb86a8c45.15ebc1a77ddaaa1e], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e30e1786-9a97-4237-9cb5-88acb86a8c45.15ebc1a7dcdd337a], Reason = [Created], Message = [Created container filler-pod-e30e1786-9a97-4237-9cb5-88acb86a8c45]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-e30e1786-9a97-4237-9cb5-88acb86a8c45.15ebc1a7fed29b85], Reason = [Started], Message = [Started container filler-pod-e30e1786-9a97-4237-9cb5-88acb86a8c45]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ebc1a90e458211], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15ebc1a912557227], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-server-mvvl6gufaqub
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:10:27.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9560" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78

• [SLOW TEST:11.717 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run  [Conformance]","total":278,"completed":212,"skipped":3434,"failed":0}
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:10:27.320: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1693
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 21 01:10:27.460: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-2201'
Jan 21 01:10:27.706: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 21 01:10:27.706: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Jan 21 01:10:27.728: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Jan 21 01:10:27.736: INFO: scanned /root for discovery docs: 
Jan 21 01:10:27.736: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-2201'
Jan 21 01:10:50.614: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Jan 21 01:10:50.615: INFO: stdout: "Created e2e-test-httpd-rc-f473151a38c1fc826bc9bb022dc8e01d\nScaling up e2e-test-httpd-rc-f473151a38c1fc826bc9bb022dc8e01d from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-f473151a38c1fc826bc9bb022dc8e01d up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-f473151a38c1fc826bc9bb022dc8e01d to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Jan 21 01:10:50.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-2201'
Jan 21 01:10:50.756: INFO: stderr: ""
Jan 21 01:10:50.757: INFO: stdout: "e2e-test-httpd-rc-f473151a38c1fc826bc9bb022dc8e01d-8qwpt "
Jan 21 01:10:50.757: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-f473151a38c1fc826bc9bb022dc8e01d-8qwpt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-2201'
Jan 21 01:10:50.908: INFO: stderr: ""
Jan 21 01:10:50.908: INFO: stdout: "true"
Jan 21 01:10:50.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-f473151a38c1fc826bc9bb022dc8e01d-8qwpt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-2201'
Jan 21 01:10:51.021: INFO: stderr: ""
Jan 21 01:10:51.022: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Jan 21 01:10:51.022: INFO: e2e-test-httpd-rc-f473151a38c1fc826bc9bb022dc8e01d-8qwpt is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1699
Jan 21 01:10:51.022: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-2201'
Jan 21 01:10:51.182: INFO: stderr: ""
Jan 21 01:10:51.182: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:10:51.183: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2201" for this suite.

• [SLOW TEST:23.920 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1688
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image  [Conformance]","total":278,"completed":213,"skipped":3444,"failed":0}
SSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:10:51.242: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 21 01:10:52.184: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 21 01:10:54.202: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165852, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165852, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165852, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165852, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:10:56.213: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165852, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165852, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165852, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165852, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:10:58.212: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165852, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165852, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165852, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715165852, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 21 01:11:01.266: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:11:01.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6757" for this suite.
STEP: Destroying namespace "webhook-6757-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.215 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":214,"skipped":3447,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:11:01.458: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 21 01:11:01.713: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0c4dc077-de2e-428c-b91c-aa6d0d06ddba" in namespace "projected-5689" to be "success or failure"
Jan 21 01:11:01.731: INFO: Pod "downwardapi-volume-0c4dc077-de2e-428c-b91c-aa6d0d06ddba": Phase="Pending", Reason="", readiness=false. Elapsed: 18.010131ms
Jan 21 01:11:03.739: INFO: Pod "downwardapi-volume-0c4dc077-de2e-428c-b91c-aa6d0d06ddba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025462457s
Jan 21 01:11:05.757: INFO: Pod "downwardapi-volume-0c4dc077-de2e-428c-b91c-aa6d0d06ddba": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043878073s
Jan 21 01:11:07.772: INFO: Pod "downwardapi-volume-0c4dc077-de2e-428c-b91c-aa6d0d06ddba": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058209098s
Jan 21 01:11:09.782: INFO: Pod "downwardapi-volume-0c4dc077-de2e-428c-b91c-aa6d0d06ddba": Phase="Pending", Reason="", readiness=false. Elapsed: 8.068466296s
Jan 21 01:11:11.793: INFO: Pod "downwardapi-volume-0c4dc077-de2e-428c-b91c-aa6d0d06ddba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.080066875s
STEP: Saw pod success
Jan 21 01:11:11.794: INFO: Pod "downwardapi-volume-0c4dc077-de2e-428c-b91c-aa6d0d06ddba" satisfied condition "success or failure"
Jan 21 01:11:11.799: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-0c4dc077-de2e-428c-b91c-aa6d0d06ddba container client-container: 
STEP: delete the pod
Jan 21 01:11:11.885: INFO: Waiting for pod downwardapi-volume-0c4dc077-de2e-428c-b91c-aa6d0d06ddba to disappear
Jan 21 01:11:11.892: INFO: Pod downwardapi-volume-0c4dc077-de2e-428c-b91c-aa6d0d06ddba no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:11:11.892: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5689" for this suite.

• [SLOW TEST:10.496 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3474,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:11:11.956: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0777 on tmpfs
Jan 21 01:11:12.025: INFO: Waiting up to 5m0s for pod "pod-58e233c3-e86d-4002-8eba-6395eae886f0" in namespace "emptydir-6551" to be "success or failure"
Jan 21 01:11:12.034: INFO: Pod "pod-58e233c3-e86d-4002-8eba-6395eae886f0": Phase="Pending", Reason="", readiness=false. Elapsed: 8.402108ms
Jan 21 01:11:14.045: INFO: Pod "pod-58e233c3-e86d-4002-8eba-6395eae886f0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019958431s
Jan 21 01:11:16.054: INFO: Pod "pod-58e233c3-e86d-4002-8eba-6395eae886f0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.029209313s
Jan 21 01:11:18.064: INFO: Pod "pod-58e233c3-e86d-4002-8eba-6395eae886f0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039093761s
Jan 21 01:11:20.110: INFO: Pod "pod-58e233c3-e86d-4002-8eba-6395eae886f0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.085067151s
STEP: Saw pod success
Jan 21 01:11:20.110: INFO: Pod "pod-58e233c3-e86d-4002-8eba-6395eae886f0" satisfied condition "success or failure"
Jan 21 01:11:20.115: INFO: Trying to get logs from node jerma-node pod pod-58e233c3-e86d-4002-8eba-6395eae886f0 container test-container: 
STEP: delete the pod
Jan 21 01:11:20.514: INFO: Waiting for pod pod-58e233c3-e86d-4002-8eba-6395eae886f0 to disappear
Jan 21 01:11:20.556: INFO: Pod pod-58e233c3-e86d-4002-8eba-6395eae886f0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:11:20.557: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6551" for this suite.

• [SLOW TEST:8.621 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":216,"skipped":3508,"failed":0}
SSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:11:20.577: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service endpoint-test2 in namespace services-4435
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4435 to expose endpoints map[]
Jan 21 01:11:20.824: INFO: successfully validated that service endpoint-test2 in namespace services-4435 exposes endpoints map[] (16.426928ms elapsed)
STEP: Creating pod pod1 in namespace services-4435
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4435 to expose endpoints map[pod1:[80]]
Jan 21 01:11:24.997: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.145137332s elapsed, will retry)
Jan 21 01:11:27.042: INFO: successfully validated that service endpoint-test2 in namespace services-4435 exposes endpoints map[pod1:[80]] (6.190394395s elapsed)
STEP: Creating pod pod2 in namespace services-4435
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4435 to expose endpoints map[pod1:[80] pod2:[80]]
Jan 21 01:11:31.931: INFO: Unexpected endpoints: found map[cc860236-b363-485c-a667-0b79585cd12f:[80]], expected map[pod1:[80] pod2:[80]] (4.882588609s elapsed, will retry)
Jan 21 01:11:33.975: INFO: successfully validated that service endpoint-test2 in namespace services-4435 exposes endpoints map[pod1:[80] pod2:[80]] (6.926900149s elapsed)
STEP: Deleting pod pod1 in namespace services-4435
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4435 to expose endpoints map[pod2:[80]]
Jan 21 01:11:34.079: INFO: successfully validated that service endpoint-test2 in namespace services-4435 exposes endpoints map[pod2:[80]] (35.973133ms elapsed)
STEP: Deleting pod pod2 in namespace services-4435
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-4435 to expose endpoints map[]
Jan 21 01:11:35.102: INFO: successfully validated that service endpoint-test2 in namespace services-4435 exposes endpoints map[] (1.016695517s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:11:35.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4435" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691

• [SLOW TEST:14.604 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods  [Conformance]","total":278,"completed":217,"skipped":3513,"failed":0}
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:11:35.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap configmap-830/configmap-test-77a87e54-2d07-40e2-a04d-1b96a2e2d51b
STEP: Creating a pod to test consume configMaps
Jan 21 01:11:35.312: INFO: Waiting up to 5m0s for pod "pod-configmaps-fcef4924-588b-4bc3-8bc8-3877ca40e786" in namespace "configmap-830" to be "success or failure"
Jan 21 01:11:35.318: INFO: Pod "pod-configmaps-fcef4924-588b-4bc3-8bc8-3877ca40e786": Phase="Pending", Reason="", readiness=false. Elapsed: 5.18156ms
Jan 21 01:11:37.631: INFO: Pod "pod-configmaps-fcef4924-588b-4bc3-8bc8-3877ca40e786": Phase="Pending", Reason="", readiness=false. Elapsed: 2.318217105s
Jan 21 01:11:39.639: INFO: Pod "pod-configmaps-fcef4924-588b-4bc3-8bc8-3877ca40e786": Phase="Pending", Reason="", readiness=false. Elapsed: 4.326147406s
Jan 21 01:11:41.645: INFO: Pod "pod-configmaps-fcef4924-588b-4bc3-8bc8-3877ca40e786": Phase="Pending", Reason="", readiness=false. Elapsed: 6.332951417s
Jan 21 01:11:43.652: INFO: Pod "pod-configmaps-fcef4924-588b-4bc3-8bc8-3877ca40e786": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.339449615s
STEP: Saw pod success
Jan 21 01:11:43.652: INFO: Pod "pod-configmaps-fcef4924-588b-4bc3-8bc8-3877ca40e786" satisfied condition "success or failure"
Jan 21 01:11:43.654: INFO: Trying to get logs from node jerma-node pod pod-configmaps-fcef4924-588b-4bc3-8bc8-3877ca40e786 container env-test: 
STEP: delete the pod
Jan 21 01:11:43.696: INFO: Waiting for pod pod-configmaps-fcef4924-588b-4bc3-8bc8-3877ca40e786 to disappear
Jan 21 01:11:43.707: INFO: Pod pod-configmaps-fcef4924-588b-4bc3-8bc8-3877ca40e786 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:11:43.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-830" for this suite.

• [SLOW TEST:8.553 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3513,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:11:43.735: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 21 01:11:44.014: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3ee24145-0f90-4b5c-a9d3-d66c20197b3c" in namespace "downward-api-5132" to be "success or failure"
Jan 21 01:11:44.035: INFO: Pod "downwardapi-volume-3ee24145-0f90-4b5c-a9d3-d66c20197b3c": Phase="Pending", Reason="", readiness=false. Elapsed: 20.334161ms
Jan 21 01:11:46.052: INFO: Pod "downwardapi-volume-3ee24145-0f90-4b5c-a9d3-d66c20197b3c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03802754s
Jan 21 01:11:48.060: INFO: Pod "downwardapi-volume-3ee24145-0f90-4b5c-a9d3-d66c20197b3c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.045320446s
Jan 21 01:11:50.069: INFO: Pod "downwardapi-volume-3ee24145-0f90-4b5c-a9d3-d66c20197b3c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054982463s
Jan 21 01:11:52.084: INFO: Pod "downwardapi-volume-3ee24145-0f90-4b5c-a9d3-d66c20197b3c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.069924104s
Jan 21 01:11:54.094: INFO: Pod "downwardapi-volume-3ee24145-0f90-4b5c-a9d3-d66c20197b3c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.079890974s
STEP: Saw pod success
Jan 21 01:11:54.095: INFO: Pod "downwardapi-volume-3ee24145-0f90-4b5c-a9d3-d66c20197b3c" satisfied condition "success or failure"
Jan 21 01:11:54.099: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-3ee24145-0f90-4b5c-a9d3-d66c20197b3c container client-container: 
STEP: delete the pod
Jan 21 01:11:54.260: INFO: Waiting for pod downwardapi-volume-3ee24145-0f90-4b5c-a9d3-d66c20197b3c to disappear
Jan 21 01:11:54.283: INFO: Pod downwardapi-volume-3ee24145-0f90-4b5c-a9d3-d66c20197b3c no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:11:54.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5132" for this suite.

• [SLOW TEST:10.581 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3532,"failed":0}
S
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:11:54.318: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
Jan 21 01:11:54.443: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 21 01:11:54.523: INFO: Waiting for terminating namespaces to be deleted...
Jan 21 01:11:54.527: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 21 01:11:54.537: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 21 01:11:54.537: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 21 01:11:54.537: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 21 01:11:54.537: INFO: 	Container weave ready: true, restart count 1
Jan 21 01:11:54.537: INFO: 	Container weave-npc ready: true, restart count 0
Jan 21 01:11:54.537: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 21 01:11:54.557: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 21 01:11:54.557: INFO: 	Container kube-controller-manager ready: true, restart count 3
Jan 21 01:11:54.557: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 21 01:11:54.557: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 21 01:11:54.557: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 21 01:11:54.557: INFO: 	Container weave ready: true, restart count 0
Jan 21 01:11:54.557: INFO: 	Container weave-npc ready: true, restart count 0
Jan 21 01:11:54.557: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 21 01:11:54.557: INFO: 	Container kube-scheduler ready: true, restart count 3
Jan 21 01:11:54.557: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 21 01:11:54.557: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 21 01:11:54.557: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 21 01:11:54.557: INFO: 	Container etcd ready: true, restart count 1
Jan 21 01:11:54.557: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 21 01:11:54.557: INFO: 	Container coredns ready: true, restart count 0
Jan 21 01:11:54.557: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 21 01:11:54.557: INFO: 	Container coredns ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-ce49037c-b7ee-4d77-a8cb-86e48ea7a761 90
STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1, expecting it to be scheduled
STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2, on the node where pod1 resides, expecting it to be scheduled
STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but the UDP protocol, on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-ce49037c-b7ee-4d77-a8cb-86e48ea7a761 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-ce49037c-b7ee-4d77-a8cb-86e48ea7a761
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:12:26.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-8721" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78

• [SLOW TEST:32.628 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":220,"skipped":3533,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:12:26.947: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-9cddedfc-a8b2-4eb5-8788-ea92d651b803
STEP: Creating a pod to test consume configMaps
Jan 21 01:12:27.131: INFO: Waiting up to 5m0s for pod "pod-configmaps-c3729913-e466-4de7-afec-40fa535fed10" in namespace "configmap-7004" to be "success or failure"
Jan 21 01:12:27.151: INFO: Pod "pod-configmaps-c3729913-e466-4de7-afec-40fa535fed10": Phase="Pending", Reason="", readiness=false. Elapsed: 19.012188ms
Jan 21 01:12:29.162: INFO: Pod "pod-configmaps-c3729913-e466-4de7-afec-40fa535fed10": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030456402s
Jan 21 01:12:31.170: INFO: Pod "pod-configmaps-c3729913-e466-4de7-afec-40fa535fed10": Phase="Pending", Reason="", readiness=false. Elapsed: 4.038301966s
Jan 21 01:12:33.180: INFO: Pod "pod-configmaps-c3729913-e466-4de7-afec-40fa535fed10": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.048035987s
STEP: Saw pod success
Jan 21 01:12:33.180: INFO: Pod "pod-configmaps-c3729913-e466-4de7-afec-40fa535fed10" satisfied condition "success or failure"
Jan 21 01:12:33.184: INFO: Trying to get logs from node jerma-node pod pod-configmaps-c3729913-e466-4de7-afec-40fa535fed10 container configmap-volume-test: 
STEP: delete the pod
Jan 21 01:12:33.391: INFO: Waiting for pod pod-configmaps-c3729913-e466-4de7-afec-40fa535fed10 to disappear
Jan 21 01:12:33.451: INFO: Pod pod-configmaps-c3729913-e466-4de7-afec-40fa535fed10 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:12:33.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-7004" for this suite.

• [SLOW TEST:6.513 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3544,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:12:33.462: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-7c6dea28-081d-4a03-ad54-c68aebf4e6ca
STEP: Creating a pod to test consume secrets
Jan 21 01:12:33.738: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9f166d2f-f555-4834-adac-e49fb0f5f891" in namespace "projected-4007" to be "success or failure"
Jan 21 01:12:33.793: INFO: Pod "pod-projected-secrets-9f166d2f-f555-4834-adac-e49fb0f5f891": Phase="Pending", Reason="", readiness=false. Elapsed: 54.847666ms
Jan 21 01:12:35.799: INFO: Pod "pod-projected-secrets-9f166d2f-f555-4834-adac-e49fb0f5f891": Phase="Pending", Reason="", readiness=false. Elapsed: 2.061201288s
Jan 21 01:12:37.806: INFO: Pod "pod-projected-secrets-9f166d2f-f555-4834-adac-e49fb0f5f891": Phase="Pending", Reason="", readiness=false. Elapsed: 4.068476636s
Jan 21 01:12:39.815: INFO: Pod "pod-projected-secrets-9f166d2f-f555-4834-adac-e49fb0f5f891": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077120654s
Jan 21 01:12:42.688: INFO: Pod "pod-projected-secrets-9f166d2f-f555-4834-adac-e49fb0f5f891": Phase="Pending", Reason="", readiness=false. Elapsed: 8.950730588s
Jan 21 01:12:44.695: INFO: Pod "pod-projected-secrets-9f166d2f-f555-4834-adac-e49fb0f5f891": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.957342559s
STEP: Saw pod success
Jan 21 01:12:44.695: INFO: Pod "pod-projected-secrets-9f166d2f-f555-4834-adac-e49fb0f5f891" satisfied condition "success or failure"
Jan 21 01:12:44.698: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-9f166d2f-f555-4834-adac-e49fb0f5f891 container projected-secret-volume-test: 
STEP: delete the pod
Jan 21 01:12:44.804: INFO: Waiting for pod pod-projected-secrets-9f166d2f-f555-4834-adac-e49fb0f5f891 to disappear
Jan 21 01:12:44.835: INFO: Pod pod-projected-secrets-9f166d2f-f555-4834-adac-e49fb0f5f891 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:12:44.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4007" for this suite.

• [SLOW TEST:11.438 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":222,"skipped":3559,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:12:44.901: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-map-823ba68c-ccc4-4f8f-ade6-f528e340455b
STEP: Creating a pod to test consume secrets
Jan 21 01:12:45.155: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-cd980305-6795-42c5-8556-5caefb6e51c2" in namespace "projected-7794" to be "success or failure"
Jan 21 01:12:45.173: INFO: Pod "pod-projected-secrets-cd980305-6795-42c5-8556-5caefb6e51c2": Phase="Pending", Reason="", readiness=false. Elapsed: 17.670797ms
Jan 21 01:12:47.180: INFO: Pod "pod-projected-secrets-cd980305-6795-42c5-8556-5caefb6e51c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024698347s
Jan 21 01:12:49.225: INFO: Pod "pod-projected-secrets-cd980305-6795-42c5-8556-5caefb6e51c2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.069846051s
Jan 21 01:12:51.231: INFO: Pod "pod-projected-secrets-cd980305-6795-42c5-8556-5caefb6e51c2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075828288s
Jan 21 01:12:53.243: INFO: Pod "pod-projected-secrets-cd980305-6795-42c5-8556-5caefb6e51c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.08837195s
STEP: Saw pod success
Jan 21 01:12:53.243: INFO: Pod "pod-projected-secrets-cd980305-6795-42c5-8556-5caefb6e51c2" satisfied condition "success or failure"
Jan 21 01:12:53.246: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-cd980305-6795-42c5-8556-5caefb6e51c2 container projected-secret-volume-test: 
STEP: delete the pod
Jan 21 01:12:53.292: INFO: Waiting for pod pod-projected-secrets-cd980305-6795-42c5-8556-5caefb6e51c2 to disappear
Jan 21 01:12:53.302: INFO: Pod pod-projected-secrets-cd980305-6795-42c5-8556-5caefb6e51c2 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:12:53.303: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7794" for this suite.

• [SLOW TEST:8.410 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":223,"skipped":3577,"failed":0}
SSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:12:53.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
Jan 21 01:12:53.442: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Jan 21 01:12:53.468: INFO: Waiting for terminating namespaces to be deleted...
Jan 21 01:12:53.519: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Jan 21 01:12:53.526: INFO: kube-proxy-dsf66 from kube-system started at 2020-01-04 11:59:52 +0000 UTC (1 container status recorded)
Jan 21 01:12:53.526: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 21 01:12:53.526: INFO: weave-net-kz8lv from kube-system started at 2020-01-04 11:59:52 +0000 UTC (2 container statuses recorded)
Jan 21 01:12:53.526: INFO: 	Container weave ready: true, restart count 1
Jan 21 01:12:53.526: INFO: 	Container weave-npc ready: true, restart count 0
Jan 21 01:12:53.526: INFO: 
Logging pods the kubelet thinks are on node jerma-server-mvvl6gufaqub before test
Jan 21 01:12:53.532: INFO: kube-scheduler-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 21 01:12:53.532: INFO: 	Container kube-scheduler ready: true, restart count 3
Jan 21 01:12:53.532: INFO: etcd-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:54 +0000 UTC (1 container status recorded)
Jan 21 01:12:53.532: INFO: 	Container etcd ready: true, restart count 1
Jan 21 01:12:53.532: INFO: kube-apiserver-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 21 01:12:53.532: INFO: 	Container kube-apiserver ready: true, restart count 1
Jan 21 01:12:53.532: INFO: coredns-6955765f44-bwd85 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 21 01:12:53.532: INFO: 	Container coredns ready: true, restart count 0
Jan 21 01:12:53.532: INFO: coredns-6955765f44-bhnn4 from kube-system started at 2020-01-04 11:48:47 +0000 UTC (1 container status recorded)
Jan 21 01:12:53.532: INFO: 	Container coredns ready: true, restart count 0
Jan 21 01:12:53.532: INFO: kube-proxy-chkps from kube-system started at 2020-01-04 11:48:11 +0000 UTC (1 container status recorded)
Jan 21 01:12:53.532: INFO: 	Container kube-proxy ready: true, restart count 0
Jan 21 01:12:53.532: INFO: weave-net-z6tjf from kube-system started at 2020-01-04 11:48:11 +0000 UTC (2 container statuses recorded)
Jan 21 01:12:53.532: INFO: 	Container weave ready: true, restart count 0
Jan 21 01:12:53.532: INFO: 	Container weave-npc ready: true, restart count 0
Jan 21 01:12:53.532: INFO: kube-controller-manager-jerma-server-mvvl6gufaqub from kube-system started at 2020-01-04 11:47:53 +0000 UTC (1 container status recorded)
Jan 21 01:12:53.532: INFO: 	Container kube-controller-manager ready: true, restart count 3
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-4bc8e2f5-f333-4618-a50b-5e8049211f65 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-4bc8e2f5-f333-4618-a50b-5e8049211f65 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-4bc8e2f5-f333-4618-a50b-5e8049211f65
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:13:09.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6161" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78

• [SLOW TEST:16.687 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:39
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching  [Conformance]","total":278,"completed":224,"skipped":3583,"failed":0}
SSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:13:09.999: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-vqmgg in namespace proxy-7248
I0121 01:13:10.204947       8 runners.go:189] Created replication controller with name: proxy-service-vqmgg, namespace: proxy-7248, replica count: 1
I0121 01:13:11.256815       8 runners.go:189] proxy-service-vqmgg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 01:13:12.257535       8 runners.go:189] proxy-service-vqmgg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 01:13:13.258185       8 runners.go:189] proxy-service-vqmgg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 01:13:14.259049       8 runners.go:189] proxy-service-vqmgg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 01:13:15.259901       8 runners.go:189] proxy-service-vqmgg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 01:13:16.260512       8 runners.go:189] proxy-service-vqmgg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 01:13:17.261224       8 runners.go:189] proxy-service-vqmgg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 01:13:18.262367       8 runners.go:189] proxy-service-vqmgg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0121 01:13:19.263143       8 runners.go:189] proxy-service-vqmgg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0121 01:13:20.263714       8 runners.go:189] proxy-service-vqmgg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0121 01:13:21.264635       8 runners.go:189] proxy-service-vqmgg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0121 01:13:22.265423       8 runners.go:189] proxy-service-vqmgg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0121 01:13:23.266110       8 runners.go:189] proxy-service-vqmgg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0121 01:13:24.266726       8 runners.go:189] proxy-service-vqmgg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0121 01:13:25.267244       8 runners.go:189] proxy-service-vqmgg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0121 01:13:26.267727       8 runners.go:189] proxy-service-vqmgg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I0121 01:13:27.269895       8 runners.go:189] proxy-service-vqmgg Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 21 01:13:27.280: INFO: setup took 17.137651054s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Jan 21 01:13:27.297: INFO: (0) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 16.455201ms)
Jan 21 01:13:27.306: INFO: (0) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname2/proxy/: bar (200; 23.020136ms)
Jan 21 01:13:27.307: INFO: (0) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:1080/proxy/: ... (200; 25.482093ms)
Jan 21 01:13:27.309: INFO: (0) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 26.324658ms)
Jan 21 01:13:27.310: INFO: (0) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 27.538803ms)
Jan 21 01:13:27.311: INFO: (0) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname2/proxy/: bar (200; 29.345081ms)
Jan 21 01:13:27.311: INFO: (0) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 30.688848ms)
Jan 21 01:13:27.315: INFO: (0) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:1080/proxy/: test<... (200; 34.078729ms)
Jan 21 01:13:27.315: INFO: (0) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname1/proxy/: foo (200; 33.483801ms)
Jan 21 01:13:27.315: INFO: (0) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4/proxy/: test (200; 32.381327ms)
Jan 21 01:13:27.316: INFO: (0) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname1/proxy/: foo (200; 34.710045ms)
Jan 21 01:13:27.323: INFO: (0) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname2/proxy/: tls qux (200; 42.618009ms)
Jan 21 01:13:27.323: INFO: (0) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname1/proxy/: tls baz (200; 41.194692ms)
Jan 21 01:13:27.323: INFO: (0) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: test (200; 13.978602ms)
Jan 21 01:13:27.339: INFO: (1) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 14.414849ms)
Jan 21 01:13:27.339: INFO: (1) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname2/proxy/: bar (200; 14.926918ms)
Jan 21 01:13:27.340: INFO: (1) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 16.103543ms)
Jan 21 01:13:27.340: INFO: (1) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 15.741065ms)
Jan 21 01:13:27.340: INFO: (1) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:460/proxy/: tls baz (200; 15.748632ms)
Jan 21 01:13:27.340: INFO: (1) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname1/proxy/: foo (200; 15.765165ms)
Jan 21 01:13:27.340: INFO: (1) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:1080/proxy/: test<... (200; 16.004593ms)
Jan 21 01:13:27.340: INFO: (1) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:1080/proxy/: ... (200; 16.014777ms)
Jan 21 01:13:27.341: INFO: (1) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:462/proxy/: tls qux (200; 16.235129ms)
Jan 21 01:13:27.341: INFO: (1) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname2/proxy/: tls qux (200; 16.490431ms)
Jan 21 01:13:27.344: INFO: (1) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname2/proxy/: bar (200; 19.822492ms)
Jan 21 01:13:27.345: INFO: (1) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname1/proxy/: tls baz (200; 20.518378ms)
Jan 21 01:13:27.346: INFO: (1) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname1/proxy/: foo (200; 21.668133ms)
Jan 21 01:13:27.353: INFO: (2) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4/proxy/: test (200; 6.973144ms)
Jan 21 01:13:27.353: INFO: (2) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 7.315602ms)
Jan 21 01:13:27.361: INFO: (2) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:1080/proxy/: test<... (200; 15.478383ms)
Jan 21 01:13:27.362: INFO: (2) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname1/proxy/: tls baz (200; 15.652075ms)
Jan 21 01:13:27.362: INFO: (2) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 15.595781ms)
Jan 21 01:13:27.362: INFO: (2) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 16.420918ms)
Jan 21 01:13:27.365: INFO: (2) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 19.245786ms)
Jan 21 01:13:27.366: INFO: (2) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:462/proxy/: tls qux (200; 19.786984ms)
Jan 21 01:13:27.366: INFO: (2) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: ... (200; 19.876854ms)
Jan 21 01:13:27.366: INFO: (2) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname1/proxy/: foo (200; 19.837838ms)
Jan 21 01:13:27.366: INFO: (2) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname2/proxy/: tls qux (200; 19.721831ms)
Jan 21 01:13:27.366: INFO: (2) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:460/proxy/: tls baz (200; 19.554725ms)
Jan 21 01:13:27.366: INFO: (2) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname2/proxy/: bar (200; 20.27642ms)
Jan 21 01:13:27.367: INFO: (2) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname2/proxy/: bar (200; 20.722934ms)
Jan 21 01:13:27.367: INFO: (2) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname1/proxy/: foo (200; 20.700295ms)
Jan 21 01:13:27.375: INFO: (3) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 7.864315ms)
Jan 21 01:13:27.375: INFO: (3) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 7.757467ms)
Jan 21 01:13:27.376: INFO: (3) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: test (200; 9.764631ms)
Jan 21 01:13:27.378: INFO: (3) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname2/proxy/: bar (200; 10.851611ms)
Jan 21 01:13:27.381: INFO: (3) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 14.227206ms)
Jan 21 01:13:27.382: INFO: (3) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname2/proxy/: bar (200; 14.588718ms)
Jan 21 01:13:27.382: INFO: (3) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:462/proxy/: tls qux (200; 15.079394ms)
Jan 21 01:13:27.382: INFO: (3) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname1/proxy/: tls baz (200; 14.888963ms)
Jan 21 01:13:27.382: INFO: (3) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:1080/proxy/: test<... (200; 14.91716ms)
Jan 21 01:13:27.382: INFO: (3) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 15.301835ms)
Jan 21 01:13:27.384: INFO: (3) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:1080/proxy/: ... (200; 16.410825ms)
Jan 21 01:13:27.384: INFO: (3) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname2/proxy/: tls qux (200; 16.844569ms)
Jan 21 01:13:27.384: INFO: (3) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname1/proxy/: foo (200; 16.749336ms)
Jan 21 01:13:27.384: INFO: (3) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname1/proxy/: foo (200; 16.937523ms)
Jan 21 01:13:27.385: INFO: (3) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:460/proxy/: tls baz (200; 17.583874ms)
Jan 21 01:13:27.392: INFO: (4) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 7.412632ms)
Jan 21 01:13:27.404: INFO: (4) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: ... (200; 19.121778ms)
Jan 21 01:13:27.404: INFO: (4) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname2/proxy/: bar (200; 17.976429ms)
Jan 21 01:13:27.404: INFO: (4) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname1/proxy/: foo (200; 18.817673ms)
Jan 21 01:13:27.406: INFO: (4) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname1/proxy/: foo (200; 20.331799ms)
Jan 21 01:13:27.406: INFO: (4) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:460/proxy/: tls baz (200; 19.992892ms)
Jan 21 01:13:27.406: INFO: (4) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname1/proxy/: tls baz (200; 21.400065ms)
Jan 21 01:13:27.406: INFO: (4) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname2/proxy/: bar (200; 21.229796ms)
Jan 21 01:13:27.407: INFO: (4) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 21.710815ms)
Jan 21 01:13:27.408: INFO: (4) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname2/proxy/: tls qux (200; 22.644678ms)
Jan 21 01:13:27.408: INFO: (4) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 22.135131ms)
Jan 21 01:13:27.408: INFO: (4) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4/proxy/: test (200; 22.297684ms)
Jan 21 01:13:27.408: INFO: (4) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:462/proxy/: tls qux (200; 22.648485ms)
Jan 21 01:13:27.408: INFO: (4) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 22.336435ms)
Jan 21 01:13:27.408: INFO: (4) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:1080/proxy/: test<... (200; 23.110964ms)
Jan 21 01:13:27.416: INFO: (5) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 5.879168ms)
Jan 21 01:13:27.416: INFO: (5) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:1080/proxy/: ... (200; 7.928966ms)
Jan 21 01:13:27.417: INFO: (5) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:1080/proxy/: test<... (200; 6.342261ms)
Jan 21 01:13:27.417: INFO: (5) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4/proxy/: test (200; 6.159066ms)
Jan 21 01:13:27.417: INFO: (5) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: ... (200; 7.039347ms)
Jan 21 01:13:27.430: INFO: (6) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 7.264336ms)
Jan 21 01:13:27.431: INFO: (6) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 8.366718ms)
Jan 21 01:13:27.432: INFO: (6) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname2/proxy/: tls qux (200; 9.632254ms)
Jan 21 01:13:27.432: INFO: (6) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: test<... (200; 11.881104ms)
Jan 21 01:13:27.435: INFO: (6) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4/proxy/: test (200; 12.02487ms)
Jan 21 01:13:27.435: INFO: (6) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 12.502274ms)
Jan 21 01:13:27.436: INFO: (6) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:460/proxy/: tls baz (200; 13.338488ms)
Jan 21 01:13:27.437: INFO: (6) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname1/proxy/: tls baz (200; 14.204247ms)
Jan 21 01:13:27.437: INFO: (6) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname1/proxy/: foo (200; 14.353834ms)
Jan 21 01:13:27.437: INFO: (6) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname1/proxy/: foo (200; 14.520059ms)
Jan 21 01:13:27.437: INFO: (6) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname2/proxy/: bar (200; 14.896494ms)
Jan 21 01:13:27.445: INFO: (7) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:1080/proxy/: ... (200; 7.942156ms)
Jan 21 01:13:27.448: INFO: (7) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 10.274569ms)
Jan 21 01:13:27.452: INFO: (7) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4/proxy/: test (200; 14.813328ms)
Jan 21 01:13:27.452: INFO: (7) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 15.07514ms)
Jan 21 01:13:27.453: INFO: (7) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 14.932753ms)
Jan 21 01:13:27.453: INFO: (7) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 14.845675ms)
Jan 21 01:13:27.453: INFO: (7) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname2/proxy/: bar (200; 14.94248ms)
Jan 21 01:13:27.453: INFO: (7) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname1/proxy/: tls baz (200; 15.271521ms)
Jan 21 01:13:27.453: INFO: (7) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname1/proxy/: foo (200; 15.072302ms)
Jan 21 01:13:27.453: INFO: (7) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:462/proxy/: tls qux (200; 15.094976ms)
Jan 21 01:13:27.453: INFO: (7) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname2/proxy/: bar (200; 15.343666ms)
Jan 21 01:13:27.453: INFO: (7) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:460/proxy/: tls baz (200; 15.140846ms)
Jan 21 01:13:27.453: INFO: (7) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: test<... (200; 15.382123ms)
Jan 21 01:13:27.454: INFO: (7) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname1/proxy/: foo (200; 16.010098ms)
Jan 21 01:13:27.459: INFO: (8) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:1080/proxy/: test<... (200; 5.43768ms)
Jan 21 01:13:27.459: INFO: (8) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 5.46723ms)
Jan 21 01:13:27.460: INFO: (8) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:462/proxy/: tls qux (200; 6.002305ms)
Jan 21 01:13:27.460: INFO: (8) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:1080/proxy/: ... (200; 6.480977ms)
Jan 21 01:13:27.463: INFO: (8) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4/proxy/: test (200; 8.758578ms)
Jan 21 01:13:27.463: INFO: (8) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:460/proxy/: tls baz (200; 9.187877ms)
Jan 21 01:13:27.463: INFO: (8) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 9.305084ms)
Jan 21 01:13:27.464: INFO: (8) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 9.937826ms)
Jan 21 01:13:27.464: INFO: (8) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: test (200; 9.338456ms)
Jan 21 01:13:27.487: INFO: (9) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname2/proxy/: tls qux (200; 10.304082ms)
Jan 21 01:13:27.488: INFO: (9) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname2/proxy/: bar (200; 10.586404ms)
Jan 21 01:13:27.488: INFO: (9) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname1/proxy/: foo (200; 10.456958ms)
Jan 21 01:13:27.488: INFO: (9) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname2/proxy/: bar (200; 10.672217ms)
Jan 21 01:13:27.488: INFO: (9) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname1/proxy/: tls baz (200; 10.747829ms)
Jan 21 01:13:27.488: INFO: (9) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:462/proxy/: tls qux (200; 10.892281ms)
Jan 21 01:13:27.488: INFO: (9) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: ... (200; 11.357764ms)
Jan 21 01:13:27.488: INFO: (9) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:1080/proxy/: test<... (200; 11.272712ms)
Jan 21 01:13:27.489: INFO: (9) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname1/proxy/: foo (200; 11.484707ms)
Jan 21 01:13:27.495: INFO: (10) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: test (200; 8.570384ms)
Jan 21 01:13:27.498: INFO: (10) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:1080/proxy/: ... (200; 9.57039ms)
Jan 21 01:13:27.498: INFO: (10) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:1080/proxy/: test<... (200; 9.376015ms)
Jan 21 01:13:27.498: INFO: (10) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 9.501638ms)
Jan 21 01:13:27.499: INFO: (10) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname1/proxy/: foo (200; 9.85591ms)
Jan 21 01:13:27.499: INFO: (10) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname2/proxy/: bar (200; 9.856055ms)
Jan 21 01:13:27.499: INFO: (10) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 10.164461ms)
Jan 21 01:13:27.499: INFO: (10) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname2/proxy/: tls qux (200; 10.430089ms)
Jan 21 01:13:27.506: INFO: (11) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 6.210086ms)
Jan 21 01:13:27.506: INFO: (11) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:1080/proxy/: test<... (200; 6.056088ms)
Jan 21 01:13:27.507: INFO: (11) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:1080/proxy/: ... (200; 7.128352ms)
Jan 21 01:13:27.508: INFO: (11) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 7.942451ms)
Jan 21 01:13:27.508: INFO: (11) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 7.755399ms)
Jan 21 01:13:27.508: INFO: (11) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname2/proxy/: bar (200; 8.837496ms)
Jan 21 01:13:27.508: INFO: (11) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: test (200; 9.547251ms)
Jan 21 01:13:27.510: INFO: (11) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname1/proxy/: tls baz (200; 9.669096ms)
Jan 21 01:13:27.510: INFO: (11) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname2/proxy/: bar (200; 10.675535ms)
Jan 21 01:13:27.510: INFO: (11) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname2/proxy/: tls qux (200; 10.118124ms)
Jan 21 01:13:27.510: INFO: (11) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 9.454077ms)
Jan 21 01:13:27.510: INFO: (11) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname1/proxy/: foo (200; 9.862302ms)
Jan 21 01:13:27.514: INFO: (12) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:462/proxy/: tls qux (200; 3.645791ms)
Jan 21 01:13:27.515: INFO: (12) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:1080/proxy/: test<... (200; 5.063442ms)
Jan 21 01:13:27.515: INFO: (12) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: ... (200; 14.4596ms)
Jan 21 01:13:27.525: INFO: (12) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 14.554762ms)
Jan 21 01:13:27.525: INFO: (12) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 14.802178ms)
Jan 21 01:13:27.525: INFO: (12) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4/proxy/: test (200; 14.662649ms)
Jan 21 01:13:27.525: INFO: (12) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname1/proxy/: tls baz (200; 14.700722ms)
Jan 21 01:13:27.525: INFO: (12) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname2/proxy/: tls qux (200; 14.789216ms)
Jan 21 01:13:27.525: INFO: (12) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 14.882363ms)
Jan 21 01:13:27.525: INFO: (12) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:460/proxy/: tls baz (200; 14.640645ms)
Jan 21 01:13:27.525: INFO: (12) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 14.633806ms)
Jan 21 01:13:27.526: INFO: (12) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname1/proxy/: foo (200; 15.599284ms)
Jan 21 01:13:27.526: INFO: (12) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname2/proxy/: bar (200; 15.570334ms)
Jan 21 01:13:27.526: INFO: (12) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname2/proxy/: bar (200; 15.46925ms)
Jan 21 01:13:27.533: INFO: (13) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: ... (200; 9.215109ms)
Jan 21 01:13:27.536: INFO: (13) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 9.827416ms)
Jan 21 01:13:27.536: INFO: (13) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 9.953827ms)
Jan 21 01:13:27.536: INFO: (13) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4/proxy/: test (200; 10.082361ms)
Jan 21 01:13:27.536: INFO: (13) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:460/proxy/: tls baz (200; 10.170426ms)
Jan 21 01:13:27.536: INFO: (13) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 9.979229ms)
Jan 21 01:13:27.536: INFO: (13) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:1080/proxy/: test<... (200; 10.240918ms)
Jan 21 01:13:27.536: INFO: (13) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname1/proxy/: tls baz (200; 10.035935ms)
Jan 21 01:13:27.537: INFO: (13) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:462/proxy/: tls qux (200; 10.869402ms)
Jan 21 01:13:27.537: INFO: (13) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname2/proxy/: bar (200; 10.89774ms)
Jan 21 01:13:27.537: INFO: (13) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname2/proxy/: tls qux (200; 10.946957ms)
Jan 21 01:13:27.537: INFO: (13) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname1/proxy/: foo (200; 10.942518ms)
Jan 21 01:13:27.537: INFO: (13) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname1/proxy/: foo (200; 11.218671ms)
Jan 21 01:13:27.538: INFO: (13) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname2/proxy/: bar (200; 11.930729ms)
Jan 21 01:13:27.544: INFO: (14) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 5.738002ms)
Jan 21 01:13:27.545: INFO: (14) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:1080/proxy/: test<... (200; 6.182931ms)
Jan 21 01:13:27.545: INFO: (14) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:462/proxy/: tls qux (200; 6.166334ms)
Jan 21 01:13:27.545: INFO: (14) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: ... (200; 8.437392ms)
Jan 21 01:13:27.547: INFO: (14) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 8.192524ms)
Jan 21 01:13:27.547: INFO: (14) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4/proxy/: test (200; 8.953772ms)
Jan 21 01:13:27.547: INFO: (14) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 8.994196ms)
Jan 21 01:13:27.547: INFO: (14) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname2/proxy/: bar (200; 8.962255ms)
Jan 21 01:13:27.549: INFO: (14) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname2/proxy/: tls qux (200; 10.738324ms)
Jan 21 01:13:27.549: INFO: (14) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname1/proxy/: foo (200; 11.003281ms)
Jan 21 01:13:27.549: INFO: (14) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname2/proxy/: bar (200; 10.923496ms)
Jan 21 01:13:27.549: INFO: (14) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:460/proxy/: tls baz (200; 10.978403ms)
Jan 21 01:13:27.549: INFO: (14) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname1/proxy/: tls baz (200; 10.783981ms)
Jan 21 01:13:27.550: INFO: (14) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname1/proxy/: foo (200; 11.147416ms)
Jan 21 01:13:27.553: INFO: (15) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:1080/proxy/: ... (200; 3.018303ms)
Jan 21 01:13:27.554: INFO: (15) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:462/proxy/: tls qux (200; 4.394096ms)
Jan 21 01:13:27.557: INFO: (15) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:460/proxy/: tls baz (200; 6.9987ms)
Jan 21 01:13:27.557: INFO: (15) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname1/proxy/: foo (200; 7.345925ms)
Jan 21 01:13:27.558: INFO: (15) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 7.654202ms)
Jan 21 01:13:27.560: INFO: (15) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4/proxy/: test (200; 9.942837ms)
Jan 21 01:13:27.560: INFO: (15) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname1/proxy/: tls baz (200; 10.213562ms)
Jan 21 01:13:27.560: INFO: (15) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname2/proxy/: bar (200; 10.708425ms)
Jan 21 01:13:27.560: INFO: (15) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 10.209292ms)
Jan 21 01:13:27.560: INFO: (15) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname1/proxy/: foo (200; 10.684298ms)
Jan 21 01:13:27.560: INFO: (15) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 10.35265ms)
Jan 21 01:13:27.560: INFO: (15) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: test<... (200; 10.578915ms)
Jan 21 01:13:27.564: INFO: (16) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 3.45795ms)
Jan 21 01:13:27.565: INFO: (16) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:1080/proxy/: ... (200; 4.25036ms)
Jan 21 01:13:27.569: INFO: (16) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname1/proxy/: foo (200; 7.888555ms)
Jan 21 01:13:27.570: INFO: (16) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:1080/proxy/: test<... (200; 8.577071ms)
Jan 21 01:13:27.570: INFO: (16) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 8.750003ms)
Jan 21 01:13:27.571: INFO: (16) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: test (200; 10.205952ms)
Jan 21 01:13:27.571: INFO: (16) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:462/proxy/: tls qux (200; 10.314433ms)
Jan 21 01:13:27.571: INFO: (16) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 10.357907ms)
Jan 21 01:13:27.571: INFO: (16) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 10.421181ms)
Jan 21 01:13:27.571: INFO: (16) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:460/proxy/: tls baz (200; 10.437162ms)
Jan 21 01:13:27.572: INFO: (16) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname2/proxy/: tls qux (200; 10.59532ms)
Jan 21 01:13:27.572: INFO: (16) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname2/proxy/: bar (200; 10.747334ms)
Jan 21 01:13:27.572: INFO: (16) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname1/proxy/: foo (200; 11.375756ms)
Jan 21 01:13:27.573: INFO: (16) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname2/proxy/: bar (200; 11.74164ms)
Jan 21 01:13:27.573: INFO: (16) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname1/proxy/: tls baz (200; 11.813805ms)
Jan 21 01:13:27.579: INFO: (17) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4/proxy/: test (200; 6.307602ms)
Jan 21 01:13:27.580: INFO: (17) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:462/proxy/: tls qux (200; 6.771748ms)
Jan 21 01:13:27.580: INFO: (17) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:1080/proxy/: test<... (200; 6.837575ms)
Jan 21 01:13:27.580: INFO: (17) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 6.793111ms)
Jan 21 01:13:27.581: INFO: (17) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:1080/proxy/: ... (200; 7.566ms)
Jan 21 01:13:27.581: INFO: (17) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: test<... (200; 17.175848ms)
Jan 21 01:13:27.604: INFO: (18) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: test (200; 17.80025ms)
Jan 21 01:13:27.604: INFO: (18) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:460/proxy/: tls baz (200; 17.892951ms)
Jan 21 01:13:27.604: INFO: (18) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname1/proxy/: tls baz (200; 18.081581ms)
Jan 21 01:13:27.604: INFO: (18) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 18.005311ms)
Jan 21 01:13:27.605: INFO: (18) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:1080/proxy/: ... (200; 18.338379ms)
Jan 21 01:13:27.605: INFO: (18) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 17.987177ms)
Jan 21 01:13:27.609: INFO: (19) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:1080/proxy/: ... (200; 3.987188ms)
Jan 21 01:13:27.610: INFO: (19) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 4.60293ms)
Jan 21 01:13:27.610: INFO: (19) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:443/proxy/: test (200; 4.843316ms)
Jan 21 01:13:27.613: INFO: (19) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:462/proxy/: tls qux (200; 7.327491ms)
Jan 21 01:13:27.614: INFO: (19) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:1080/proxy/: test<... (200; 8.76941ms)
Jan 21 01:13:27.615: INFO: (19) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 9.676222ms)
Jan 21 01:13:27.615: INFO: (19) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname1/proxy/: foo (200; 10.118348ms)
Jan 21 01:13:27.615: INFO: (19) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname2/proxy/: tls qux (200; 9.878305ms)
Jan 21 01:13:27.616: INFO: (19) /api/v1/namespaces/proxy-7248/pods/proxy-service-vqmgg-5wjj4:160/proxy/: foo (200; 10.374781ms)
Jan 21 01:13:27.616: INFO: (19) /api/v1/namespaces/proxy-7248/pods/http:proxy-service-vqmgg-5wjj4:162/proxy/: bar (200; 10.242764ms)
Jan 21 01:13:27.616: INFO: (19) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname1/proxy/: foo (200; 10.540166ms)
Jan 21 01:13:27.616: INFO: (19) /api/v1/namespaces/proxy-7248/pods/https:proxy-service-vqmgg-5wjj4:460/proxy/: tls baz (200; 10.328876ms)
Jan 21 01:13:27.617: INFO: (19) /api/v1/namespaces/proxy-7248/services/http:proxy-service-vqmgg:portname2/proxy/: bar (200; 12.124516ms)
Jan 21 01:13:27.617: INFO: (19) /api/v1/namespaces/proxy-7248/services/https:proxy-service-vqmgg:tlsportname1/proxy/: tls baz (200; 12.193233ms)
Jan 21 01:13:27.617: INFO: (19) /api/v1/namespaces/proxy-7248/services/proxy-service-vqmgg:portname2/proxy/: bar (200; 12.299717ms)
STEP: deleting ReplicationController proxy-service-vqmgg in namespace proxy-7248, will wait for the garbage collector to delete the pods
Jan 21 01:13:27.677: INFO: Deleting ReplicationController proxy-service-vqmgg took: 6.276074ms
Jan 21 01:13:27.977: INFO: Terminating ReplicationController proxy-service-vqmgg pods took: 300.510737ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:13:42.378: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-7248" for this suite.

• [SLOW TEST:32.411 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:58
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod  [Conformance]","total":278,"completed":225,"skipped":3588,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:13:42.413: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: starting the proxy server
Jan 21 01:13:42.483: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:13:42.590: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-4269" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0  [Conformance]","total":278,"completed":226,"skipped":3611,"failed":0}

------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:13:42.612: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir volume type on node default medium
Jan 21 01:13:42.880: INFO: Waiting up to 5m0s for pod "pod-227da3b3-cf36-4b37-8305-df1f4e61fb8a" in namespace "emptydir-9362" to be "success or failure"
Jan 21 01:13:42.905: INFO: Pod "pod-227da3b3-cf36-4b37-8305-df1f4e61fb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 24.581145ms
Jan 21 01:13:44.912: INFO: Pod "pod-227da3b3-cf36-4b37-8305-df1f4e61fb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031722609s
Jan 21 01:13:46.921: INFO: Pod "pod-227da3b3-cf36-4b37-8305-df1f4e61fb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03991226s
Jan 21 01:13:48.929: INFO: Pod "pod-227da3b3-cf36-4b37-8305-df1f4e61fb8a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048000198s
Jan 21 01:13:50.964: INFO: Pod "pod-227da3b3-cf36-4b37-8305-df1f4e61fb8a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.082929893s
STEP: Saw pod success
Jan 21 01:13:50.964: INFO: Pod "pod-227da3b3-cf36-4b37-8305-df1f4e61fb8a" satisfied condition "success or failure"
Jan 21 01:13:50.968: INFO: Trying to get logs from node jerma-node pod pod-227da3b3-cf36-4b37-8305-df1f4e61fb8a container test-container: 
STEP: delete the pod
Jan 21 01:13:51.042: INFO: Waiting for pod pod-227da3b3-cf36-4b37-8305-df1f4e61fb8a to disappear
Jan 21 01:13:51.048: INFO: Pod pod-227da3b3-cf36-4b37-8305-df1f4e61fb8a no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:13:51.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9362" for this suite.

• [SLOW TEST:8.507 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":227,"skipped":3611,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:13:51.121: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 01:13:51.206: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Jan 21 01:13:53.039: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:13:53.069: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-8971" for this suite.
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":228,"skipped":3631,"failed":0}
SSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:13:53.109: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 21 01:13:54.755: INFO: Waiting up to 5m0s for pod "downward-api-554fd45e-17e3-4aaf-a5aa-7508462cdab2" in namespace "downward-api-5977" to be "success or failure"
Jan 21 01:13:54.797: INFO: Pod "downward-api-554fd45e-17e3-4aaf-a5aa-7508462cdab2": Phase="Pending", Reason="", readiness=false. Elapsed: 42.311615ms
Jan 21 01:13:57.787: INFO: Pod "downward-api-554fd45e-17e3-4aaf-a5aa-7508462cdab2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.031953384s
Jan 21 01:14:00.501: INFO: Pod "downward-api-554fd45e-17e3-4aaf-a5aa-7508462cdab2": Phase="Pending", Reason="", readiness=false. Elapsed: 5.746215868s
Jan 21 01:14:02.769: INFO: Pod "downward-api-554fd45e-17e3-4aaf-a5aa-7508462cdab2": Phase="Pending", Reason="", readiness=false. Elapsed: 8.013533882s
Jan 21 01:14:04.779: INFO: Pod "downward-api-554fd45e-17e3-4aaf-a5aa-7508462cdab2": Phase="Pending", Reason="", readiness=false. Elapsed: 10.023393255s
Jan 21 01:14:06.785: INFO: Pod "downward-api-554fd45e-17e3-4aaf-a5aa-7508462cdab2": Phase="Pending", Reason="", readiness=false. Elapsed: 12.030138617s
Jan 21 01:14:08.792: INFO: Pod "downward-api-554fd45e-17e3-4aaf-a5aa-7508462cdab2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.036479903s
STEP: Saw pod success
Jan 21 01:14:08.792: INFO: Pod "downward-api-554fd45e-17e3-4aaf-a5aa-7508462cdab2" satisfied condition "success or failure"
Jan 21 01:14:08.795: INFO: Trying to get logs from node jerma-node pod downward-api-554fd45e-17e3-4aaf-a5aa-7508462cdab2 container dapi-container: 
STEP: delete the pod
Jan 21 01:14:08.831: INFO: Waiting for pod downward-api-554fd45e-17e3-4aaf-a5aa-7508462cdab2 to disappear
Jan 21 01:14:08.843: INFO: Pod downward-api-554fd45e-17e3-4aaf-a5aa-7508462cdab2 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:14:08.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5977" for this suite.

• [SLOW TEST:15.749 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3641,"failed":0}
SSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:14:08.860: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 21 01:14:08.988: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6fb8ca6b-a3cc-496c-b609-ae4202b36a20" in namespace "downward-api-5853" to be "success or failure"
Jan 21 01:14:09.026: INFO: Pod "downwardapi-volume-6fb8ca6b-a3cc-496c-b609-ae4202b36a20": Phase="Pending", Reason="", readiness=false. Elapsed: 37.719689ms
Jan 21 01:14:11.087: INFO: Pod "downwardapi-volume-6fb8ca6b-a3cc-496c-b609-ae4202b36a20": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099439995s
Jan 21 01:14:13.093: INFO: Pod "downwardapi-volume-6fb8ca6b-a3cc-496c-b609-ae4202b36a20": Phase="Pending", Reason="", readiness=false. Elapsed: 4.105331226s
Jan 21 01:14:15.100: INFO: Pod "downwardapi-volume-6fb8ca6b-a3cc-496c-b609-ae4202b36a20": Phase="Pending", Reason="", readiness=false. Elapsed: 6.11248919s
Jan 21 01:14:17.138: INFO: Pod "downwardapi-volume-6fb8ca6b-a3cc-496c-b609-ae4202b36a20": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.150127595s
STEP: Saw pod success
Jan 21 01:14:17.139: INFO: Pod "downwardapi-volume-6fb8ca6b-a3cc-496c-b609-ae4202b36a20" satisfied condition "success or failure"
Jan 21 01:14:17.160: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-6fb8ca6b-a3cc-496c-b609-ae4202b36a20 container client-container: 
STEP: delete the pod
Jan 21 01:14:17.375: INFO: Waiting for pod downwardapi-volume-6fb8ca6b-a3cc-496c-b609-ae4202b36a20 to disappear
Jan 21 01:14:17.541: INFO: Pod downwardapi-volume-6fb8ca6b-a3cc-496c-b609-ae4202b36a20 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:14:17.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5853" for this suite.

• [SLOW TEST:8.702 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":230,"skipped":3646,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:14:17.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test emptydir 0644 on node default medium
Jan 21 01:14:17.696: INFO: Waiting up to 5m0s for pod "pod-7a9601d7-3ccd-466f-96aa-1147595dac93" in namespace "emptydir-3193" to be "success or failure"
Jan 21 01:14:17.742: INFO: Pod "pod-7a9601d7-3ccd-466f-96aa-1147595dac93": Phase="Pending", Reason="", readiness=false. Elapsed: 45.238656ms
Jan 21 01:14:19.750: INFO: Pod "pod-7a9601d7-3ccd-466f-96aa-1147595dac93": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053554314s
Jan 21 01:14:21.759: INFO: Pod "pod-7a9601d7-3ccd-466f-96aa-1147595dac93": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063110584s
Jan 21 01:14:23.768: INFO: Pod "pod-7a9601d7-3ccd-466f-96aa-1147595dac93": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071725877s
Jan 21 01:14:25.778: INFO: Pod "pod-7a9601d7-3ccd-466f-96aa-1147595dac93": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.081751714s
STEP: Saw pod success
Jan 21 01:14:25.778: INFO: Pod "pod-7a9601d7-3ccd-466f-96aa-1147595dac93" satisfied condition "success or failure"
Jan 21 01:14:25.784: INFO: Trying to get logs from node jerma-node pod pod-7a9601d7-3ccd-466f-96aa-1147595dac93 container test-container: 
STEP: delete the pod
Jan 21 01:14:25.869: INFO: Waiting for pod pod-7a9601d7-3ccd-466f-96aa-1147595dac93 to disappear
Jan 21 01:14:25.878: INFO: Pod pod-7a9601d7-3ccd-466f-96aa-1147595dac93 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:14:25.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3193" for this suite.

• [SLOW TEST:8.361 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":231,"skipped":3667,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:14:25.930: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:14:42.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-2626" for this suite.

• [SLOW TEST:16.518 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":232,"skipped":3678,"failed":0}
SSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:14:42.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 01:14:42.655: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Jan 21 01:14:46.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4781 create -f -'
Jan 21 01:14:48.981: INFO: stderr: ""
Jan 21 01:14:48.981: INFO: stdout: "e2e-test-crd-publish-openapi-4676-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 21 01:14:48.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4781 delete e2e-test-crd-publish-openapi-4676-crds test-cr'
Jan 21 01:14:49.127: INFO: stderr: ""
Jan 21 01:14:49.127: INFO: stdout: "e2e-test-crd-publish-openapi-4676-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Jan 21 01:14:49.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4781 apply -f -'
Jan 21 01:14:49.473: INFO: stderr: ""
Jan 21 01:14:49.473: INFO: stdout: "e2e-test-crd-publish-openapi-4676-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Jan 21 01:14:49.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4781 delete e2e-test-crd-publish-openapi-4676-crds test-cr'
Jan 21 01:14:49.579: INFO: stderr: ""
Jan 21 01:14:49.579: INFO: stdout: "e2e-test-crd-publish-openapi-4676-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Jan 21 01:14:49.579: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4676-crds'
Jan 21 01:14:50.007: INFO: stderr: ""
Jan 21 01:14:50.007: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4676-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:14:53.918: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4781" for this suite.

• [SLOW TEST:11.482 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":233,"skipped":3682,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:14:53.932: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 21 01:14:54.661: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 21 01:14:56.676: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166094, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166094, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166094, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166094, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:14:58.686: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166094, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166094, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166094, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166094, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:15:00.684: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166094, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166094, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166094, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166094, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 21 01:15:03.722: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:15:04.016: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3025" for this suite.
STEP: Destroying namespace "webhook-3025-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:10.176 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":234,"skipped":3688,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:15:04.110: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:15:51.603: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3324" for this suite.

• [SLOW TEST:47.508 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3705,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:15:51.620: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name configmap-test-volume-map-664bfbe7-88b7-4e2b-a29e-27e2b2e5cad4
STEP: Creating a pod to test consume configMaps
Jan 21 01:15:51.800: INFO: Waiting up to 5m0s for pod "pod-configmaps-207f75e6-8f73-401a-9e24-5a513b2d0031" in namespace "configmap-2810" to be "success or failure"
Jan 21 01:15:51.825: INFO: Pod "pod-configmaps-207f75e6-8f73-401a-9e24-5a513b2d0031": Phase="Pending", Reason="", readiness=false. Elapsed: 25.054614ms
Jan 21 01:15:53.832: INFO: Pod "pod-configmaps-207f75e6-8f73-401a-9e24-5a513b2d0031": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032216392s
Jan 21 01:15:55.841: INFO: Pod "pod-configmaps-207f75e6-8f73-401a-9e24-5a513b2d0031": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041072754s
Jan 21 01:15:57.850: INFO: Pod "pod-configmaps-207f75e6-8f73-401a-9e24-5a513b2d0031": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04967498s
Jan 21 01:15:59.862: INFO: Pod "pod-configmaps-207f75e6-8f73-401a-9e24-5a513b2d0031": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061854889s
STEP: Saw pod success
Jan 21 01:15:59.862: INFO: Pod "pod-configmaps-207f75e6-8f73-401a-9e24-5a513b2d0031" satisfied condition "success or failure"
Jan 21 01:15:59.867: INFO: Trying to get logs from node jerma-node pod pod-configmaps-207f75e6-8f73-401a-9e24-5a513b2d0031 container configmap-volume-test: 
STEP: delete the pod
Jan 21 01:15:59.931: INFO: Waiting for pod pod-configmaps-207f75e6-8f73-401a-9e24-5a513b2d0031 to disappear
Jan 21 01:15:59.947: INFO: Pod pod-configmaps-207f75e6-8f73-401a-9e24-5a513b2d0031 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:15:59.947: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2810" for this suite.

• [SLOW TEST:8.347 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":236,"skipped":3755,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:15:59.969: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: validating cluster-info
Jan 21 01:16:00.068: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Jan 21 01:16:00.186: INFO: stderr: ""
Jan 21 01:16:00.186: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.193:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:16:00.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7574" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info  [Conformance]","total":278,"completed":237,"skipped":3788,"failed":0}

------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:16:00.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 21 01:16:00.287: INFO: Waiting up to 5m0s for pod "downwardapi-volume-05ae7fa4-3604-4879-b586-d3b432eb7d06" in namespace "projected-1074" to be "success or failure"
Jan 21 01:16:00.347: INFO: Pod "downwardapi-volume-05ae7fa4-3604-4879-b586-d3b432eb7d06": Phase="Pending", Reason="", readiness=false. Elapsed: 59.621328ms
Jan 21 01:16:02.354: INFO: Pod "downwardapi-volume-05ae7fa4-3604-4879-b586-d3b432eb7d06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.065986417s
Jan 21 01:16:04.360: INFO: Pod "downwardapi-volume-05ae7fa4-3604-4879-b586-d3b432eb7d06": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07215957s
Jan 21 01:16:06.369: INFO: Pod "downwardapi-volume-05ae7fa4-3604-4879-b586-d3b432eb7d06": Phase="Pending", Reason="", readiness=false. Elapsed: 6.081262019s
Jan 21 01:16:08.378: INFO: Pod "downwardapi-volume-05ae7fa4-3604-4879-b586-d3b432eb7d06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.090382949s
STEP: Saw pod success
Jan 21 01:16:08.378: INFO: Pod "downwardapi-volume-05ae7fa4-3604-4879-b586-d3b432eb7d06" satisfied condition "success or failure"
Jan 21 01:16:08.384: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-05ae7fa4-3604-4879-b586-d3b432eb7d06 container client-container: 
STEP: delete the pod
Jan 21 01:16:08.444: INFO: Waiting for pod downwardapi-volume-05ae7fa4-3604-4879-b586-d3b432eb7d06 to disappear
Jan 21 01:16:08.457: INFO: Pod downwardapi-volume-05ae7fa4-3604-4879-b586-d3b432eb7d06 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:16:08.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1074" for this suite.

• [SLOW TEST:8.336 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3788,"failed":0}
SSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:16:08.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating configMap with name projected-configmap-test-volume-map-519d189c-cfe2-4bd8-ae90-ece29465deaa
STEP: Creating a pod to test consume configMaps
Jan 21 01:16:08.685: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-bd043123-7e2f-40af-b430-de4e647d85b4" in namespace "projected-503" to be "success or failure"
Jan 21 01:16:08.698: INFO: Pod "pod-projected-configmaps-bd043123-7e2f-40af-b430-de4e647d85b4": Phase="Pending", Reason="", readiness=false. Elapsed: 12.681635ms
Jan 21 01:16:10.706: INFO: Pod "pod-projected-configmaps-bd043123-7e2f-40af-b430-de4e647d85b4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020596668s
Jan 21 01:16:12.717: INFO: Pod "pod-projected-configmaps-bd043123-7e2f-40af-b430-de4e647d85b4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031265792s
Jan 21 01:16:14.727: INFO: Pod "pod-projected-configmaps-bd043123-7e2f-40af-b430-de4e647d85b4": Phase="Pending", Reason="", readiness=false. Elapsed: 6.040949942s
Jan 21 01:16:16.733: INFO: Pod "pod-projected-configmaps-bd043123-7e2f-40af-b430-de4e647d85b4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.047415324s
STEP: Saw pod success
Jan 21 01:16:16.733: INFO: Pod "pod-projected-configmaps-bd043123-7e2f-40af-b430-de4e647d85b4" satisfied condition "success or failure"
Jan 21 01:16:16.738: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-bd043123-7e2f-40af-b430-de4e647d85b4 container projected-configmap-volume-test: 
STEP: delete the pod
Jan 21 01:16:16.824: INFO: Waiting for pod pod-projected-configmaps-bd043123-7e2f-40af-b430-de4e647d85b4 to disappear
Jan 21 01:16:16.830: INFO: Pod pod-projected-configmaps-bd043123-7e2f-40af-b430-de4e647d85b4 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:16:16.830: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-503" for this suite.

• [SLOW TEST:8.312 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34
  should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3794,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:16:16.845: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 01:16:17.196: INFO: Create a RollingUpdate DaemonSet
Jan 21 01:16:17.201: INFO: Check that daemon pods launch on every node of the cluster
Jan 21 01:16:17.281: INFO: Number of nodes with available pods: 0
Jan 21 01:16:17.281: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:16:18.441: INFO: Number of nodes with available pods: 0
Jan 21 01:16:18.441: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:16:19.529: INFO: Number of nodes with available pods: 0
Jan 21 01:16:19.529: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:16:20.294: INFO: Number of nodes with available pods: 0
Jan 21 01:16:20.294: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:16:21.298: INFO: Number of nodes with available pods: 0
Jan 21 01:16:21.298: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:16:23.120: INFO: Number of nodes with available pods: 0
Jan 21 01:16:23.121: INFO: Node jerma-node is running more than one daemon pod
Jan 21 01:16:23.995: INFO: Number of nodes with available pods: 1
Jan 21 01:16:23.996: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 01:16:24.808: INFO: Number of nodes with available pods: 1
Jan 21 01:16:24.808: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 01:16:25.291: INFO: Number of nodes with available pods: 1
Jan 21 01:16:25.291: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 01:16:26.297: INFO: Number of nodes with available pods: 1
Jan 21 01:16:26.297: INFO: Node jerma-server-mvvl6gufaqub is running more than one daemon pod
Jan 21 01:16:27.296: INFO: Number of nodes with available pods: 2
Jan 21 01:16:27.296: INFO: Number of running nodes: 2, number of available pods: 2
Jan 21 01:16:27.296: INFO: Update the DaemonSet to trigger a rollout
Jan 21 01:16:27.314: INFO: Updating DaemonSet daemon-set
Jan 21 01:16:43.360: INFO: Roll back the DaemonSet before rollout is complete
Jan 21 01:16:43.366: INFO: Updating DaemonSet daemon-set
Jan 21 01:16:43.366: INFO: Make sure DaemonSet rollback is complete
Jan 21 01:16:43.382: INFO: Wrong image for pod: daemon-set-2hffz. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 21 01:16:43.382: INFO: Pod daemon-set-2hffz is not available
Jan 21 01:16:44.652: INFO: Wrong image for pod: daemon-set-2hffz. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 21 01:16:44.652: INFO: Pod daemon-set-2hffz is not available
Jan 21 01:16:45.471: INFO: Wrong image for pod: daemon-set-2hffz. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Jan 21 01:16:45.471: INFO: Pod daemon-set-2hffz is not available
Jan 21 01:16:46.541: INFO: Pod daemon-set-ccqq8 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-922, will wait for the garbage collector to delete the pods
Jan 21 01:16:46.620: INFO: Deleting DaemonSet.extensions daemon-set took: 10.490231ms
Jan 21 01:16:47.220: INFO: Terminating DaemonSet.extensions daemon-set pods took: 600.721473ms
Jan 21 01:17:02.428: INFO: Number of nodes with available pods: 0
Jan 21 01:17:02.428: INFO: Number of running nodes: 0, number of available pods: 0
Jan 21 01:17:02.431: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-922/daemonsets","resourceVersion":"3309354"},"items":null}

Jan 21 01:17:02.434: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-922/pods","resourceVersion":"3309354"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:17:02.444: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-922" for this suite.

• [SLOW TEST:45.608 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":240,"skipped":3824,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:17:02.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Jan 21 01:17:11.673: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:17:12.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-5839" for this suite.

• [SLOW TEST:10.273 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":241,"skipped":3840,"failed":0}
SSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts 
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:17:12.729: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Jan 21 01:17:36.978: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7471 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 21 01:17:36.978: INFO: >>> kubeConfig: /root/.kube/config
I0121 01:17:37.034360       8 log.go:172] (0xc00490a4d0) (0xc001523540) Create stream
I0121 01:17:37.034587       8 log.go:172] (0xc00490a4d0) (0xc001523540) Stream added, broadcasting: 1
I0121 01:17:37.042703       8 log.go:172] (0xc00490a4d0) Reply frame received for 1
I0121 01:17:37.042743       8 log.go:172] (0xc00490a4d0) (0xc0023495e0) Create stream
I0121 01:17:37.042750       8 log.go:172] (0xc00490a4d0) (0xc0023495e0) Stream added, broadcasting: 3
I0121 01:17:37.044317       8 log.go:172] (0xc00490a4d0) Reply frame received for 3
I0121 01:17:37.044347       8 log.go:172] (0xc00490a4d0) (0xc000ad21e0) Create stream
I0121 01:17:37.044358       8 log.go:172] (0xc00490a4d0) (0xc000ad21e0) Stream added, broadcasting: 5
I0121 01:17:37.045864       8 log.go:172] (0xc00490a4d0) Reply frame received for 5
I0121 01:17:37.131888       8 log.go:172] (0xc00490a4d0) Data frame received for 3
I0121 01:17:37.131962       8 log.go:172] (0xc0023495e0) (3) Data frame handling
I0121 01:17:37.132004       8 log.go:172] (0xc0023495e0) (3) Data frame sent
I0121 01:17:37.197168       8 log.go:172] (0xc00490a4d0) Data frame received for 1
I0121 01:17:37.197200       8 log.go:172] (0xc001523540) (1) Data frame handling
I0121 01:17:37.197239       8 log.go:172] (0xc001523540) (1) Data frame sent
I0121 01:17:37.197296       8 log.go:172] (0xc00490a4d0) (0xc001523540) Stream removed, broadcasting: 1
I0121 01:17:37.197486       8 log.go:172] (0xc00490a4d0) (0xc0023495e0) Stream removed, broadcasting: 3
I0121 01:17:37.197681       8 log.go:172] (0xc00490a4d0) (0xc000ad21e0) Stream removed, broadcasting: 5
I0121 01:17:37.197730       8 log.go:172] (0xc00490a4d0) Go away received
I0121 01:17:37.197790       8 log.go:172] (0xc00490a4d0) (0xc001523540) Stream removed, broadcasting: 1
I0121 01:17:37.197811       8 log.go:172] (0xc00490a4d0) (0xc0023495e0) Stream removed, broadcasting: 3
I0121 01:17:37.197829       8 log.go:172] (0xc00490a4d0) (0xc000ad21e0) Stream removed, broadcasting: 5
Jan 21 01:17:37.197: INFO: Exec stderr: ""
Jan 21 01:17:37.198: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7471 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 21 01:17:37.198: INFO: >>> kubeConfig: /root/.kube/config
I0121 01:17:37.247470       8 log.go:172] (0xc002c36d10) (0xc000ad3680) Create stream
I0121 01:17:37.247541       8 log.go:172] (0xc002c36d10) (0xc000ad3680) Stream added, broadcasting: 1
I0121 01:17:37.252766       8 log.go:172] (0xc002c36d10) Reply frame received for 1
I0121 01:17:37.252824       8 log.go:172] (0xc002c36d10) (0xc00054ebe0) Create stream
I0121 01:17:37.252848       8 log.go:172] (0xc002c36d10) (0xc00054ebe0) Stream added, broadcasting: 3
I0121 01:17:37.256057       8 log.go:172] (0xc002c36d10) Reply frame received for 3
I0121 01:17:37.256100       8 log.go:172] (0xc002c36d10) (0xc000ffd860) Create stream
I0121 01:17:37.256120       8 log.go:172] (0xc002c36d10) (0xc000ffd860) Stream added, broadcasting: 5
I0121 01:17:37.258039       8 log.go:172] (0xc002c36d10) Reply frame received for 5
I0121 01:17:37.332638       8 log.go:172] (0xc002c36d10) Data frame received for 3
I0121 01:17:37.333041       8 log.go:172] (0xc00054ebe0) (3) Data frame handling
I0121 01:17:37.333125       8 log.go:172] (0xc00054ebe0) (3) Data frame sent
I0121 01:17:37.409155       8 log.go:172] (0xc002c36d10) (0xc00054ebe0) Stream removed, broadcasting: 3
I0121 01:17:37.409454       8 log.go:172] (0xc002c36d10) Data frame received for 1
I0121 01:17:37.409568       8 log.go:172] (0xc000ad3680) (1) Data frame handling
I0121 01:17:37.409624       8 log.go:172] (0xc002c36d10) (0xc000ffd860) Stream removed, broadcasting: 5
I0121 01:17:37.409684       8 log.go:172] (0xc000ad3680) (1) Data frame sent
I0121 01:17:37.409718       8 log.go:172] (0xc002c36d10) (0xc000ad3680) Stream removed, broadcasting: 1
I0121 01:17:37.409744       8 log.go:172] (0xc002c36d10) Go away received
I0121 01:17:37.410038       8 log.go:172] (0xc002c36d10) (0xc000ad3680) Stream removed, broadcasting: 1
I0121 01:17:37.410060       8 log.go:172] (0xc002c36d10) (0xc00054ebe0) Stream removed, broadcasting: 3
I0121 01:17:37.410070       8 log.go:172] (0xc002c36d10) (0xc000ffd860) Stream removed, broadcasting: 5
Jan 21 01:17:37.410: INFO: Exec stderr: ""
Jan 21 01:17:37.410: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7471 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 21 01:17:37.410: INFO: >>> kubeConfig: /root/.kube/config
I0121 01:17:37.457502       8 log.go:172] (0xc0046bc420) (0xc002349860) Create stream
I0121 01:17:37.457609       8 log.go:172] (0xc0046bc420) (0xc002349860) Stream added, broadcasting: 1
I0121 01:17:37.462971       8 log.go:172] (0xc0046bc420) Reply frame received for 1
I0121 01:17:37.463029       8 log.go:172] (0xc0046bc420) (0xc0012fc0a0) Create stream
I0121 01:17:37.463041       8 log.go:172] (0xc0046bc420) (0xc0012fc0a0) Stream added, broadcasting: 3
I0121 01:17:37.464663       8 log.go:172] (0xc0046bc420) Reply frame received for 3
I0121 01:17:37.464733       8 log.go:172] (0xc0046bc420) (0xc001523720) Create stream
I0121 01:17:37.464757       8 log.go:172] (0xc0046bc420) (0xc001523720) Stream added, broadcasting: 5
I0121 01:17:37.466749       8 log.go:172] (0xc0046bc420) Reply frame received for 5
I0121 01:17:37.538469       8 log.go:172] (0xc0046bc420) Data frame received for 3
I0121 01:17:37.538523       8 log.go:172] (0xc0012fc0a0) (3) Data frame handling
I0121 01:17:37.538586       8 log.go:172] (0xc0012fc0a0) (3) Data frame sent
I0121 01:17:37.598956       8 log.go:172] (0xc0046bc420) Data frame received for 1
I0121 01:17:37.599140       8 log.go:172] (0xc002349860) (1) Data frame handling
I0121 01:17:37.599198       8 log.go:172] (0xc002349860) (1) Data frame sent
I0121 01:17:37.599239       8 log.go:172] (0xc0046bc420) (0xc002349860) Stream removed, broadcasting: 1
I0121 01:17:37.599306       8 log.go:172] (0xc0046bc420) (0xc0012fc0a0) Stream removed, broadcasting: 3
I0121 01:17:37.599376       8 log.go:172] (0xc0046bc420) (0xc001523720) Stream removed, broadcasting: 5
I0121 01:17:37.599513       8 log.go:172] (0xc0046bc420) (0xc002349860) Stream removed, broadcasting: 1
I0121 01:17:37.599525       8 log.go:172] (0xc0046bc420) (0xc0012fc0a0) Stream removed, broadcasting: 3
I0121 01:17:37.599544       8 log.go:172] (0xc0046bc420) (0xc001523720) Stream removed, broadcasting: 5
Jan 21 01:17:37.599: INFO: Exec stderr: ""
I0121 01:17:37.599740       8 log.go:172] (0xc0046bc420) Go away received
Jan 21 01:17:37.599: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7471 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 21 01:17:37.599: INFO: >>> kubeConfig: /root/.kube/config
I0121 01:17:37.644999       8 log.go:172] (0xc0046bca50) (0xc002349ae0) Create stream
I0121 01:17:37.645128       8 log.go:172] (0xc0046bca50) (0xc002349ae0) Stream added, broadcasting: 1
I0121 01:17:37.651160       8 log.go:172] (0xc0046bca50) Reply frame received for 1
I0121 01:17:37.651235       8 log.go:172] (0xc0046bca50) (0xc002349b80) Create stream
I0121 01:17:37.651249       8 log.go:172] (0xc0046bca50) (0xc002349b80) Stream added, broadcasting: 3
I0121 01:17:37.653856       8 log.go:172] (0xc0046bca50) Reply frame received for 3
I0121 01:17:37.653901       8 log.go:172] (0xc0046bca50) (0xc0012fc320) Create stream
I0121 01:17:37.653915       8 log.go:172] (0xc0046bca50) (0xc0012fc320) Stream added, broadcasting: 5
I0121 01:17:37.655085       8 log.go:172] (0xc0046bca50) Reply frame received for 5
I0121 01:17:37.740580       8 log.go:172] (0xc0046bca50) Data frame received for 3
I0121 01:17:37.740905       8 log.go:172] (0xc002349b80) (3) Data frame handling
I0121 01:17:37.741424       8 log.go:172] (0xc002349b80) (3) Data frame sent
I0121 01:17:37.895797       8 log.go:172] (0xc0046bca50) (0xc002349b80) Stream removed, broadcasting: 3
I0121 01:17:37.896440       8 log.go:172] (0xc0046bca50) Data frame received for 1
I0121 01:17:37.896511       8 log.go:172] (0xc002349ae0) (1) Data frame handling
I0121 01:17:37.896621       8 log.go:172] (0xc002349ae0) (1) Data frame sent
I0121 01:17:37.896707       8 log.go:172] (0xc0046bca50) (0xc002349ae0) Stream removed, broadcasting: 1
I0121 01:17:37.897958       8 log.go:172] (0xc0046bca50) (0xc0012fc320) Stream removed, broadcasting: 5
I0121 01:17:37.898385       8 log.go:172] (0xc0046bca50) (0xc002349ae0) Stream removed, broadcasting: 1
I0121 01:17:37.898429       8 log.go:172] (0xc0046bca50) (0xc002349b80) Stream removed, broadcasting: 3
I0121 01:17:37.898452       8 log.go:172] (0xc0046bca50) (0xc0012fc320) Stream removed, broadcasting: 5
Jan 21 01:17:37.899: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Jan 21 01:17:37.899: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7471 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 21 01:17:37.899: INFO: >>> kubeConfig: /root/.kube/config
I0121 01:17:37.904828       8 log.go:172] (0xc0046bca50) Go away received
I0121 01:17:37.983358       8 log.go:172] (0xc002c36fd0) (0xc0012fc500) Create stream
I0121 01:17:37.983713       8 log.go:172] (0xc002c36fd0) (0xc0012fc500) Stream added, broadcasting: 1
I0121 01:17:37.996793       8 log.go:172] (0xc002c36fd0) Reply frame received for 1
I0121 01:17:37.997075       8 log.go:172] (0xc002c36fd0) (0xc001523d60) Create stream
I0121 01:17:37.997097       8 log.go:172] (0xc002c36fd0) (0xc001523d60) Stream added, broadcasting: 3
I0121 01:17:38.034680       8 log.go:172] (0xc002c36fd0) Reply frame received for 3
I0121 01:17:38.034911       8 log.go:172] (0xc002c36fd0) (0xc0012fc5a0) Create stream
I0121 01:17:38.034924       8 log.go:172] (0xc002c36fd0) (0xc0012fc5a0) Stream added, broadcasting: 5
I0121 01:17:38.036743       8 log.go:172] (0xc002c36fd0) Reply frame received for 5
I0121 01:17:38.099019       8 log.go:172] (0xc002c36fd0) Data frame received for 3
I0121 01:17:38.099080       8 log.go:172] (0xc001523d60) (3) Data frame handling
I0121 01:17:38.099103       8 log.go:172] (0xc001523d60) (3) Data frame sent
I0121 01:17:38.161782       8 log.go:172] (0xc002c36fd0) (0xc0012fc5a0) Stream removed, broadcasting: 5
I0121 01:17:38.162022       8 log.go:172] (0xc002c36fd0) Data frame received for 1
I0121 01:17:38.162065       8 log.go:172] (0xc0012fc500) (1) Data frame handling
I0121 01:17:38.162144       8 log.go:172] (0xc0012fc500) (1) Data frame sent
I0121 01:17:38.162493       8 log.go:172] (0xc002c36fd0) (0xc001523d60) Stream removed, broadcasting: 3
I0121 01:17:38.162601       8 log.go:172] (0xc002c36fd0) (0xc0012fc500) Stream removed, broadcasting: 1
I0121 01:17:38.163243       8 log.go:172] (0xc002c36fd0) Go away received
I0121 01:17:38.163642       8 log.go:172] (0xc002c36fd0) (0xc0012fc500) Stream removed, broadcasting: 1
I0121 01:17:38.163731       8 log.go:172] (0xc002c36fd0) (0xc001523d60) Stream removed, broadcasting: 3
I0121 01:17:38.163749       8 log.go:172] (0xc002c36fd0) (0xc0012fc5a0) Stream removed, broadcasting: 5
Jan 21 01:17:38.163: INFO: Exec stderr: ""
Jan 21 01:17:38.164: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7471 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 21 01:17:38.164: INFO: >>> kubeConfig: /root/.kube/config
I0121 01:17:38.212891       8 log.go:172] (0xc0046bd080) (0xc002349ea0) Create stream
I0121 01:17:38.213030       8 log.go:172] (0xc0046bd080) (0xc002349ea0) Stream added, broadcasting: 1
I0121 01:17:38.217234       8 log.go:172] (0xc0046bd080) Reply frame received for 1
I0121 01:17:38.217336       8 log.go:172] (0xc0046bd080) (0xc00054fea0) Create stream
I0121 01:17:38.217348       8 log.go:172] (0xc0046bd080) (0xc00054fea0) Stream added, broadcasting: 3
I0121 01:17:38.218801       8 log.go:172] (0xc0046bd080) Reply frame received for 3
I0121 01:17:38.218831       8 log.go:172] (0xc0046bd080) (0xc002349f40) Create stream
I0121 01:17:38.218844       8 log.go:172] (0xc0046bd080) (0xc002349f40) Stream added, broadcasting: 5
I0121 01:17:38.219755       8 log.go:172] (0xc0046bd080) Reply frame received for 5
I0121 01:17:38.277211       8 log.go:172] (0xc0046bd080) Data frame received for 3
I0121 01:17:38.277227       8 log.go:172] (0xc00054fea0) (3) Data frame handling
I0121 01:17:38.277242       8 log.go:172] (0xc00054fea0) (3) Data frame sent
I0121 01:17:38.364677       8 log.go:172] (0xc0046bd080) Data frame received for 1
I0121 01:17:38.364815       8 log.go:172] (0xc0046bd080) (0xc00054fea0) Stream removed, broadcasting: 3
I0121 01:17:38.364883       8 log.go:172] (0xc002349ea0) (1) Data frame handling
I0121 01:17:38.364909       8 log.go:172] (0xc002349ea0) (1) Data frame sent
I0121 01:17:38.364956       8 log.go:172] (0xc0046bd080) (0xc002349f40) Stream removed, broadcasting: 5
I0121 01:17:38.364981       8 log.go:172] (0xc0046bd080) (0xc002349ea0) Stream removed, broadcasting: 1
I0121 01:17:38.365005       8 log.go:172] (0xc0046bd080) Go away received
I0121 01:17:38.365394       8 log.go:172] (0xc0046bd080) (0xc002349ea0) Stream removed, broadcasting: 1
I0121 01:17:38.365422       8 log.go:172] (0xc0046bd080) (0xc00054fea0) Stream removed, broadcasting: 3
I0121 01:17:38.365464       8 log.go:172] (0xc0046bd080) (0xc002349f40) Stream removed, broadcasting: 5
Jan 21 01:17:38.365: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Jan 21 01:17:38.365: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7471 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 21 01:17:38.365: INFO: >>> kubeConfig: /root/.kube/config
I0121 01:17:38.401819       8 log.go:172] (0xc00490ae70) (0xc0002d72c0) Create stream
I0121 01:17:38.401890       8 log.go:172] (0xc00490ae70) (0xc0002d72c0) Stream added, broadcasting: 1
I0121 01:17:38.407722       8 log.go:172] (0xc00490ae70) Reply frame received for 1
I0121 01:17:38.407828       8 log.go:172] (0xc00490ae70) (0xc000ea6000) Create stream
I0121 01:17:38.407842       8 log.go:172] (0xc00490ae70) (0xc000ea6000) Stream added, broadcasting: 3
I0121 01:17:38.409815       8 log.go:172] (0xc00490ae70) Reply frame received for 3
I0121 01:17:38.409870       8 log.go:172] (0xc00490ae70) (0xc0012a2000) Create stream
I0121 01:17:38.409882       8 log.go:172] (0xc00490ae70) (0xc0012a2000) Stream added, broadcasting: 5
I0121 01:17:38.411106       8 log.go:172] (0xc00490ae70) Reply frame received for 5
I0121 01:17:38.477770       8 log.go:172] (0xc00490ae70) Data frame received for 3
I0121 01:17:38.477956       8 log.go:172] (0xc000ea6000) (3) Data frame handling
I0121 01:17:38.478076       8 log.go:172] (0xc000ea6000) (3) Data frame sent
I0121 01:17:38.562809       8 log.go:172] (0xc00490ae70) (0xc000ea6000) Stream removed, broadcasting: 3
I0121 01:17:38.563429       8 log.go:172] (0xc00490ae70) Data frame received for 1
I0121 01:17:38.563536       8 log.go:172] (0xc00490ae70) (0xc0012a2000) Stream removed, broadcasting: 5
I0121 01:17:38.563587       8 log.go:172] (0xc0002d72c0) (1) Data frame handling
I0121 01:17:38.563622       8 log.go:172] (0xc0002d72c0) (1) Data frame sent
I0121 01:17:38.563636       8 log.go:172] (0xc00490ae70) (0xc0002d72c0) Stream removed, broadcasting: 1
I0121 01:17:38.563669       8 log.go:172] (0xc00490ae70) Go away received
I0121 01:17:38.564623       8 log.go:172] (0xc00490ae70) (0xc0002d72c0) Stream removed, broadcasting: 1
I0121 01:17:38.564826       8 log.go:172] (0xc00490ae70) (0xc000ea6000) Stream removed, broadcasting: 3
I0121 01:17:38.564890       8 log.go:172] (0xc00490ae70) (0xc0012a2000) Stream removed, broadcasting: 5
Jan 21 01:17:38.564: INFO: Exec stderr: ""
Jan 21 01:17:38.565: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7471 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 21 01:17:38.565: INFO: >>> kubeConfig: /root/.kube/config
I0121 01:17:38.616534       8 log.go:172] (0xc00490b4a0) (0xc00185a000) Create stream
I0121 01:17:38.616701       8 log.go:172] (0xc00490b4a0) (0xc00185a000) Stream added, broadcasting: 1
I0121 01:17:38.622316       8 log.go:172] (0xc00490b4a0) Reply frame received for 1
I0121 01:17:38.622401       8 log.go:172] (0xc00490b4a0) (0xc00133a000) Create stream
I0121 01:17:38.622416       8 log.go:172] (0xc00490b4a0) (0xc00133a000) Stream added, broadcasting: 3
I0121 01:17:38.624367       8 log.go:172] (0xc00490b4a0) Reply frame received for 3
I0121 01:17:38.624424       8 log.go:172] (0xc00490b4a0) (0xc0012a20a0) Create stream
I0121 01:17:38.624439       8 log.go:172] (0xc00490b4a0) (0xc0012a20a0) Stream added, broadcasting: 5
I0121 01:17:38.626051       8 log.go:172] (0xc00490b4a0) Reply frame received for 5
I0121 01:17:38.701901       8 log.go:172] (0xc00490b4a0) Data frame received for 3
I0121 01:17:38.701968       8 log.go:172] (0xc00133a000) (3) Data frame handling
I0121 01:17:38.701989       8 log.go:172] (0xc00133a000) (3) Data frame sent
I0121 01:17:38.782724       8 log.go:172] (0xc00490b4a0) Data frame received for 1
I0121 01:17:38.782827       8 log.go:172] (0xc00185a000) (1) Data frame handling
I0121 01:17:38.782867       8 log.go:172] (0xc00185a000) (1) Data frame sent
I0121 01:17:38.783954       8 log.go:172] (0xc00490b4a0) (0xc00185a000) Stream removed, broadcasting: 1
I0121 01:17:38.784105       8 log.go:172] (0xc00490b4a0) (0xc00133a000) Stream removed, broadcasting: 3
I0121 01:17:38.784146       8 log.go:172] (0xc00490b4a0) (0xc0012a20a0) Stream removed, broadcasting: 5
I0121 01:17:38.784271       8 log.go:172] (0xc00490b4a0) Go away received
I0121 01:17:38.784401       8 log.go:172] (0xc00490b4a0) (0xc00185a000) Stream removed, broadcasting: 1
I0121 01:17:38.784418       8 log.go:172] (0xc00490b4a0) (0xc00133a000) Stream removed, broadcasting: 3
I0121 01:17:38.784429       8 log.go:172] (0xc00490b4a0) (0xc0012a20a0) Stream removed, broadcasting: 5
Jan 21 01:17:38.784: INFO: Exec stderr: ""
Jan 21 01:17:38.784: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-7471 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 21 01:17:38.784: INFO: >>> kubeConfig: /root/.kube/config
I0121 01:17:38.830507       8 log.go:172] (0xc00490bad0) (0xc00185a780) Create stream
I0121 01:17:38.830711       8 log.go:172] (0xc00490bad0) (0xc00185a780) Stream added, broadcasting: 1
I0121 01:17:38.834317       8 log.go:172] (0xc00490bad0) Reply frame received for 1
I0121 01:17:38.834362       8 log.go:172] (0xc00490bad0) (0xc000ea6280) Create stream
I0121 01:17:38.834373       8 log.go:172] (0xc00490bad0) (0xc000ea6280) Stream added, broadcasting: 3
I0121 01:17:38.837024       8 log.go:172] (0xc00490bad0) Reply frame received for 3
I0121 01:17:38.837057       8 log.go:172] (0xc00490bad0) (0xc000ea6460) Create stream
I0121 01:17:38.837067       8 log.go:172] (0xc00490bad0) (0xc000ea6460) Stream added, broadcasting: 5
I0121 01:17:38.838491       8 log.go:172] (0xc00490bad0) Reply frame received for 5
I0121 01:17:38.936357       8 log.go:172] (0xc00490bad0) Data frame received for 3
I0121 01:17:38.936517       8 log.go:172] (0xc000ea6280) (3) Data frame handling
I0121 01:17:38.937024       8 log.go:172] (0xc000ea6280) (3) Data frame sent
I0121 01:17:39.013294       8 log.go:172] (0xc00490bad0) Data frame received for 1
I0121 01:17:39.013517       8 log.go:172] (0xc00185a780) (1) Data frame handling
I0121 01:17:39.013599       8 log.go:172] (0xc00185a780) (1) Data frame sent
I0121 01:17:39.014476       8 log.go:172] (0xc00490bad0) (0xc000ea6460) Stream removed, broadcasting: 5
I0121 01:17:39.014792       8 log.go:172] (0xc00490bad0) (0xc00185a780) Stream removed, broadcasting: 1
I0121 01:17:39.014899       8 log.go:172] (0xc00490bad0) (0xc000ea6280) Stream removed, broadcasting: 3
I0121 01:17:39.014976       8 log.go:172] (0xc00490bad0) Go away received
I0121 01:17:39.015329       8 log.go:172] (0xc00490bad0) (0xc00185a780) Stream removed, broadcasting: 1
I0121 01:17:39.015440       8 log.go:172] (0xc00490bad0) (0xc000ea6280) Stream removed, broadcasting: 3
I0121 01:17:39.015464       8 log.go:172] (0xc00490bad0) (0xc000ea6460) Stream removed, broadcasting: 5
Jan 21 01:17:39.015: INFO: Exec stderr: ""
Jan 21 01:17:39.015: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-7471 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 21 01:17:39.015: INFO: >>> kubeConfig: /root/.kube/config
I0121 01:17:39.050994       8 log.go:172] (0xc002c37600) (0xc0012fcbe0) Create stream
I0121 01:17:39.051087       8 log.go:172] (0xc002c37600) (0xc0012fcbe0) Stream added, broadcasting: 1
I0121 01:17:39.055435       8 log.go:172] (0xc002c37600) Reply frame received for 1
I0121 01:17:39.055622       8 log.go:172] (0xc002c37600) (0xc0012a21e0) Create stream
I0121 01:17:39.055640       8 log.go:172] (0xc002c37600) (0xc0012a21e0) Stream added, broadcasting: 3
I0121 01:17:39.057166       8 log.go:172] (0xc002c37600) Reply frame received for 3
I0121 01:17:39.057202       8 log.go:172] (0xc002c37600) (0xc000ea65a0) Create stream
I0121 01:17:39.057214       8 log.go:172] (0xc002c37600) (0xc000ea65a0) Stream added, broadcasting: 5
I0121 01:17:39.058491       8 log.go:172] (0xc002c37600) Reply frame received for 5
I0121 01:17:39.131410       8 log.go:172] (0xc002c37600) Data frame received for 3
I0121 01:17:39.131491       8 log.go:172] (0xc0012a21e0) (3) Data frame handling
I0121 01:17:39.131516       8 log.go:172] (0xc0012a21e0) (3) Data frame sent
I0121 01:17:39.192101       8 log.go:172] (0xc002c37600) Data frame received for 1
I0121 01:17:39.192202       8 log.go:172] (0xc0012fcbe0) (1) Data frame handling
I0121 01:17:39.192223       8 log.go:172] (0xc0012fcbe0) (1) Data frame sent
I0121 01:17:39.192265       8 log.go:172] (0xc002c37600) (0xc0012fcbe0) Stream removed, broadcasting: 1
I0121 01:17:39.192512       8 log.go:172] (0xc002c37600) (0xc0012a21e0) Stream removed, broadcasting: 3
I0121 01:17:39.192612       8 log.go:172] (0xc002c37600) (0xc000ea65a0) Stream removed, broadcasting: 5
I0121 01:17:39.192668       8 log.go:172] (0xc002c37600) Go away received
I0121 01:17:39.192803       8 log.go:172] (0xc002c37600) (0xc0012fcbe0) Stream removed, broadcasting: 1
I0121 01:17:39.192833       8 log.go:172] (0xc002c37600) (0xc0012a21e0) Stream removed, broadcasting: 3
I0121 01:17:39.192846       8 log.go:172] (0xc002c37600) (0xc000ea65a0) Stream removed, broadcasting: 5
Jan 21 01:17:39.192: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:17:39.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-7471" for this suite.

• [SLOW TEST:26.481 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":242,"skipped":3851,"failed":0}
SSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:17:39.211: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 01:17:39.278: INFO: Creating ReplicaSet my-hostname-basic-6aee9b03-77c0-4075-b9f8-ad24dfb92f34
Jan 21 01:17:39.305: INFO: Pod name my-hostname-basic-6aee9b03-77c0-4075-b9f8-ad24dfb92f34: Found 0 pods out of 1
Jan 21 01:17:44.428: INFO: Pod name my-hostname-basic-6aee9b03-77c0-4075-b9f8-ad24dfb92f34: Found 1 pods out of 1
Jan 21 01:17:44.428: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-6aee9b03-77c0-4075-b9f8-ad24dfb92f34" is running
Jan 21 01:17:46.460: INFO: Pod "my-hostname-basic-6aee9b03-77c0-4075-b9f8-ad24dfb92f34-57lgs" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-21 01:17:39 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-21 01:17:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-6aee9b03-77c0-4075-b9f8-ad24dfb92f34]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-21 01:17:39 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-6aee9b03-77c0-4075-b9f8-ad24dfb92f34]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-01-21 01:17:39 +0000 UTC Reason: Message:}])
Jan 21 01:17:46.460: INFO: Trying to dial the pod
Jan 21 01:17:51.485: INFO: Controller my-hostname-basic-6aee9b03-77c0-4075-b9f8-ad24dfb92f34: Got expected result from replica 1 [my-hostname-basic-6aee9b03-77c0-4075-b9f8-ad24dfb92f34-57lgs]: "my-hostname-basic-6aee9b03-77c0-4075-b9f8-ad24dfb92f34-57lgs", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:17:51.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-4350" for this suite.

• [SLOW TEST:12.312 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image  [Conformance]","total":278,"completed":243,"skipped":3859,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:17:51.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Jan 21 01:17:51.657: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6622 /api/v1/namespaces/watch-6622/configmaps/e2e-watch-test-label-changed 67aa8892-44ed-4da9-8f61-3c21ecfcdafe 3309611 0 2020-01-21 01:17:51 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Jan 21 01:17:51.657: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6622 /api/v1/namespaces/watch-6622/configmaps/e2e-watch-test-label-changed 67aa8892-44ed-4da9-8f61-3c21ecfcdafe 3309612 0 2020-01-21 01:17:51 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Jan 21 01:17:51.657: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6622 /api/v1/namespaces/watch-6622/configmaps/e2e-watch-test-label-changed 67aa8892-44ed-4da9-8f61-3c21ecfcdafe 3309613 0 2020-01-21 01:17:51 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Jan 21 01:18:02.177: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6622 /api/v1/namespaces/watch-6622/configmaps/e2e-watch-test-label-changed 67aa8892-44ed-4da9-8f61-3c21ecfcdafe 3309651 0 2020-01-21 01:17:51 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 21 01:18:02.178: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6622 /api/v1/namespaces/watch-6622/configmaps/e2e-watch-test-label-changed 67aa8892-44ed-4da9-8f61-3c21ecfcdafe 3309652 0 2020-01-21 01:17:51 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Jan 21 01:18:02.178: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-6622 /api/v1/namespaces/watch-6622/configmaps/e2e-watch-test-label-changed 67aa8892-44ed-4da9-8f61-3c21ecfcdafe 3309653 0 2020-01-21 01:17:51 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:18:02.178: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-6622" for this suite.

• [SLOW TEST:10.668 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":244,"skipped":3906,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:18:02.195: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 21 01:18:02.941: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 21 01:18:05.384: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166282, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166282, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166283, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166282, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:18:07.393: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166282, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166282, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166283, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166282, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:18:09.401: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166282, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166282, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166283, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166282, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:18:11.392: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166282, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166282, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166283, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166282, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 21 01:18:14.439: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 01:18:14.450: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-5086-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:18:16.041: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2017" for this suite.
STEP: Destroying namespace "webhook-2017-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:14.156 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":245,"skipped":3932,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:18:16.353: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 01:18:16.569: INFO: Waiting up to 5m0s for pod "busybox-user-65534-829075e9-c919-4aa0-87b1-cabaa6f4ef38" in namespace "security-context-test-3590" to be "success or failure"
Jan 21 01:18:16.598: INFO: Pod "busybox-user-65534-829075e9-c919-4aa0-87b1-cabaa6f4ef38": Phase="Pending", Reason="", readiness=false. Elapsed: 28.361158ms
Jan 21 01:18:18.800: INFO: Pod "busybox-user-65534-829075e9-c919-4aa0-87b1-cabaa6f4ef38": Phase="Pending", Reason="", readiness=false. Elapsed: 2.230863013s
Jan 21 01:18:21.109: INFO: Pod "busybox-user-65534-829075e9-c919-4aa0-87b1-cabaa6f4ef38": Phase="Pending", Reason="", readiness=false. Elapsed: 4.539114703s
Jan 21 01:18:23.114: INFO: Pod "busybox-user-65534-829075e9-c919-4aa0-87b1-cabaa6f4ef38": Phase="Pending", Reason="", readiness=false. Elapsed: 6.544074957s
Jan 21 01:18:25.206: INFO: Pod "busybox-user-65534-829075e9-c919-4aa0-87b1-cabaa6f4ef38": Phase="Pending", Reason="", readiness=false. Elapsed: 8.636315375s
Jan 21 01:18:27.211: INFO: Pod "busybox-user-65534-829075e9-c919-4aa0-87b1-cabaa6f4ef38": Phase="Pending", Reason="", readiness=false. Elapsed: 10.641176508s
Jan 21 01:18:29.220: INFO: Pod "busybox-user-65534-829075e9-c919-4aa0-87b1-cabaa6f4ef38": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.650014971s
Jan 21 01:18:29.220: INFO: Pod "busybox-user-65534-829075e9-c919-4aa0-87b1-cabaa6f4ef38" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:18:29.220: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3590" for this suite.

• [SLOW TEST:12.880 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":3958,"failed":0}
SSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:18:29.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Jan 21 01:18:29.558: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4466 /api/v1/namespaces/watch-4466/configmaps/e2e-watch-test-resource-version ca15bd85-2094-46d7-b12a-439b8023b3a3 3309819 0 2020-01-21 01:18:29 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Jan 21 01:18:29.559: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4466 /api/v1/namespaces/watch-4466/configmaps/e2e-watch-test-resource-version ca15bd85-2094-46d7-b12a-439b8023b3a3 3309820 0 2020-01-21 01:18:29 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:18:29.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4466" for this suite.
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":247,"skipped":3964,"failed":0}
SSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:18:29.578: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:18:37.989: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1141" for this suite.

• [SLOW TEST:8.426 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":3969,"failed":0}
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:18:38.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Jan 21 01:18:38.109: INFO: >>> kubeConfig: /root/.kube/config
Jan 21 01:18:41.884: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:18:56.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3176" for this suite.

• [SLOW TEST:18.959 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":249,"skipped":3989,"failed":0}
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:18:56.964: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 01:18:57.157: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:19:03.443: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1597" for this suite.

• [SLOW TEST:6.507 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":3989,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:19:03.472: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-secret-xzp6
STEP: Creating a pod to test atomic-volume-subpath
Jan 21 01:19:03.646: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-xzp6" in namespace "subpath-1500" to be "success or failure"
Jan 21 01:19:03.718: INFO: Pod "pod-subpath-test-secret-xzp6": Phase="Pending", Reason="", readiness=false. Elapsed: 71.712505ms
Jan 21 01:19:05.727: INFO: Pod "pod-subpath-test-secret-xzp6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080809918s
Jan 21 01:19:07.735: INFO: Pod "pod-subpath-test-secret-xzp6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.088778842s
Jan 21 01:19:09.744: INFO: Pod "pod-subpath-test-secret-xzp6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.097512906s
Jan 21 01:19:11.750: INFO: Pod "pod-subpath-test-secret-xzp6": Phase="Running", Reason="", readiness=true. Elapsed: 8.103516695s
Jan 21 01:19:13.759: INFO: Pod "pod-subpath-test-secret-xzp6": Phase="Running", Reason="", readiness=true. Elapsed: 10.112469965s
Jan 21 01:19:15.765: INFO: Pod "pod-subpath-test-secret-xzp6": Phase="Running", Reason="", readiness=true. Elapsed: 12.118629705s
Jan 21 01:19:17.770: INFO: Pod "pod-subpath-test-secret-xzp6": Phase="Running", Reason="", readiness=true. Elapsed: 14.123338708s
Jan 21 01:19:19.779: INFO: Pod "pod-subpath-test-secret-xzp6": Phase="Running", Reason="", readiness=true. Elapsed: 16.132846657s
Jan 21 01:19:21.790: INFO: Pod "pod-subpath-test-secret-xzp6": Phase="Running", Reason="", readiness=true. Elapsed: 18.143160662s
Jan 21 01:19:23.800: INFO: Pod "pod-subpath-test-secret-xzp6": Phase="Running", Reason="", readiness=true. Elapsed: 20.153299562s
Jan 21 01:19:25.810: INFO: Pod "pod-subpath-test-secret-xzp6": Phase="Running", Reason="", readiness=true. Elapsed: 22.163891739s
Jan 21 01:19:27.821: INFO: Pod "pod-subpath-test-secret-xzp6": Phase="Running", Reason="", readiness=true. Elapsed: 24.17488964s
Jan 21 01:19:29.845: INFO: Pod "pod-subpath-test-secret-xzp6": Phase="Running", Reason="", readiness=true. Elapsed: 26.198935257s
Jan 21 01:19:31.860: INFO: Pod "pod-subpath-test-secret-xzp6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.213752946s
STEP: Saw pod success
Jan 21 01:19:31.861: INFO: Pod "pod-subpath-test-secret-xzp6" satisfied condition "success or failure"
Jan 21 01:19:31.868: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-xzp6 container test-container-subpath-secret-xzp6: 
STEP: delete the pod
Jan 21 01:19:31.954: INFO: Waiting for pod pod-subpath-test-secret-xzp6 to disappear
Jan 21 01:19:32.034: INFO: Pod pod-subpath-test-secret-xzp6 no longer exists
STEP: Deleting pod pod-subpath-test-secret-xzp6
Jan 21 01:19:32.034: INFO: Deleting pod "pod-subpath-test-secret-xzp6" in namespace "subpath-1500"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:19:32.037: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-1500" for this suite.

• [SLOW TEST:28.578 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":251,"skipped":4013,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:19:32.051: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service externalname-service with the type=ExternalName in namespace services-7709
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-7709
I0121 01:19:32.263764       8 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7709, replica count: 2
I0121 01:19:35.316777       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 01:19:38.318083       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 01:19:41.319225       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 01:19:44.320041       8 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 21 01:19:44.320: INFO: Creating new exec pod
Jan 21 01:19:53.350: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7709 execpodtkjn4 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Jan 21 01:19:53.939: INFO: stderr: "I0121 01:19:53.608352    4271 log.go:172] (0xc000a0a000) (0xc00062c6e0) Create stream\nI0121 01:19:53.608658    4271 log.go:172] (0xc000a0a000) (0xc00062c6e0) Stream added, broadcasting: 1\nI0121 01:19:53.613566    4271 log.go:172] (0xc000a0a000) Reply frame received for 1\nI0121 01:19:53.613694    4271 log.go:172] (0xc000a0a000) (0xc000439360) Create stream\nI0121 01:19:53.613714    4271 log.go:172] (0xc000a0a000) (0xc000439360) Stream added, broadcasting: 3\nI0121 01:19:53.619469    4271 log.go:172] (0xc000a0a000) Reply frame received for 3\nI0121 01:19:53.619632    4271 log.go:172] (0xc000a0a000) (0xc000904000) Create stream\nI0121 01:19:53.619674    4271 log.go:172] (0xc000a0a000) (0xc000904000) Stream added, broadcasting: 5\nI0121 01:19:53.622075    4271 log.go:172] (0xc000a0a000) Reply frame received for 5\nI0121 01:19:53.713005    4271 log.go:172] (0xc000a0a000) Data frame received for 5\nI0121 01:19:53.713113    4271 log.go:172] (0xc000904000) (5) Data frame handling\nI0121 01:19:53.713133    4271 log.go:172] (0xc000904000) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0121 01:19:53.722953    4271 log.go:172] (0xc000a0a000) Data frame received for 5\nI0121 01:19:53.723163    4271 log.go:172] (0xc000904000) (5) Data frame handling\nI0121 01:19:53.723249    4271 log.go:172] (0xc000904000) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0121 01:19:53.901064    4271 log.go:172] (0xc000a0a000) Data frame received for 1\nI0121 01:19:53.901384    4271 log.go:172] (0xc00062c6e0) (1) Data frame handling\nI0121 01:19:53.901487    4271 log.go:172] (0xc00062c6e0) (1) Data frame sent\nI0121 01:19:53.903379    4271 log.go:172] (0xc000a0a000) (0xc00062c6e0) Stream removed, broadcasting: 1\nI0121 01:19:53.905938    4271 log.go:172] (0xc000a0a000) (0xc000439360) Stream removed, broadcasting: 3\nI0121 01:19:53.906014    4271 log.go:172] (0xc000a0a000) (0xc000904000) Stream removed, broadcasting: 5\nI0121 01:19:53.906098    4271 log.go:172] (0xc000a0a000) (0xc00062c6e0) Stream removed, broadcasting: 1\nI0121 01:19:53.906112    4271 log.go:172] (0xc000a0a000) (0xc000439360) Stream removed, broadcasting: 3\nI0121 01:19:53.906134    4271 log.go:172] (0xc000a0a000) (0xc000904000) Stream removed, broadcasting: 5\n"
Jan 21 01:19:53.940: INFO: stdout: ""
Jan 21 01:19:53.945: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7709 execpodtkjn4 -- /bin/sh -x -c nc -zv -t -w 2 10.96.198.35 80'
Jan 21 01:19:54.437: INFO: stderr: "I0121 01:19:54.203823    4294 log.go:172] (0xc000ad8dc0) (0xc00099e3c0) Create stream\nI0121 01:19:54.204168    4294 log.go:172] (0xc000ad8dc0) (0xc00099e3c0) Stream added, broadcasting: 1\nI0121 01:19:54.227442    4294 log.go:172] (0xc000ad8dc0) Reply frame received for 1\nI0121 01:19:54.227671    4294 log.go:172] (0xc000ad8dc0) (0xc00068a640) Create stream\nI0121 01:19:54.227709    4294 log.go:172] (0xc000ad8dc0) (0xc00068a640) Stream added, broadcasting: 3\nI0121 01:19:54.232674    4294 log.go:172] (0xc000ad8dc0) Reply frame received for 3\nI0121 01:19:54.232823    4294 log.go:172] (0xc000ad8dc0) (0xc0005232c0) Create stream\nI0121 01:19:54.232861    4294 log.go:172] (0xc000ad8dc0) (0xc0005232c0) Stream added, broadcasting: 5\nI0121 01:19:54.235647    4294 log.go:172] (0xc000ad8dc0) Reply frame received for 5\nI0121 01:19:54.357016    4294 log.go:172] (0xc000ad8dc0) Data frame received for 5\nI0121 01:19:54.357137    4294 log.go:172] (0xc0005232c0) (5) Data frame handling\nI0121 01:19:54.357185    4294 log.go:172] (0xc0005232c0) (5) Data frame sent\nI0121 01:19:54.357202    4294 log.go:172] (0xc000ad8dc0) Data frame received for 5\nI0121 01:19:54.357214    4294 log.go:172] (0xc0005232c0) (5) Data frame handling\n+ nc -zv -t -w 2 10.96.198.35 80\nConnection to 10.96.198.35 80 port [tcp/http] succeeded!\nI0121 01:19:54.357267    4294 log.go:172] (0xc0005232c0) (5) Data frame sent\nI0121 01:19:54.424871    4294 log.go:172] (0xc000ad8dc0) (0xc00068a640) Stream removed, broadcasting: 3\nI0121 01:19:54.425036    4294 log.go:172] (0xc000ad8dc0) Data frame received for 1\nI0121 01:19:54.425045    4294 log.go:172] (0xc00099e3c0) (1) Data frame handling\nI0121 01:19:54.425056    4294 log.go:172] (0xc00099e3c0) (1) Data frame sent\nI0121 01:19:54.425064    4294 log.go:172] (0xc000ad8dc0) (0xc00099e3c0) Stream removed, broadcasting: 1\nI0121 01:19:54.425404    4294 log.go:172] (0xc000ad8dc0) (0xc0005232c0) Stream removed, broadcasting: 5\nI0121 01:19:54.425515    4294 log.go:172] (0xc000ad8dc0) Go away received\nI0121 01:19:54.426249    4294 log.go:172] (0xc000ad8dc0) (0xc00099e3c0) Stream removed, broadcasting: 1\nI0121 01:19:54.426369    4294 log.go:172] (0xc000ad8dc0) (0xc00068a640) Stream removed, broadcasting: 3\nI0121 01:19:54.426395    4294 log.go:172] (0xc000ad8dc0) (0xc0005232c0) Stream removed, broadcasting: 5\n"
Jan 21 01:19:54.437: INFO: stdout: ""
Jan 21 01:19:54.437: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:19:54.619: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-7709" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691

• [SLOW TEST:22.609 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":252,"skipped":4030,"failed":0}
SSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:19:54.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: getting the auto-created API token
STEP: reading a file in the container
Jan 21 01:20:03.570: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4906 pod-service-account-0a097a93-7a80-4449-a3e6-c5c17d829ba3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Jan 21 01:20:04.219: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4906 pod-service-account-0a097a93-7a80-4449-a3e6-c5c17d829ba3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Jan 21 01:20:04.654: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-4906 pod-service-account-0a097a93-7a80-4449-a3e6-c5c17d829ba3 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:20:05.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-4906" for this suite.

• [SLOW TEST:10.423 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods  [Conformance]","total":278,"completed":253,"skipped":4039,"failed":0}
SSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:20:05.086: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service nodeport-test with type=NodePort in namespace services-2266
STEP: creating replication controller nodeport-test in namespace services-2266
I0121 01:20:05.455952       8 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-2266, replica count: 2
I0121 01:20:08.507314       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 01:20:11.508163       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 01:20:14.509364       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 01:20:17.510316       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 01:20:20.511536       8 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Jan 21 01:20:20.512: INFO: Creating new exec pod
Jan 21 01:20:27.573: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2266 execpodbwpl9 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Jan 21 01:20:27.993: INFO: stderr: "I0121 01:20:27.784038    4371 log.go:172] (0xc0009a4e70) (0xc0008b4640) Create stream\nI0121 01:20:27.784340    4371 log.go:172] (0xc0009a4e70) (0xc0008b4640) Stream added, broadcasting: 1\nI0121 01:20:27.795007    4371 log.go:172] (0xc0009a4e70) Reply frame received for 1\nI0121 01:20:27.795297    4371 log.go:172] (0xc0009a4e70) (0xc0005f3cc0) Create stream\nI0121 01:20:27.795335    4371 log.go:172] (0xc0009a4e70) (0xc0005f3cc0) Stream added, broadcasting: 3\nI0121 01:20:27.797613    4371 log.go:172] (0xc0009a4e70) Reply frame received for 3\nI0121 01:20:27.797652    4371 log.go:172] (0xc0009a4e70) (0xc0005b68c0) Create stream\nI0121 01:20:27.797670    4371 log.go:172] (0xc0009a4e70) (0xc0005b68c0) Stream added, broadcasting: 5\nI0121 01:20:27.800712    4371 log.go:172] (0xc0009a4e70) Reply frame received for 5\nI0121 01:20:27.885968    4371 log.go:172] (0xc0009a4e70) Data frame received for 5\nI0121 01:20:27.886046    4371 log.go:172] (0xc0005b68c0) (5) Data frame handling\nI0121 01:20:27.886077    4371 log.go:172] (0xc0005b68c0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0121 01:20:27.909417    4371 log.go:172] (0xc0009a4e70) Data frame received for 5\nI0121 01:20:27.909544    4371 log.go:172] (0xc0005b68c0) (5) Data frame handling\nI0121 01:20:27.909646    4371 log.go:172] (0xc0005b68c0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0121 01:20:27.984827    4371 log.go:172] (0xc0009a4e70) (0xc0005b68c0) Stream removed, broadcasting: 5\nI0121 01:20:27.984948    4371 log.go:172] (0xc0009a4e70) Data frame received for 1\nI0121 01:20:27.984997    4371 log.go:172] (0xc0009a4e70) (0xc0005f3cc0) Stream removed, broadcasting: 3\nI0121 01:20:27.985035    4371 log.go:172] (0xc0008b4640) (1) Data frame handling\nI0121 01:20:27.985059    4371 log.go:172] (0xc0008b4640) (1) Data frame sent\nI0121 01:20:27.985078    4371 log.go:172] (0xc0009a4e70) (0xc0008b4640) Stream removed, broadcasting: 1\nI0121 01:20:27.985097    4371 log.go:172] (0xc0009a4e70) Go away received\nI0121 01:20:27.986303    4371 log.go:172] (0xc0009a4e70) (0xc0008b4640) Stream removed, broadcasting: 1\nI0121 01:20:27.986316    4371 log.go:172] (0xc0009a4e70) (0xc0005f3cc0) Stream removed, broadcasting: 3\nI0121 01:20:27.986321    4371 log.go:172] (0xc0009a4e70) (0xc0005b68c0) Stream removed, broadcasting: 5\n"
Jan 21 01:20:27.993: INFO: stdout: ""
Jan 21 01:20:27.995: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2266 execpodbwpl9 -- /bin/sh -x -c nc -zv -t -w 2 10.96.249.77 80'
Jan 21 01:20:28.276: INFO: stderr: "I0121 01:20:28.134534    4391 log.go:172] (0xc000a180b0) (0xc000661f40) Create stream\nI0121 01:20:28.134912    4391 log.go:172] (0xc000a180b0) (0xc000661f40) Stream added, broadcasting: 1\nI0121 01:20:28.140256    4391 log.go:172] (0xc000a180b0) Reply frame received for 1\nI0121 01:20:28.140308    4391 log.go:172] (0xc000a180b0) (0xc0006048c0) Create stream\nI0121 01:20:28.140319    4391 log.go:172] (0xc000a180b0) (0xc0006048c0) Stream added, broadcasting: 3\nI0121 01:20:28.141691    4391 log.go:172] (0xc000a180b0) Reply frame received for 3\nI0121 01:20:28.141714    4391 log.go:172] (0xc000a180b0) (0xc000425540) Create stream\nI0121 01:20:28.141724    4391 log.go:172] (0xc000a180b0) (0xc000425540) Stream added, broadcasting: 5\nI0121 01:20:28.142803    4391 log.go:172] (0xc000a180b0) Reply frame received for 5\nI0121 01:20:28.206478    4391 log.go:172] (0xc000a180b0) Data frame received for 5\nI0121 01:20:28.206539    4391 log.go:172] (0xc000425540) (5) Data frame handling\nI0121 01:20:28.206561    4391 log.go:172] (0xc000425540) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.249.77 80\nI0121 01:20:28.211393    4391 log.go:172] (0xc000a180b0) Data frame received for 5\nI0121 01:20:28.211413    4391 log.go:172] (0xc000425540) (5) Data frame handling\nI0121 01:20:28.211420    4391 log.go:172] (0xc000425540) (5) Data frame sent\nConnection to 10.96.249.77 80 port [tcp/http] succeeded!\nI0121 01:20:28.270369    4391 log.go:172] (0xc000a180b0) (0xc000425540) Stream removed, broadcasting: 5\nI0121 01:20:28.270479    4391 log.go:172] (0xc000a180b0) Data frame received for 1\nI0121 01:20:28.270498    4391 log.go:172] (0xc000a180b0) (0xc0006048c0) Stream removed, broadcasting: 3\nI0121 01:20:28.270516    4391 log.go:172] (0xc000661f40) (1) Data frame handling\nI0121 01:20:28.270522    4391 log.go:172] (0xc000661f40) (1) Data frame sent\nI0121 01:20:28.270527    4391 log.go:172] (0xc000a180b0) (0xc000661f40) Stream removed, broadcasting: 1\nI0121 01:20:28.270536    4391 log.go:172] (0xc000a180b0) Go away received\nI0121 01:20:28.271299    4391 log.go:172] (0xc000a180b0) (0xc000661f40) Stream removed, broadcasting: 1\nI0121 01:20:28.271315    4391 log.go:172] (0xc000a180b0) (0xc0006048c0) Stream removed, broadcasting: 3\nI0121 01:20:28.271326    4391 log.go:172] (0xc000a180b0) (0xc000425540) Stream removed, broadcasting: 5\n"
Jan 21 01:20:28.276: INFO: stdout: ""
Jan 21 01:20:28.276: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2266 execpodbwpl9 -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.250 32340'
Jan 21 01:20:28.722: INFO: stderr: "I0121 01:20:28.459707    4409 log.go:172] (0xc000a22d10) (0xc0009ec140) Create stream\nI0121 01:20:28.460145    4409 log.go:172] (0xc000a22d10) (0xc0009ec140) Stream added, broadcasting: 1\nI0121 01:20:28.465219    4409 log.go:172] (0xc000a22d10) Reply frame received for 1\nI0121 01:20:28.465330    4409 log.go:172] (0xc000a22d10) (0xc0009d2000) Create stream\nI0121 01:20:28.465355    4409 log.go:172] (0xc000a22d10) (0xc0009d2000) Stream added, broadcasting: 3\nI0121 01:20:28.468621    4409 log.go:172] (0xc000a22d10) Reply frame received for 3\nI0121 01:20:28.468685    4409 log.go:172] (0xc000a22d10) (0xc0009d20a0) Create stream\nI0121 01:20:28.468698    4409 log.go:172] (0xc000a22d10) (0xc0009d20a0) Stream added, broadcasting: 5\nI0121 01:20:28.470884    4409 log.go:172] (0xc000a22d10) Reply frame received for 5\nI0121 01:20:28.588624    4409 log.go:172] (0xc000a22d10) Data frame received for 5\nI0121 01:20:28.588793    4409 log.go:172] (0xc0009d20a0) (5) Data frame handling\nI0121 01:20:28.588823    4409 log.go:172] (0xc0009d20a0) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.2.250 32340\nI0121 01:20:28.590521    4409 log.go:172] (0xc000a22d10) Data frame received for 5\nI0121 01:20:28.590944    4409 log.go:172] (0xc0009d20a0) (5) Data frame handling\nI0121 01:20:28.590986    4409 log.go:172] (0xc0009d20a0) (5) Data frame sent\nConnection to 10.96.2.250 32340 port [tcp/32340] succeeded!\nI0121 01:20:28.690121    4409 log.go:172] (0xc000a22d10) Data frame received for 1\nI0121 01:20:28.690604    4409 log.go:172] (0xc000a22d10) (0xc0009d2000) Stream removed, broadcasting: 3\nI0121 01:20:28.690782    4409 log.go:172] (0xc0009ec140) (1) Data frame handling\nI0121 01:20:28.690914    4409 log.go:172] (0xc000a22d10) (0xc0009d20a0) Stream removed, broadcasting: 5\nI0121 01:20:28.691025    4409 log.go:172] (0xc0009ec140) (1) Data frame sent\nI0121 01:20:28.691086    4409 log.go:172] (0xc000a22d10) (0xc0009ec140) Stream removed, broadcasting: 1\nI0121 01:20:28.691139    4409 log.go:172] (0xc000a22d10) Go away received\nI0121 01:20:28.694335    4409 log.go:172] (0xc000a22d10) (0xc0009ec140) Stream removed, broadcasting: 1\nI0121 01:20:28.694354    4409 log.go:172] (0xc000a22d10) (0xc0009d2000) Stream removed, broadcasting: 3\nI0121 01:20:28.694361    4409 log.go:172] (0xc000a22d10) (0xc0009d20a0) Stream removed, broadcasting: 5\n"
Jan 21 01:20:28.723: INFO: stdout: ""
Jan 21 01:20:28.724: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2266 execpodbwpl9 -- /bin/sh -x -c nc -zv -t -w 2 10.96.1.234 32340'
Jan 21 01:20:29.085: INFO: stderr: "I0121 01:20:28.932594    4430 log.go:172] (0xc000c3abb0) (0xc000c1a280) Create stream\nI0121 01:20:28.932792    4430 log.go:172] (0xc000c3abb0) (0xc000c1a280) Stream added, broadcasting: 1\nI0121 01:20:28.936692    4430 log.go:172] (0xc000c3abb0) Reply frame received for 1\nI0121 01:20:28.936715    4430 log.go:172] (0xc000c3abb0) (0xc000645e00) Create stream\nI0121 01:20:28.936721    4430 log.go:172] (0xc000c3abb0) (0xc000645e00) Stream added, broadcasting: 3\nI0121 01:20:28.937623    4430 log.go:172] (0xc000c3abb0) Reply frame received for 3\nI0121 01:20:28.937644    4430 log.go:172] (0xc000c3abb0) (0xc000b48140) Create stream\nI0121 01:20:28.937653    4430 log.go:172] (0xc000c3abb0) (0xc000b48140) Stream added, broadcasting: 5\nI0121 01:20:28.938925    4430 log.go:172] (0xc000c3abb0) Reply frame received for 5\nI0121 01:20:28.991076    4430 log.go:172] (0xc000c3abb0) Data frame received for 5\nI0121 01:20:28.991128    4430 log.go:172] (0xc000b48140) (5) Data frame handling\nI0121 01:20:28.991154    4430 log.go:172] (0xc000b48140) (5) Data frame sent\n+ nc -zv -t -w 2 10.96.1.234 32340\nI0121 01:20:28.996047    4430 log.go:172] (0xc000c3abb0) Data frame received for 5\nI0121 01:20:28.996076    4430 log.go:172] (0xc000b48140) (5) Data frame handling\nI0121 01:20:28.996091    4430 log.go:172] (0xc000b48140) (5) Data frame sent\nConnection to 10.96.1.234 32340 port [tcp/32340] succeeded!\nI0121 01:20:29.075350    4430 log.go:172] (0xc000c3abb0) Data frame received for 1\nI0121 01:20:29.075640    4430 log.go:172] (0xc000c3abb0) (0xc000645e00) Stream removed, broadcasting: 3\nI0121 01:20:29.075777    4430 log.go:172] (0xc000c1a280) (1) Data frame handling\nI0121 01:20:29.075808    4430 log.go:172] (0xc000c1a280) (1) Data frame sent\nI0121 01:20:29.075815    4430 log.go:172] (0xc000c3abb0) (0xc000c1a280) Stream removed, broadcasting: 1\nI0121 01:20:29.077060    4430 log.go:172] (0xc000c3abb0) (0xc000b48140) Stream removed, broadcasting: 5\nI0121 01:20:29.077141    4430 log.go:172] (0xc000c3abb0) Go away received\nI0121 01:20:29.077173    4430 log.go:172] (0xc000c3abb0) (0xc000c1a280) Stream removed, broadcasting: 1\nI0121 01:20:29.077194    4430 log.go:172] (0xc000c3abb0) (0xc000645e00) Stream removed, broadcasting: 3\nI0121 01:20:29.077202    4430 log.go:172] (0xc000c3abb0) (0xc000b48140) Stream removed, broadcasting: 5\n"
Jan 21 01:20:29.085: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:20:29.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2266" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691

• [SLOW TEST:24.017 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":254,"skipped":4052,"failed":0}
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:20:29.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 21 01:20:45.279: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 21 01:20:45.290: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 21 01:20:47.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 21 01:20:47.298: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 21 01:20:49.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 21 01:20:49.300: INFO: Pod pod-with-poststart-exec-hook still exists
Jan 21 01:20:51.291: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Jan 21 01:20:51.307: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:20:51.307: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7591" for this suite.

• [SLOW TEST:22.218 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":4071,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:20:51.324: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:73
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 01:20:51.440: INFO: Pod name rollover-pod: Found 0 pods out of 1
Jan 21 01:20:56.476: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Jan 21 01:21:00.561: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Jan 21 01:21:02.569: INFO: Creating deployment "test-rollover-deployment"
Jan 21 01:21:02.636: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Jan 21 01:21:04.658: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Jan 21 01:21:04.672: INFO: Ensure that both replica sets have 1 created replica
Jan 21 01:21:04.680: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Jan 21 01:21:04.697: INFO: Updating deployment test-rollover-deployment
Jan 21 01:21:04.698: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Jan 21 01:21:06.711: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Jan 21 01:21:06.724: INFO: Make sure deployment "test-rollover-deployment" is complete
Jan 21 01:21:06.733: INFO: all replica sets need to contain the pod-template-hash label
Jan 21 01:21:06.733: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166465, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:21:08.753: INFO: all replica sets need to contain the pod-template-hash label
Jan 21 01:21:08.754: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166465, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:21:10.748: INFO: all replica sets need to contain the pod-template-hash label
Jan 21 01:21:10.748: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166465, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:21:12.757: INFO: all replica sets need to contain the pod-template-hash label
Jan 21 01:21:12.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166472, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:21:14.742: INFO: all replica sets need to contain the pod-template-hash label
Jan 21 01:21:14.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166472, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:21:16.752: INFO: all replica sets need to contain the pod-template-hash label
Jan 21 01:21:16.753: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166472, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:21:18.743: INFO: all replica sets need to contain the pod-template-hash label
Jan 21 01:21:18.743: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166472, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:21:20.745: INFO: all replica sets need to contain the pod-template-hash label
Jan 21 01:21:20.745: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166472, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166462, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:21:22.767: INFO: 
Jan 21 01:21:22.768: INFO: Ensure that both old replica sets have no replicas
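Before the AfterEach object dumps below, it may help to see the shape of the Deployment this rollover exercises. This is a minimal reconstruction from those dumps; replica count, strategy, minReadySeconds, labels, and the rolled-to image are taken from the log, and everything omitted is elided:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
  namespace: deployment-629
spec:
  replicas: 1
  minReadySeconds: 10       # matches MinReadySeconds:10 in the dump below
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0     # never drop below one available pod during the rollover
      maxSurge: 1
  selector:
    matchLabels:
      name: rollover-pod
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # the image rolled to, per the new ReplicaSet dump

With maxUnavailable 0 and minReadySeconds 10, each status poll above shows one old and one new replica until the new pod has been ready for 10 seconds, after which both old replica sets are scaled to zero.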
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:67
Jan 21 01:21:22.776: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-629 /apis/apps/v1/namespaces/deployment-629/deployments/test-rollover-deployment a73e52aa-534e-4341-954f-8cd5acb3c96d 3310616 2 2020-01-21 01:21:02 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005f82148  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-01-21 01:21:02 +0000 UTC,LastTransitionTime:2020-01-21 01:21:02 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-01-21 01:21:22 +0000 UTC,LastTransitionTime:2020-01-21 01:21:02 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Jan 21 01:21:22.779: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff  deployment-629 /apis/apps/v1/namespaces/deployment-629/replicasets/test-rollover-deployment-574d6dfbff 9fbb6cb9-18fa-482d-8b4a-0d6e485b5aad 3310604 2 2020-01-21 01:21:04 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment a73e52aa-534e-4341-954f-8cd5acb3c96d 0xc005f825d7 0xc005f825d8}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] []  []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005f82648  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Jan 21 01:21:22.779: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Jan 21 01:21:22.779: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-629 /apis/apps/v1/namespaces/deployment-629/replicasets/test-rollover-controller 8c88a410-4333-4491-ac74-b689cdb1dbfb 3310613 2 2020-01-21 01:20:51 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment a73e52aa-534e-4341-954f-8cd5acb3c96d 0xc005f82507 0xc005f82508}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005f82568  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 21 01:21:22.779: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-629 /apis/apps/v1/namespaces/deployment-629/replicasets/test-rollover-deployment-f6c94f66c 8a1ddf24-4aa1-4731-8ab2-e76b26a53461 3310551 2 2020-01-21 01:21:02 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment a73e52aa-534e-4341-954f-8cd5acb3c96d 0xc005f826b0 0xc005f826b1}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005f82728  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Jan 21 01:21:22.787: INFO: Pod "test-rollover-deployment-574d6dfbff-bnf4l" is available:
&Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-bnf4l test-rollover-deployment-574d6dfbff- deployment-629 /api/v1/namespaces/deployment-629/pods/test-rollover-deployment-574d6dfbff-bnf4l 5c7c830d-d4a5-49d4-944b-9872ebec659e 3310574 0 2020-01-21 01:21:04 +0000 UTC   map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 9fbb6cb9-18fa-482d-8b4a-0d6e485b5aad 0xc005f82c57 0xc005f82c58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f7tcq,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f7tcq,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f7tcq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:21:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:21:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:21:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-01-21 01:21:04 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.250,PodIP:10.44.0.2,StartTime:2020-01-21 01:21:04 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-01-21 01:21:10 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:docker://db3c1ef4e789bbf993c38900ffc746759f624aff37e8e7648bea6d87bab3e403,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:21:22.787: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-629" for this suite.

• [SLOW TEST:31.476 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":256,"skipped":4086,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
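
Note on the rollover test above: the Deployment dump encodes the strategy under test, RollingUpdate with MaxUnavailable:0, MaxSurge:1 and MinReadySeconds:10. As a minimal sketch of declaring the same strategy by hand (only the name, labels, and image are taken from the log; the namespace flag is omitted):

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-rollover-deployment
spec:
  replicas: 1
  minReadySeconds: 10            # matches MinReadySeconds:10 in the dump
  selector:
    matchLabels:
      name: rollover-pod
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0          # never drop below the desired replica count
      maxSurge: 1                # roll by adding one extra pod at a time
  template:
    metadata:
      labels:
        name: rollover-pod
    spec:
      containers:
      - name: agnhost
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
EOF
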
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:21:22.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 21 01:21:23.025: INFO: Waiting up to 5m0s for pod "downwardapi-volume-987ee996-4686-495e-9c96-ed0fb22f14be" in namespace "projected-6012" to be "success or failure"
Jan 21 01:21:23.134: INFO: Pod "downwardapi-volume-987ee996-4686-495e-9c96-ed0fb22f14be": Phase="Pending", Reason="", readiness=false. Elapsed: 108.551259ms
Jan 21 01:21:25.139: INFO: Pod "downwardapi-volume-987ee996-4686-495e-9c96-ed0fb22f14be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.113958557s
Jan 21 01:21:27.199: INFO: Pod "downwardapi-volume-987ee996-4686-495e-9c96-ed0fb22f14be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.174240481s
Jan 21 01:21:29.207: INFO: Pod "downwardapi-volume-987ee996-4686-495e-9c96-ed0fb22f14be": Phase="Pending", Reason="", readiness=false. Elapsed: 6.18215608s
Jan 21 01:21:31.239: INFO: Pod "downwardapi-volume-987ee996-4686-495e-9c96-ed0fb22f14be": Phase="Pending", Reason="", readiness=false. Elapsed: 8.214439826s
Jan 21 01:21:33.270: INFO: Pod "downwardapi-volume-987ee996-4686-495e-9c96-ed0fb22f14be": Phase="Pending", Reason="", readiness=false. Elapsed: 10.244890854s
Jan 21 01:21:35.278: INFO: Pod "downwardapi-volume-987ee996-4686-495e-9c96-ed0fb22f14be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.253379782s
STEP: Saw pod success
Jan 21 01:21:35.279: INFO: Pod "downwardapi-volume-987ee996-4686-495e-9c96-ed0fb22f14be" satisfied condition "success or failure"
Jan 21 01:21:35.300: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-987ee996-4686-495e-9c96-ed0fb22f14be container client-container: 
STEP: delete the pod
Jan 21 01:21:35.346: INFO: Waiting for pod downwardapi-volume-987ee996-4686-495e-9c96-ed0fb22f14be to disappear
Jan 21 01:21:35.352: INFO: Pod downwardapi-volume-987ee996-4686-495e-9c96-ed0fb22f14be no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:21:35.352: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6012" for this suite.

• [SLOW TEST:12.563 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":257,"skipped":4135,"failed":0}
SS
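
Note on the projected downwardAPI test above: the pod it creates reads its own CPU request back from a file in a projected downwardAPI volume. A self-contained sketch (the pod name, mount path, divisor, and the 250m request are illustrative, not taken from the run):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m                # the value the volume file should reflect
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m        # report in millicores, so the file reads 250
EOF
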
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:21:35.367: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating the pod
Jan 21 01:21:35.476: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:21:46.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-166" for this suite.

• [SLOW TEST:11.338 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":258,"skipped":4137,"failed":0}
SSSSSSS
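
Note on the init-container test above: the pod it submits is an ordinary pod with restartPolicy: Never plus init containers that must all exit successfully, in order, before the app container starts. A minimal sketch (names, images, and commands are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo
spec:
  restartPolicy: Never
  initContainers:                # run sequentially, each to completion
  - name: init1
    image: busybox:1.29
    command: ["true"]
  - name: init2
    image: busybox:1.29
    command: ["true"]
  containers:                    # starts only after both init containers succeed
  - name: run1
    image: busybox:1.29
    command: ["sh", "-c", "echo main container ran"]
EOF
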
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:21:46.705: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward api env vars
Jan 21 01:21:46.807: INFO: Waiting up to 5m0s for pod "downward-api-8e1b32c9-80f6-41e1-a8d7-2d6a7e4a5dc7" in namespace "downward-api-3058" to be "success or failure"
Jan 21 01:21:46.814: INFO: Pod "downward-api-8e1b32c9-80f6-41e1-a8d7-2d6a7e4a5dc7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.850334ms
Jan 21 01:21:48.824: INFO: Pod "downward-api-8e1b32c9-80f6-41e1-a8d7-2d6a7e4a5dc7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016774902s
Jan 21 01:21:50.834: INFO: Pod "downward-api-8e1b32c9-80f6-41e1-a8d7-2d6a7e4a5dc7": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026686802s
Jan 21 01:21:52.848: INFO: Pod "downward-api-8e1b32c9-80f6-41e1-a8d7-2d6a7e4a5dc7": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041513246s
Jan 21 01:21:54.858: INFO: Pod "downward-api-8e1b32c9-80f6-41e1-a8d7-2d6a7e4a5dc7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051067971s
STEP: Saw pod success
Jan 21 01:21:54.858: INFO: Pod "downward-api-8e1b32c9-80f6-41e1-a8d7-2d6a7e4a5dc7" satisfied condition "success or failure"
Jan 21 01:21:54.862: INFO: Trying to get logs from node jerma-node pod downward-api-8e1b32c9-80f6-41e1-a8d7-2d6a7e4a5dc7 container dapi-container: 
STEP: delete the pod
Jan 21 01:21:54.932: INFO: Waiting for pod downward-api-8e1b32c9-80f6-41e1-a8d7-2d6a7e4a5dc7 to disappear
Jan 21 01:21:54.939: INFO: Pod downward-api-8e1b32c9-80f6-41e1-a8d7-2d6a7e4a5dc7 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:21:54.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3058" for this suite.

• [SLOW TEST:8.254 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4144,"failed":0}
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:21:54.960: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1789
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 21 01:21:55.219: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-6044'
Jan 21 01:21:55.388: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 21 01:21:55.388: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1794
Jan 21 01:21:55.394: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-6044'
Jan 21 01:21:55.604: INFO: stderr: ""
Jan 21 01:21:55.604: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:21:55.604: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6044" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure  [Conformance]","total":278,"completed":260,"skipped":4144,"failed":0}
S
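
Note on the kubectl run job test above: the stderr warning is the point of interest; --generator=job/v1 still worked on this kubectl (v1.17 era) but was already deprecated. The command the test ran, alongside the replacement the warning points at (same name and image, namespace flag omitted):

# As executed by the test (deprecated generator):
kubectl run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 \
  --image=docker.io/library/httpd:2.4.38-alpine
# Equivalent on newer clients, where `kubectl run` no longer creates Jobs:
kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine
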
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:21:55.904: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: set up a multi-version CRD
Jan 21 01:21:56.145: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:22:15.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-3079" for this suite.

• [SLOW TEST:19.301 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":261,"skipped":4145,"failed":0}
SSSSSSSSSSSSSSSSSSSSS
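
Note on the CRD OpenAPI test above: marking a version not served flips served: false on one CRD version, after which that version's definitions drop out of the published OpenAPI spec while the still-served version's remain. A sketch of such a two-version CRD (group, kind, and schemas are illustrative; the test generates its own):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true                 # still published in the OpenAPI spec
    storage: true
    schema:
      openAPIV3Schema:
        type: object
  - name: v2
    served: false                # unserved: removed from the OpenAPI spec
    storage: false
    schema:
      openAPIV3Schema:
        type: object
EOF
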
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:22:15.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating a pod to test downward API volume plugin
Jan 21 01:22:15.381: INFO: Waiting up to 5m0s for pod "downwardapi-volume-bed8c8df-8238-47f6-b786-448cbf0558a6" in namespace "downward-api-7194" to be "success or failure"
Jan 21 01:22:15.418: INFO: Pod "downwardapi-volume-bed8c8df-8238-47f6-b786-448cbf0558a6": Phase="Pending", Reason="", readiness=false. Elapsed: 37.039294ms
Jan 21 01:22:17.426: INFO: Pod "downwardapi-volume-bed8c8df-8238-47f6-b786-448cbf0558a6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045461004s
Jan 21 01:22:19.433: INFO: Pod "downwardapi-volume-bed8c8df-8238-47f6-b786-448cbf0558a6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052123223s
Jan 21 01:22:21.440: INFO: Pod "downwardapi-volume-bed8c8df-8238-47f6-b786-448cbf0558a6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058983106s
Jan 21 01:22:23.445: INFO: Pod "downwardapi-volume-bed8c8df-8238-47f6-b786-448cbf0558a6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.064377664s
STEP: Saw pod success
Jan 21 01:22:23.445: INFO: Pod "downwardapi-volume-bed8c8df-8238-47f6-b786-448cbf0558a6" satisfied condition "success or failure"
Jan 21 01:22:23.449: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-bed8c8df-8238-47f6-b786-448cbf0558a6 container client-container: 
STEP: delete the pod
Jan 21 01:22:23.529: INFO: Waiting for pod downwardapi-volume-bed8c8df-8238-47f6-b786-448cbf0558a6 to disappear
Jan 21 01:22:23.541: INFO: Pod downwardapi-volume-bed8c8df-8238-47f6-b786-448cbf0558a6 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:22:23.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7194" for this suite.

• [SLOW TEST:8.362 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4166,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
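
Note on the DefaultMode test above: defaultMode is the file mode applied to every file materialized in the downwardAPI volume. A sketch (pod name, mount path, and the 0400 mode are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-defaultmode
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -l /etc/podinfo"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400          # each file in the volume becomes -r--------
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
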
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:22:23.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:22:31.881: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-8124" for this suite.

• [SLOW TEST:8.323 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4221,"failed":0}
SSSSSSS
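
Note on the kubelet logging test above: what it asserts is simply that a command's stdout lands in the container log. A hand-run sketch (pod name and message are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "echo 'hello from busybox'"]
EOF
# Once the pod has run, the echoed line should come back via:
kubectl logs busybox-logs-demo
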
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:22:31.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Jan 21 01:22:48.084: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 21 01:22:48.090: INFO: Pod pod-with-poststart-http-hook still exists
Jan 21 01:22:50.091: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 21 01:22:50.100: INFO: Pod pod-with-poststart-http-hook still exists
Jan 21 01:22:52.091: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 21 01:22:52.101: INFO: Pod pod-with-poststart-http-hook still exists
Jan 21 01:22:54.091: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 21 01:22:54.099: INFO: Pod pod-with-poststart-http-hook still exists
Jan 21 01:22:56.091: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 21 01:22:56.099: INFO: Pod pod-with-poststart-http-hook still exists
Jan 21 01:22:58.091: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 21 01:22:58.101: INFO: Pod pod-with-poststart-http-hook still exists
Jan 21 01:23:00.092: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 21 01:23:00.135: INFO: Pod pod-with-poststart-http-hook still exists
Jan 21 01:23:02.091: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 21 01:23:02.145: INFO: Pod pod-with-poststart-http-hook still exists
Jan 21 01:23:04.091: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Jan 21 01:23:04.096: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:23:04.096: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-751" for this suite.

• [SLOW TEST:32.208 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":264,"skipped":4228,"failed":0}
SSSSSSSSSSSSSSSSSS
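
Note on the lifecycle-hook test above: the hook is lifecycle.postStart.httpGet, which the kubelet fires right after the container starts; the e2e test aims it at the separate handler pod it created earlier. A self-contained sketch instead targets the pod's own HTTP server (image, port, and path are illustrative assumptions; a slow-starting server could make the hook race and fail, which is why the real test uses a pre-started handler):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook
spec:
  containers:
  - name: main
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["netexec", "--http-port=8080"]
    lifecycle:
      postStart:
        httpGet:
          path: /hostname        # any endpoint returning 200 will do
          port: 8080             # host defaults to the pod's own IP
EOF
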
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container 
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:23:04.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:23:12.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-244" for this suite.

• [SLOW TEST:8.239 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4246,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
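
Note on the read-only-root test above: the guarantee comes from securityContext.readOnlyRootFilesystem on the container. A sketch that shows the expected write failure (name and command are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-fs
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "echo test > /file || echo 'write refused, as expected'"]
    securityContext:
      readOnlyRootFilesystem: true
EOF
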
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:23:12.346: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1898
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 21 01:23:12.532: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1485'
Jan 21 01:23:12.711: INFO: stderr: ""
Jan 21 01:23:12.711: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Jan 21 01:23:22.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1485 -o json'
Jan 21 01:23:22.947: INFO: stderr: ""
Jan 21 01:23:22.948: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2020-01-21T01:23:12Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-1485\",\n        \"resourceVersion\": \"3311169\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-1485/pods/e2e-test-httpd-pod\",\n        \"uid\": \"6e769c36-f4c6-4862-a5b8-3809e9d67f15\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-s887n\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-s887n\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-s887n\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-21T01:23:12Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-21T01:23:19Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-21T01:23:19Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2020-01-21T01:23:12Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://2d0170b77284eeaec181c264416162b4ff0da07bdc44ac8658cb9d98f5f03b92\",\n                \"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2020-01-21T01:23:18Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.250\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.1\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2020-01-21T01:23:12Z\"\n    }\n}\n"
STEP: replace the image in the pod
Jan 21 01:23:22.948: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1485'
Jan 21 01:23:23.450: INFO: stderr: ""
Jan 21 01:23:23.451: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1903
Jan 21 01:23:23.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1485'
Jan 21 01:23:29.400: INFO: stderr: ""
Jan 21 01:23:29.401: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:23:29.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1485" for this suite.

• [SLOW TEST:17.072 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1894
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image  [Conformance]","total":278,"completed":266,"skipped":4280,"failed":0}
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:23:29.418: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod pod-subpath-test-downwardapi-6px2
STEP: Creating a pod to test atomic-volume-subpath
Jan 21 01:23:29.556: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-6px2" in namespace "subpath-2422" to be "success or failure"
Jan 21 01:23:29.563: INFO: Pod "pod-subpath-test-downwardapi-6px2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.411394ms
Jan 21 01:23:31.578: INFO: Pod "pod-subpath-test-downwardapi-6px2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021803803s
Jan 21 01:23:33.588: INFO: Pod "pod-subpath-test-downwardapi-6px2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031460371s
Jan 21 01:23:35.598: INFO: Pod "pod-subpath-test-downwardapi-6px2": Phase="Running", Reason="", readiness=true. Elapsed: 6.041057312s
Jan 21 01:23:37.709: INFO: Pod "pod-subpath-test-downwardapi-6px2": Phase="Running", Reason="", readiness=true. Elapsed: 8.152472825s
Jan 21 01:23:39.720: INFO: Pod "pod-subpath-test-downwardapi-6px2": Phase="Running", Reason="", readiness=true. Elapsed: 10.163030528s
Jan 21 01:23:41.727: INFO: Pod "pod-subpath-test-downwardapi-6px2": Phase="Running", Reason="", readiness=true. Elapsed: 12.17051512s
Jan 21 01:23:43.737: INFO: Pod "pod-subpath-test-downwardapi-6px2": Phase="Running", Reason="", readiness=true. Elapsed: 14.180174014s
Jan 21 01:23:45.746: INFO: Pod "pod-subpath-test-downwardapi-6px2": Phase="Running", Reason="", readiness=true. Elapsed: 16.189551875s
Jan 21 01:23:47.754: INFO: Pod "pod-subpath-test-downwardapi-6px2": Phase="Running", Reason="", readiness=true. Elapsed: 18.197442138s
Jan 21 01:23:49.766: INFO: Pod "pod-subpath-test-downwardapi-6px2": Phase="Running", Reason="", readiness=true. Elapsed: 20.209236281s
Jan 21 01:23:51.817: INFO: Pod "pod-subpath-test-downwardapi-6px2": Phase="Running", Reason="", readiness=true. Elapsed: 22.25987135s
Jan 21 01:23:53.830: INFO: Pod "pod-subpath-test-downwardapi-6px2": Phase="Running", Reason="", readiness=true. Elapsed: 24.273629386s
Jan 21 01:23:55.838: INFO: Pod "pod-subpath-test-downwardapi-6px2": Phase="Running", Reason="", readiness=true. Elapsed: 26.281707281s
Jan 21 01:23:57.855: INFO: Pod "pod-subpath-test-downwardapi-6px2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.298519578s
STEP: Saw pod success
Jan 21 01:23:57.855: INFO: Pod "pod-subpath-test-downwardapi-6px2" satisfied condition "success or failure"
Jan 21 01:23:57.861: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-6px2 container test-container-subpath-downwardapi-6px2: 
STEP: delete the pod
Jan 21 01:23:57.963: INFO: Waiting for pod pod-subpath-test-downwardapi-6px2 to disappear
Jan 21 01:23:57.972: INFO: Pod pod-subpath-test-downwardapi-6px2 no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-6px2
Jan 21 01:23:57.972: INFO: Deleting pod "pod-subpath-test-downwardapi-6px2" in namespace "subpath-2422"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:23:57.977: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-2422" for this suite.

• [SLOW TEST:28.643 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":267,"skipped":4280,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
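
Note on the subpath test above: the subPath mount projects a single file out of a downwardAPI volume rather than mounting the whole directory. A minimal sketch (names and paths are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-subpath-downwardapi-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /test/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /test/podname
      subPath: podname           # mount just this one file from the volume
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
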
------------------------------
[sig-storage] Secrets 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:23:58.063: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating secret with name secret-test-4cd99569-ec4f-40d3-9d07-b3d3e05ac2d8
STEP: Creating a pod to test consume secrets
Jan 21 01:23:58.207: INFO: Waiting up to 5m0s for pod "pod-secrets-0be13633-ca31-42a1-9c85-0fce2f97d9c6" in namespace "secrets-1446" to be "success or failure"
Jan 21 01:23:58.219: INFO: Pod "pod-secrets-0be13633-ca31-42a1-9c85-0fce2f97d9c6": Phase="Pending", Reason="", readiness=false. Elapsed: 11.714728ms
Jan 21 01:24:00.265: INFO: Pod "pod-secrets-0be13633-ca31-42a1-9c85-0fce2f97d9c6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.058075444s
Jan 21 01:24:02.273: INFO: Pod "pod-secrets-0be13633-ca31-42a1-9c85-0fce2f97d9c6": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065518655s
Jan 21 01:24:04.284: INFO: Pod "pod-secrets-0be13633-ca31-42a1-9c85-0fce2f97d9c6": Phase="Pending", Reason="", readiness=false. Elapsed: 6.077066923s
Jan 21 01:24:06.294: INFO: Pod "pod-secrets-0be13633-ca31-42a1-9c85-0fce2f97d9c6": Phase="Pending", Reason="", readiness=false. Elapsed: 8.087286904s
Jan 21 01:24:08.305: INFO: Pod "pod-secrets-0be13633-ca31-42a1-9c85-0fce2f97d9c6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.098029095s
STEP: Saw pod success
Jan 21 01:24:08.305: INFO: Pod "pod-secrets-0be13633-ca31-42a1-9c85-0fce2f97d9c6" satisfied condition "success or failure"
Jan 21 01:24:08.313: INFO: Trying to get logs from node jerma-node pod pod-secrets-0be13633-ca31-42a1-9c85-0fce2f97d9c6 container secret-volume-test: 
STEP: delete the pod
Jan 21 01:24:08.515: INFO: Waiting for pod pod-secrets-0be13633-ca31-42a1-9c85-0fce2f97d9c6 to disappear
Jan 21 01:24:08.524: INFO: Pod pod-secrets-0be13633-ca31-42a1-9c85-0fce2f97d9c6 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:24:08.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1446" for this suite.

• [SLOW TEST:10.591 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4324,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
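
Note on the secrets test above: "multiple volumes" means the same Secret mounted twice, as two distinct volumes, in one pod. A sketch (secret name, key, and mount paths are illustrative):

kubectl create secret generic multi-secret --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-multi-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/secret-volume-1/data-1 /etc/secret-volume-2/data-1"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
  volumes:
  - name: secret-volume-1
    secret:
      secretName: multi-secret
  - name: secret-volume-2
    secret:
      secretName: multi-secret
EOF
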
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:24:08.660: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6997
STEP: Creating an active service to test reachability when its FQDN is referred to as externalName for another service
STEP: creating service externalsvc in namespace services-6997
STEP: creating replication controller externalsvc in namespace services-6997
I0121 01:24:08.948755       8 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6997, replica count: 2
I0121 01:24:12.000694       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 01:24:15.002578       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 01:24:18.003451       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I0121 01:24:21.004009       8 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Jan 21 01:24:21.060: INFO: Creating new exec pod
Jan 21 01:24:29.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6997 execpodmzcm5 -- /bin/sh -x -c nslookup clusterip-service'
Jan 21 01:24:29.581: INFO: stderr: "I0121 01:24:29.416464    4571 log.go:172] (0xc0009108f0) (0xc00065df40) Create stream\nI0121 01:24:29.416718    4571 log.go:172] (0xc0009108f0) (0xc00065df40) Stream added, broadcasting: 1\nI0121 01:24:29.421469    4571 log.go:172] (0xc0009108f0) Reply frame received for 1\nI0121 01:24:29.421579    4571 log.go:172] (0xc0009108f0) (0xc000636820) Create stream\nI0121 01:24:29.421596    4571 log.go:172] (0xc0009108f0) (0xc000636820) Stream added, broadcasting: 3\nI0121 01:24:29.423967    4571 log.go:172] (0xc0009108f0) Reply frame received for 3\nI0121 01:24:29.424037    4571 log.go:172] (0xc0009108f0) (0xc0002f94a0) Create stream\nI0121 01:24:29.424059    4571 log.go:172] (0xc0009108f0) (0xc0002f94a0) Stream added, broadcasting: 5\nI0121 01:24:29.425392    4571 log.go:172] (0xc0009108f0) Reply frame received for 5\nI0121 01:24:29.485242    4571 log.go:172] (0xc0009108f0) Data frame received for 5\nI0121 01:24:29.485825    4571 log.go:172] (0xc0002f94a0) (5) Data frame handling\nI0121 01:24:29.485886    4571 log.go:172] (0xc0002f94a0) (5) Data frame sent\n+ nslookup clusterip-service\nI0121 01:24:29.504196    4571 log.go:172] (0xc0009108f0) Data frame received for 3\nI0121 01:24:29.504249    4571 log.go:172] (0xc000636820) (3) Data frame handling\nI0121 01:24:29.504275    4571 log.go:172] (0xc000636820) (3) Data frame sent\nI0121 01:24:29.505101    4571 log.go:172] (0xc0009108f0) Data frame received for 3\nI0121 01:24:29.505161    4571 log.go:172] (0xc000636820) (3) Data frame handling\nI0121 01:24:29.505180    4571 log.go:172] (0xc000636820) (3) Data frame sent\nI0121 01:24:29.572010    4571 log.go:172] (0xc0009108f0) (0xc0002f94a0) Stream removed, broadcasting: 5\nI0121 01:24:29.572120    4571 log.go:172] (0xc0009108f0) Data frame received for 1\nI0121 01:24:29.572154    4571 log.go:172] (0xc0009108f0) (0xc000636820) Stream removed, broadcasting: 3\nI0121 01:24:29.572204    4571 log.go:172] (0xc00065df40) (1) Data frame handling\nI0121 01:24:29.572221    4571 log.go:172] (0xc00065df40) (1) Data frame sent\nI0121 01:24:29.572238    4571 log.go:172] (0xc0009108f0) (0xc00065df40) Stream removed, broadcasting: 1\nI0121 01:24:29.572254    4571 log.go:172] (0xc0009108f0) Go away received\nI0121 01:24:29.572939    4571 log.go:172] (0xc0009108f0) (0xc00065df40) Stream removed, broadcasting: 1\nI0121 01:24:29.572950    4571 log.go:172] (0xc0009108f0) (0xc000636820) Stream removed, broadcasting: 3\nI0121 01:24:29.572956    4571 log.go:172] (0xc0009108f0) (0xc0002f94a0) Stream removed, broadcasting: 5\n"
Jan 21 01:24:29.582: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6997.svc.cluster.local\tcanonical name = externalsvc.services-6997.svc.cluster.local.\nName:\texternalsvc.services-6997.svc.cluster.local\nAddress: 10.96.172.122\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-6997, will wait for the garbage collector to delete the pods
Jan 21 01:24:29.643: INFO: Deleting ReplicationController externalsvc took: 6.524586ms
Jan 21 01:24:30.044: INFO: Terminating ReplicationController externalsvc pods took: 400.576135ms
Jan 21 01:24:43.210: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:24:43.249: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6997" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691

• [SLOW TEST:34.606 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":269,"skipped":4383,"failed":0}
SSSSS
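
Note on the service-type test above: the type flip is a plain spec update, setting type to ExternalName, pointing externalName at a resolvable FQDN, and clearing the allocated clusterIP (required when leaving type=ClusterIP). A sketch using a merge patch (the service and target names follow the log; the patch form is illustrative, as the test updates the object through the client library):

kubectl patch service clusterip-service --namespace=services-6997 --type=merge -p \
  '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-6997.svc.cluster.local","clusterIP":null}}'
# Verify the CNAME the way the test does, with nslookup from an in-cluster pod:
kubectl run dns-check --rm -it --restart=Never --image=busybox:1.29 \
  --namespace=services-6997 -- nslookup clusterip-service
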
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:24:43.267: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W0121 01:25:13.485086       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Jan 21 01:25:13.485: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:25:13.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8594" for this suite.

• [SLOW TEST:30.238 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":270,"skipped":4388,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:25:13.508: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Jan 21 01:25:31.814: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 21 01:25:31.822: INFO: Pod pod-with-prestop-http-hook still exists
Jan 21 01:25:33.823: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 21 01:25:33.832: INFO: Pod pod-with-prestop-http-hook still exists
Jan 21 01:25:35.823: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Jan 21 01:25:35.831: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:25:35.854: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-958" for this suite.

• [SLOW TEST:22.368 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":271,"skipped":4405,"failed":0}
SSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:25:35.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Jan 21 01:25:44.112: INFO: Expected: &{} to match Container's Termination Message:  --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:25:44.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-2839" for this suite.

• [SLOW TEST:8.275 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:131
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4408,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:25:44.155: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:86
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Jan 21 01:25:44.917: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Jan 21 01:25:46.983: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166744, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166744, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166745, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166744, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:25:48.992: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166744, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166744, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166745, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166744, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
Jan 21 01:25:50.991: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166744, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166744, loc:(*time.Location)(0x7d7cf00)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166745, loc:(*time.Location)(0x7d7cf00)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63715166744, loc:(*time.Location)(0x7d7cf00)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Jan 21 01:25:54.769: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
Jan 21 01:25:54.777: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-4650-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:25:56.219: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6865" for this suite.
STEP: Destroying namespace "webhook-6865-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:101

• [SLOW TEST:12.209 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":273,"skipped":4495,"failed":0}
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:25:56.365: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:687
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: creating service multi-endpoint-test in namespace services-8524
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8524 to expose endpoints map[]
Jan 21 01:25:56.537: INFO: successfully validated that service multi-endpoint-test in namespace services-8524 exposes endpoints map[] (34.191799ms elapsed)
STEP: Creating pod pod1 in namespace services-8524
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8524 to expose endpoints map[pod1:[100]]
Jan 21 01:26:00.648: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.096125355s elapsed, will retry)
Jan 21 01:26:05.711: INFO: successfully validated that service multi-endpoint-test in namespace services-8524 exposes endpoints map[pod1:[100]] (9.159177106s elapsed)
STEP: Creating pod pod2 in namespace services-8524
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8524 to expose endpoints map[pod1:[100] pod2:[101]]
Jan 21 01:26:11.060: INFO: Unexpected endpoints: found map[ae8036e5-cf0b-4f2f-a255-5b7bacc38942:[100]], expected map[pod1:[100] pod2:[101]] (5.337599765s elapsed, will retry)
Jan 21 01:26:14.244: INFO: successfully validated that service multi-endpoint-test in namespace services-8524 exposes endpoints map[pod1:[100] pod2:[101]] (8.521079465s elapsed)
STEP: Deleting pod pod1 in namespace services-8524
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8524 to expose endpoints map[pod2:[101]]
Jan 21 01:26:15.530: INFO: successfully validated that service multi-endpoint-test in namespace services-8524 exposes endpoints map[pod2:[101]] (1.277624976s elapsed)
STEP: Deleting pod pod2 in namespace services-8524
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-8524 to expose endpoints map[]
Jan 21 01:26:16.671: INFO: successfully validated that service multi-endpoint-test in namespace services-8524 exposes endpoints map[] (1.118161792s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:26:17.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-8524" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:691

• [SLOW TEST:21.547 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods  [Conformance]","total":278,"completed":274,"skipped":4495,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:26:17.913: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating pod liveness-97a895c5-eada-48aa-8fe5-7b6872e6173d in namespace container-probe-2071
Jan 21 01:26:26.069: INFO: Started pod liveness-97a895c5-eada-48aa-8fe5-7b6872e6173d in namespace container-probe-2071
STEP: checking the pod's current state and verifying that restartCount is present
Jan 21 01:26:26.074: INFO: Initial restart count of pod liveness-97a895c5-eada-48aa-8fe5-7b6872e6173d is 0
Jan 21 01:26:50.190: INFO: Restart count of pod container-probe-2071/liveness-97a895c5-eada-48aa-8fe5-7b6872e6173d is now 1 (24.116200577s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:26:50.227: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-2071" for this suite.

• [SLOW TEST:32.396 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:680
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4512,"failed":0}
S
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:26:50.311: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Performing setup for networking test in namespace pod-network-test-9298
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Jan 21 01:26:50.476: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Jan 21 01:27:22.728: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.1:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9298 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 21 01:27:22.728: INFO: >>> kubeConfig: /root/.kube/config
I0121 01:27:22.804721       8 log.go:172] (0xc0046bc420) (0xc001523ae0) Create stream
I0121 01:27:22.805320       8 log.go:172] (0xc0046bc420) (0xc001523ae0) Stream added, broadcasting: 1
I0121 01:27:22.831507       8 log.go:172] (0xc0046bc420) Reply frame received for 1
I0121 01:27:22.831697       8 log.go:172] (0xc0046bc420) (0xc002348500) Create stream
I0121 01:27:22.831718       8 log.go:172] (0xc0046bc420) (0xc002348500) Stream added, broadcasting: 3
I0121 01:27:22.834360       8 log.go:172] (0xc0046bc420) Reply frame received for 3
I0121 01:27:22.834398       8 log.go:172] (0xc0046bc420) (0xc001523d60) Create stream
I0121 01:27:22.834434       8 log.go:172] (0xc0046bc420) (0xc001523d60) Stream added, broadcasting: 5
I0121 01:27:22.836675       8 log.go:172] (0xc0046bc420) Reply frame received for 5
I0121 01:27:22.959491       8 log.go:172] (0xc0046bc420) Data frame received for 3
I0121 01:27:22.959840       8 log.go:172] (0xc002348500) (3) Data frame handling
I0121 01:27:22.959916       8 log.go:172] (0xc002348500) (3) Data frame sent
I0121 01:27:23.051246       8 log.go:172] (0xc0046bc420) Data frame received for 1
I0121 01:27:23.051747       8 log.go:172] (0xc0046bc420) (0xc002348500) Stream removed, broadcasting: 3
I0121 01:27:23.052144       8 log.go:172] (0xc001523ae0) (1) Data frame handling
I0121 01:27:23.052358       8 log.go:172] (0xc001523ae0) (1) Data frame sent
I0121 01:27:23.052452       8 log.go:172] (0xc0046bc420) (0xc001523d60) Stream removed, broadcasting: 5
I0121 01:27:23.052529       8 log.go:172] (0xc0046bc420) (0xc001523ae0) Stream removed, broadcasting: 1
I0121 01:27:23.052614       8 log.go:172] (0xc0046bc420) Go away received
I0121 01:27:23.053408       8 log.go:172] (0xc0046bc420) (0xc001523ae0) Stream removed, broadcasting: 1
I0121 01:27:23.053456       8 log.go:172] (0xc0046bc420) (0xc002348500) Stream removed, broadcasting: 3
I0121 01:27:23.053468       8 log.go:172] (0xc0046bc420) (0xc001523d60) Stream removed, broadcasting: 5
Jan 21 01:27:23.053: INFO: Found all expected endpoints: [netserver-0]
Jan 21 01:27:23.064: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-9298 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Jan 21 01:27:23.065: INFO: >>> kubeConfig: /root/.kube/config
I0121 01:27:23.109272       8 log.go:172] (0xc0026ebb80) (0xc00010df40) Create stream
I0121 01:27:23.109399       8 log.go:172] (0xc0026ebb80) (0xc00010df40) Stream added, broadcasting: 1
I0121 01:27:23.114629       8 log.go:172] (0xc0026ebb80) Reply frame received for 1
I0121 01:27:23.114724       8 log.go:172] (0xc0026ebb80) (0xc0002d7040) Create stream
I0121 01:27:23.114737       8 log.go:172] (0xc0026ebb80) (0xc0002d7040) Stream added, broadcasting: 3
I0121 01:27:23.115999       8 log.go:172] (0xc0026ebb80) Reply frame received for 3
I0121 01:27:23.116024       8 log.go:172] (0xc0026ebb80) (0xc000f3a320) Create stream
I0121 01:27:23.116036       8 log.go:172] (0xc0026ebb80) (0xc000f3a320) Stream added, broadcasting: 5
I0121 01:27:23.117610       8 log.go:172] (0xc0026ebb80) Reply frame received for 5
I0121 01:27:23.199823       8 log.go:172] (0xc0026ebb80) Data frame received for 3
I0121 01:27:23.199895       8 log.go:172] (0xc0002d7040) (3) Data frame handling
I0121 01:27:23.200092       8 log.go:172] (0xc0002d7040) (3) Data frame sent
I0121 01:27:23.264008       8 log.go:172] (0xc0026ebb80) Data frame received for 1
I0121 01:27:23.264276       8 log.go:172] (0xc0026ebb80) (0xc000f3a320) Stream removed, broadcasting: 5
I0121 01:27:23.264397       8 log.go:172] (0xc00010df40) (1) Data frame handling
I0121 01:27:23.264862       8 log.go:172] (0xc00010df40) (1) Data frame sent
I0121 01:27:23.265017       8 log.go:172] (0xc0026ebb80) (0xc0002d7040) Stream removed, broadcasting: 3
I0121 01:27:23.265123       8 log.go:172] (0xc0026ebb80) (0xc00010df40) Stream removed, broadcasting: 1
I0121 01:27:23.265202       8 log.go:172] (0xc0026ebb80) Go away received
I0121 01:27:23.265423       8 log.go:172] (0xc0026ebb80) (0xc00010df40) Stream removed, broadcasting: 1
I0121 01:27:23.265443       8 log.go:172] (0xc0026ebb80) (0xc0002d7040) Stream removed, broadcasting: 3
I0121 01:27:23.265459       8 log.go:172] (0xc0026ebb80) (0xc000f3a320) Stream removed, broadcasting: 5
Jan 21 01:27:23.265: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:27:23.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-9298" for this suite.

• [SLOW TEST:32.968 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4513,"failed":0}
SSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:27:23.279: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: Creating projection with secret that has name projected-secret-test-f6d00a32-aa93-480b-8b27-f3754df82401
STEP: Creating a pod to test consume secrets
Jan 21 01:27:23.364: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-21d2893c-8696-4488-831c-411591a24d18" in namespace "projected-62" to be "success or failure"
Jan 21 01:27:23.368: INFO: Pod "pod-projected-secrets-21d2893c-8696-4488-831c-411591a24d18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.081278ms
Jan 21 01:27:25.379: INFO: Pod "pod-projected-secrets-21d2893c-8696-4488-831c-411591a24d18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014921679s
Jan 21 01:27:27.387: INFO: Pod "pod-projected-secrets-21d2893c-8696-4488-831c-411591a24d18": Phase="Pending", Reason="", readiness=false. Elapsed: 4.022866687s
Jan 21 01:27:30.113: INFO: Pod "pod-projected-secrets-21d2893c-8696-4488-831c-411591a24d18": Phase="Pending", Reason="", readiness=false. Elapsed: 6.748849916s
Jan 21 01:27:32.130: INFO: Pod "pod-projected-secrets-21d2893c-8696-4488-831c-411591a24d18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.765644932s
STEP: Saw pod success
Jan 21 01:27:32.130: INFO: Pod "pod-projected-secrets-21d2893c-8696-4488-831c-411591a24d18" satisfied condition "success or failure"
Jan 21 01:27:32.138: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-21d2893c-8696-4488-831c-411591a24d18 container projected-secret-volume-test: 
STEP: delete the pod
Jan 21 01:27:32.384: INFO: Waiting for pod pod-projected-secrets-21d2893c-8696-4488-831c-411591a24d18 to disappear
Jan 21 01:27:32.544: INFO: Pod pod-projected-secrets-21d2893c-8696-4488-831c-411591a24d18 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:27:32.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-62" for this suite.

• [SLOW TEST:9.286 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4516,"failed":0}
SSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:174
STEP: Creating a kubernetes client
Jan 21 01:27:32.567: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:279
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1633
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Jan 21 01:27:34.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-6865'
Jan 21 01:27:37.689: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Jan 21 01:27:37.690: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Jan 21 01:27:37.771: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-czgm5]
Jan 21 01:27:37.771: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-czgm5" in namespace "kubectl-6865" to be "running and ready"
Jan 21 01:27:37.779: INFO: Pod "e2e-test-httpd-rc-czgm5": Phase="Pending", Reason="", readiness=false. Elapsed: 7.213382ms
Jan 21 01:27:39.789: INFO: Pod "e2e-test-httpd-rc-czgm5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017317438s
Jan 21 01:27:41.800: INFO: Pod "e2e-test-httpd-rc-czgm5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.028941621s
Jan 21 01:27:43.816: INFO: Pod "e2e-test-httpd-rc-czgm5": Phase="Pending", Reason="", readiness=false. Elapsed: 6.044401882s
Jan 21 01:27:45.826: INFO: Pod "e2e-test-httpd-rc-czgm5": Phase="Running", Reason="", readiness=true. Elapsed: 8.054147745s
Jan 21 01:27:45.826: INFO: Pod "e2e-test-httpd-rc-czgm5" satisfied condition "running and ready"
Jan 21 01:27:45.826: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-czgm5]
Jan 21 01:27:45.826: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-6865'
Jan 21 01:27:46.000: INFO: stderr: ""
Jan 21 01:27:46.000: INFO: stdout:
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message
AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message
[Tue Jan 21 01:27:43.174828 2020] [mpm_event:notice] [pid 1:tid 140554464185192] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations
[Tue Jan 21 01:27:43.174887 2020] [core:notice] [pid 1:tid 140554464185192] AH00094: Command line: 'httpd -D FOREGROUND'
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1638
Jan 21 01:27:46.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-6865'
Jan 21 01:27:46.145: INFO: stderr: ""
Jan 21 01:27:46.146: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:175
Jan 21 01:27:46.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6865" for this suite.

• [SLOW TEST:13.661 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1629
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:685
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image  [Conformance]","total":278,"completed":278,"skipped":4533,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
Jan 21 01:27:46.229: INFO: Running AfterSuite actions on all nodes
Jan 21 01:27:46.230: INFO: Running AfterSuite actions on node 1
Jan 21 01:27:46.230: INFO: Skipping dumping logs from cluster

JUnit report was created: /home/opnfv/functest/results/k8s_conformance/junit_01.xml
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4563,"failed":0}

Ran 278 of 4841 Specs in 6521.931 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4563 Skipped
PASS