I1217 21:09:50.849909 8 e2e.go:92] Starting e2e run "71e6d18d-bb54-4e41-b520-5b2a34a6d31b" on Ginkgo node 1
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1576616989 - Will randomize all specs
Will run 276 of 4897 specs

Dec 17 21:09:50.964: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 21:09:50.969: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
Dec 17 21:09:50.993: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
Dec 17 21:09:51.029: INFO: 10 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
Dec 17 21:09:51.029: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
Dec 17 21:09:51.029: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
Dec 17 21:09:51.041: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
Dec 17 21:09:51.041: INFO: 2 / 2 pods ready in namespace 'kube-system' in daemonset 'weave-net' (0 seconds elapsed)
Dec 17 21:09:51.041: INFO: e2e test version: v1.16.1
Dec 17 21:09:51.043: INFO: kube-apiserver version: v1.16.1
Dec 17 21:09:51.043: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 21:09:51.049: INFO: Cluster IP family: ipv4
SSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:09:51.050: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
Dec 17 21:09:51.127: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77
STEP: Creating service test in namespace statefulset-5652
[It] should perform canary updates and phased rolling updates of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a new StatefulSet
Dec 17 21:09:51.253: INFO: Found 0 stateful pods, waiting for 3
Dec 17 21:10:01.342: INFO: Found 2 stateful pods, waiting for 3
Dec 17 21:10:11.262: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 21:10:11.263: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 21:10:11.263: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 17 21:10:21.265: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 21:10:21.265: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 21:10:21.265: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Dec 17 21:10:21.306: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Not applying an update when the partition is greater than the number of replicas
STEP: Performing a canary update
Dec 17 21:10:31.406: INFO: Updating stateful set ss2
Dec 17 21:10:31.528: INFO: Waiting for Pod statefulset-5652/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
STEP: Restoring Pods to the correct revision when they are deleted
Dec 17 21:10:41.844: INFO: Found 2 stateful pods, waiting for 3
Dec 17 21:10:51.865: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 21:10:51.865: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 21:10:51.865: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 17 21:11:01.857: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 21:11:01.857: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 21:11:01.857: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Performing a phased rolling update
Dec 17 21:11:01.889: INFO: Updating stateful set ss2
Dec 17 21:11:01.901: INFO: Waiting for Pod statefulset-5652/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 17 21:11:12.058: INFO: Updating stateful set ss2
Dec 17 21:11:12.097: INFO: Waiting for StatefulSet statefulset-5652/ss2 to complete update
Dec 17 21:11:12.097: INFO: Waiting for Pod statefulset-5652/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 17 21:11:22.114: INFO: Waiting for StatefulSet statefulset-5652/ss2 to complete update
Dec 17 21:11:22.114: INFO: Waiting for Pod statefulset-5652/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 17 21:11:32.116: INFO: Waiting for StatefulSet statefulset-5652/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
Dec 17 21:11:42.121: INFO: Deleting all statefulset in ns statefulset-5652
Dec 17 21:11:42.126: INFO: Scaling statefulset ss2 to 0
Dec 17 21:12:22.168: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 21:12:22.186: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:12:22.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5652" for this suite.
Dec 17 21:12:30.343: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:12:30.582: INFO: namespace statefulset-5652 deletion completed in 8.304839347s

• [SLOW TEST:159.533 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
    should perform canary updates and phased rolling updates of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
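The partition mechanics this spec exercises are plain kubectl operations. A minimal hand-run equivalent, using the set name, namespace, and images from this run (the exact patches are illustrative, not what the framework executes): ordinals below the partition stay on the old revision, so raising it above replicas-1 freezes the rollout, setting it to replicas-1 canaries only the highest ordinal, and lowering it step by step phases the update.

    # Freeze the rollout, then change the template image (a JSON patch avoids guessing the container name).
    kubectl -n statefulset-5652 patch statefulset ss2 \
      -p '{"spec":{"updateStrategy":{"type":"RollingUpdate","rollingUpdate":{"partition":3}}}}'
    kubectl -n statefulset-5652 patch statefulset ss2 --type=json \
      -p '[{"op":"replace","path":"/spec/template/spec/containers/0/image","value":"docker.io/library/httpd:2.4.39-alpine"}]'
    # Canary: only ss2-2 (ordinal >= 2) moves to the new revision.
    kubectl -n statefulset-5652 patch statefulset ss2 \
      -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":2}}}}'
    # Phased rolling update: drop the partition to 0 and wait for convergence.
    kubectl -n statefulset-5652 patch statefulset ss2 \
      -p '{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}'
    kubectl -n statefulset-5652 rollout status statefulset/ss2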
SSS
------------------------------
[sig-cli] Kubectl client Kubectl logs
  should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:12:30.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1274
STEP: creating a pod
Dec 17 21:12:30.671: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.6 --namespace=kubectl-2061 -- logs-generator --log-lines-total 100 --run-duration 20s'
Dec 17 21:12:32.930: INFO: stderr: ""
Dec 17 21:12:32.930: INFO: stdout: "pod/logs-generator created\n"
[It] should be able to retrieve and filter logs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Waiting for log generator to start.
Dec 17 21:12:32.930: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator]
Dec 17 21:12:32.931: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-2061" to be "running and ready, or succeeded"
Dec 17 21:12:33.079: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 147.786842ms
Dec 17 21:12:35.086: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.155520425s
Dec 17 21:12:37.093: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 4.161782474s
Dec 17 21:12:39.098: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 6.167187451s
Dec 17 21:12:41.104: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 8.173031829s
Dec 17 21:12:41.104: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded"
Dec 17 21:12:41.104: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. Pods: [logs-generator]
STEP: checking for matching strings
Dec 17 21:12:41.104: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2061'
Dec 17 21:12:41.358: INFO: stderr: ""
Dec 17 21:12:41.359: INFO: stdout: "I1217 21:12:39.243068 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/gg5c 468\nI1217 21:12:39.443319 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/crf 257\nI1217 21:12:39.643677 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/pqxw 330\nI1217 21:12:39.843462 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/2ms 573\nI1217 21:12:40.043769 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/gw8r 474\nI1217 21:12:40.243638 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/9vzc 262\nI1217 21:12:40.443377 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/7lm 492\nI1217 21:12:40.643262 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/tfxc 363\nI1217 21:12:40.843432 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/s8fw 422\nI1217 21:12:41.043281 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/fzmd 274\nI1217 21:12:41.243431 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/hf2q 520\n"
STEP: limiting log lines
Dec 17 21:12:41.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2061 --tail=1'
Dec 17 21:12:41.471: INFO: stderr: ""
Dec 17 21:12:41.471: INFO: stdout: "I1217 21:12:41.443473 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/xcvr 352\n"
STEP: limiting log bytes
Dec 17 21:12:41.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2061 --limit-bytes=1'
Dec 17 21:12:41.566: INFO: stderr: ""
Dec 17 21:12:41.566: INFO: stdout: "I"
STEP: exposing timestamps
Dec 17 21:12:41.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2061 --tail=1 --timestamps'
Dec 17 21:12:41.697: INFO: stderr: ""
Dec 17 21:12:41.697: INFO: stdout: "2019-12-17T21:12:41.644547848Z I1217 21:12:41.643340 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/rjk 536\n"
STEP: restricting to a time range
Dec 17 21:12:44.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2061 --since=1s'
Dec 17 21:12:44.356: INFO: stderr: ""
Dec 17 21:12:44.356: INFO: stdout: "I1217 21:12:43.443480 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/rzw 488\nI1217 21:12:43.643493 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/kr4 527\nI1217 21:12:43.843375 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/d2nd 596\nI1217 21:12:44.043232 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/vvc9 442\nI1217 21:12:44.243334 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/wgh 272\n"
Dec 17 21:12:44.357: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-2061 --since=24h'
Dec 17 21:12:44.499: INFO: stderr: ""
Dec 17 21:12:44.500: INFO: stdout: "I1217 21:12:39.243068 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/gg5c 468\nI1217 21:12:39.443319 1 logs_generator.go:76] 1 POST /api/v1/namespaces/ns/pods/crf 257\nI1217 21:12:39.643677 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/kube-system/pods/pqxw 330\nI1217 21:12:39.843462 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/ns/pods/2ms 573\nI1217 21:12:40.043769 1 logs_generator.go:76] 4 POST /api/v1/namespaces/ns/pods/gw8r 474\nI1217 21:12:40.243638 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/kube-system/pods/9vzc 262\nI1217 21:12:40.443377 1 logs_generator.go:76] 6 POST /api/v1/namespaces/ns/pods/7lm 492\nI1217 21:12:40.643262 1 logs_generator.go:76] 7 POST /api/v1/namespaces/ns/pods/tfxc 363\nI1217 21:12:40.843432 1 logs_generator.go:76] 8 PUT /api/v1/namespaces/kube-system/pods/s8fw 422\nI1217 21:12:41.043281 1 logs_generator.go:76] 9 GET /api/v1/namespaces/default/pods/fzmd 274\nI1217 21:12:41.243431 1 logs_generator.go:76] 10 GET /api/v1/namespaces/ns/pods/hf2q 520\nI1217 21:12:41.443473 1 logs_generator.go:76] 11 GET /api/v1/namespaces/default/pods/xcvr 352\nI1217 21:12:41.643340 1 logs_generator.go:76] 12 GET /api/v1/namespaces/ns/pods/rjk 536\nI1217 21:12:41.843264 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/gt4m 519\nI1217 21:12:42.043281 1 logs_generator.go:76] 14 POST /api/v1/namespaces/default/pods/lrv 499\nI1217 21:12:42.243305 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/default/pods/9x5 567\nI1217 21:12:42.443350 1 logs_generator.go:76] 16 POST /api/v1/namespaces/default/pods/2tx 596\nI1217 21:12:42.643286 1 logs_generator.go:76] 17 GET /api/v1/namespaces/ns/pods/bzj 394\nI1217 21:12:42.843310 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/default/pods/npc 292\nI1217 21:12:43.043279 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/default/pods/z79r 203\nI1217 21:12:43.243329 1 logs_generator.go:76] 20 GET /api/v1/namespaces/default/pods/d6j 403\nI1217 21:12:43.443480 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/rzw 488\nI1217 21:12:43.643493 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/kr4 527\nI1217 21:12:43.843375 1 logs_generator.go:76] 23 GET /api/v1/namespaces/kube-system/pods/d2nd 596\nI1217 21:12:44.043232 1 logs_generator.go:76] 24 PUT /api/v1/namespaces/ns/pods/vvc9 442\nI1217 21:12:44.243334 1 logs_generator.go:76] 25 POST /api/v1/namespaces/kube-system/pods/wgh 272\nI1217 21:12:44.443184 1 logs_generator.go:76] 26 POST /api/v1/namespaces/ns/pods/tcz 465\n"
[AfterEach] Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1280
Dec 17 21:12:44.501: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-2061'
Dec 17 21:12:56.652: INFO: stderr: ""
Dec 17 21:12:56.652: INFO: stdout: "pod \"logs-generator\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:12:56.653: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2061" for this suite.
Dec 17 21:13:02.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:13:02.851: INFO: namespace kubectl-2061 deletion completed in 6.185608534s

• [SLOW TEST:32.269 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl logs
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1270
    should be able to retrieve and filter logs [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
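The filters this spec exercises map one-to-one onto kubectl flags, all visible in the commands above. A quick reference against the pod from this run (second positional argument is the container name):

    kubectl -n kubectl-2061 logs logs-generator logs-generator                # full container log
    kubectl -n kubectl-2061 logs logs-generator --tail=1                      # last line only
    kubectl -n kubectl-2061 logs logs-generator --limit-bytes=1               # first byte only
    kubectl -n kubectl-2061 logs logs-generator --tail=1 --timestamps         # prefix RFC3339 timestamps
    kubectl -n kubectl-2061 logs logs-generator --since=1s                    # only entries from the last second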
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:13:02.854: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 17 21:13:03.460: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 17 21:13:05.481: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712213983, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712213983, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712213983, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712213983, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 21:13:07.515: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712213983, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712213983, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712213983, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712213983, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 21:13:09.492: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712213983, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712213983, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712213983, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712213983, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 21:13:12.559: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 21:13:12.573: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the custom resource webhook via the AdmissionRegistration API
STEP: Creating a custom resource that should be denied by the webhook
STEP: Creating a custom resource whose deletion would be denied by the webhook
STEP: Updating the custom resource with disallowed data should be denied
STEP: Deleting the custom resource should be denied
STEP: Remove the offending key and value from the custom resource data
STEP: Deleting the updated custom resource should be successful
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:13:13.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7263" for this suite.
Dec 17 21:13:19.710: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:13:19.823: INFO: namespace webhook-7263 deletion completed in 6.21167689s
STEP: Destroying namespace "webhook-7263-markers" for this suite.
Dec 17 21:13:25.858: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:13:25.996: INFO: namespace webhook-7263-markers deletion completed in 6.173274735s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:23.160 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny custom resource creation, update and deletion [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
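The registration the test performs programmatically corresponds roughly to the manifest below. This is a sketch only: the CRD group stable.example.com, resource crontabs, and the webhook path are placeholders, while the service name e2e-test-webhook and namespace come from this run. A real registration also needs a caBundle for the serving certificate.

    kubectl apply -f - <<'EOF'
    apiVersion: admissionregistration.k8s.io/v1
    kind: ValidatingWebhookConfiguration
    metadata:
      name: deny-crontab-changes          # hypothetical name
    webhooks:
    - name: deny.crd.example.com          # hypothetical webhook identifier
      rules:
      - apiGroups: ["stable.example.com"] # placeholder CRD group
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE", "DELETE"]
        resources: ["crontabs"]           # placeholder CRD resource
      clientConfig:
        service:
          namespace: webhook-7263         # namespace from this run
          name: e2e-test-webhook
          path: /custom-resource          # placeholder path
        # caBundle: <base64-encoded CA cert> is required against a real server
      admissionReviewVersions: ["v1"]
      sideEffects: None
      failurePolicy: Fail
    EOF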
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:13:26.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a service externalname-service with the type=ExternalName in namespace services-1317
STEP: changing the ExternalName service to type=ClusterIP
STEP: creating replication controller externalname-service in namespace services-1317
I1217 21:13:26.404852 8 runners.go:184] Created replication controller with name: externalname-service, namespace: services-1317, replica count: 2
I1217 21:13:29.456566 8 runners.go:184] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 21:13:32.457347 8 runners.go:184] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 21:13:35.458138 8 runners.go:184] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 21:13:38.459252 8 runners.go:184] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 17 21:13:38.459: INFO: Creating new exec pod
Dec 17 21:13:47.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1317 execpodqm7h5 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Dec 17 21:13:48.251: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Dec 17 21:13:48.251: INFO: stdout: ""
Dec 17 21:13:48.253: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1317 execpodqm7h5 -- /bin/sh -x -c nc -zv -t -w 2 10.97.14.156 80'
Dec 17 21:13:48.611: INFO: stderr: "+ nc -zv -t -w 2 10.97.14.156 80\nConnection to 10.97.14.156 80 port [tcp/http] succeeded!\n"
Dec 17 21:13:48.612: INFO: stdout: ""
Dec 17 21:13:48.612: INFO: Cleaning up the ExternalName to ClusterIP test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:13:48.727: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1317" for this suite.
Dec 17 21:13:56.761: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:13:56.846: INFO: namespace services-1317 deletion completed in 8.111366159s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95

• [SLOW TEST:30.829 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to ClusterIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
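The type change being verified can be reproduced by hand. A sketch using the service name and namespace from this run; the DNS target is a placeholder, and note that switching away from ExternalName requires clearing spec.externalName (a merge patch with null removes the field) and supplying ports:

    kubectl -n services-1317 create service externalname externalname-service --external-name=example.com
    kubectl -n services-1317 patch service externalname-service --type=merge \
      -p '{"spec":{"type":"ClusterIP","externalName":null,"ports":[{"port":80,"targetPort":80}]}}'
    # Reachability check as run by the test, from a pod in the same namespace:
    kubectl -n services-1317 exec execpodqm7h5 -- sh -c 'nc -zv -t -w 2 externalname-service 80'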
SS
------------------------------
[sig-storage] EmptyDir volumes
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:13:56.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir volume type on tmpfs
Dec 17 21:13:58.408: INFO: Waiting up to 5m0s for pod "pod-a7613003-cb12-463d-b8c9-47b1ad9f14b8" in namespace "emptydir-1424" to be "success or failure"
Dec 17 21:13:58.419: INFO: Pod "pod-a7613003-cb12-463d-b8c9-47b1ad9f14b8": Phase="Pending", Reason="", readiness=false. Elapsed: 10.671952ms
Dec 17 21:14:00.559: INFO: Pod "pod-a7613003-cb12-463d-b8c9-47b1ad9f14b8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.150914393s
Dec 17 21:14:02.576: INFO: Pod "pod-a7613003-cb12-463d-b8c9-47b1ad9f14b8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.167737018s
Dec 17 21:14:04.596: INFO: Pod "pod-a7613003-cb12-463d-b8c9-47b1ad9f14b8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.187927291s
Dec 17 21:14:06.603: INFO: Pod "pod-a7613003-cb12-463d-b8c9-47b1ad9f14b8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.194771716s
Dec 17 21:14:08.622: INFO: Pod "pod-a7613003-cb12-463d-b8c9-47b1ad9f14b8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.213681053s
STEP: Saw pod success
Dec 17 21:14:08.622: INFO: Pod "pod-a7613003-cb12-463d-b8c9-47b1ad9f14b8" satisfied condition "success or failure"
Dec 17 21:14:08.635: INFO: Trying to get logs from node jerma-node pod pod-a7613003-cb12-463d-b8c9-47b1ad9f14b8 container test-container:
STEP: delete the pod
Dec 17 21:14:08.858: INFO: Waiting for pod pod-a7613003-cb12-463d-b8c9-47b1ad9f14b8 to disappear
Dec 17 21:14:08.889: INFO: Pod pod-a7613003-cb12-463d-b8c9-47b1ad9f14b8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:14:08.889: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1424" for this suite.
Dec 17 21:14:14.925: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:14:15.038: INFO: namespace emptydir-1424 deletion completed in 6.133394185s

• [SLOW TEST:18.192 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
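What the mode check amounts to: an emptyDir with medium Memory is tmpfs-backed and mounted with default mode 0777. A hand-run sketch (pod and volume names are illustrative, not from this run):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: tmpfs-demo                # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "mount | grep /mnt/volume && stat -c '%a' /mnt/volume"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt/volume
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory              # tmpfs-backed
    EOF
    kubectl logs tmpfs-demo           # expect a tmpfs mount entry and mode 777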
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:14:15.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 17 21:14:16.075: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 17 21:14:18.095: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214056, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214056, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214056, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214056, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 21:14:20.110: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214056, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214056, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214056, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214056, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 21:14:22.107: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214056, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214056, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214056, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214056, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 21:14:25.189: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214056, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214056, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214056, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214056, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 21:14:27.387: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Registering a mutating webhook on
ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API
STEP: Creating a dummy validating-webhook-configuration object
STEP: Deleting the validating-webhook-configuration, which should be possible to remove
STEP: Creating a dummy mutating-webhook-configuration object
STEP: Deleting the mutating-webhook-configuration, which should be possible to remove
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:14:27.601: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7624" for this suite.
Dec 17 21:14:35.652: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:14:35.794: INFO: namespace webhook-7624 deletion completed in 8.183627957s
STEP: Destroying namespace "webhook-7624-markers" for this suite.
Dec 17 21:14:41.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:14:41.999: INFO: namespace webhook-7624-markers deletion completed in 6.205342474s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:26.981 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSS
------------------------------
[sig-scheduling] NoExecuteTaintManager Single Pod [Serial]
  removing taint cancels eviction [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:14:42.022: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename taint-single-pod
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:164
Dec 17 21:14:42.140: INFO: Waiting up to 1m0s for all nodes to be ready
Dec 17 21:15:42.183: INFO: Waiting for terminating namespaces to be deleted...
[It] removing taint cancels eviction [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 21:15:42.192: INFO: Starting informer...
STEP: Starting pod...
Dec 17 21:15:42.236: INFO: Pod is running on jerma-node. Tainting Node
STEP: Trying to apply a taint on the Node
STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
STEP: Waiting short time to make sure Pod is queued for deletion
Dec 17 21:15:42.340: INFO: Pod wasn't evicted. Proceeding
Dec 17 21:15:42.341: INFO: Removing taint from Node
STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
STEP: Waiting some time to make sure that toleration time passed.
Dec 17 21:16:57.437: INFO: Pod wasn't evicted. Test successful
[AfterEach] [sig-scheduling] NoExecuteTaintManager Single Pod [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:16:57.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "taint-single-pod-4257" for this suite.
Dec 17 21:17:25.474: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:17:25.631: INFO: namespace taint-single-pod-4257 deletion completed in 28.183323525s

• [SLOW TEST:163.610 seconds]
[sig-scheduling] NoExecuteTaintManager Single Pod [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  removing taint cancels eviction [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
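The taint dance above is reproducible with kubectl. Node name and taint key come from this run; the toleration fragment is the kind of spec the test pod carries (the 600s value is illustrative):

    kubectl taint nodes jerma-node kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
    # A pod stays scheduled through the NoExecute window if it tolerates the taint long enough:
    #   tolerations:
    #   - key: kubernetes.io/e2e-evict-taint-key
    #     operator: Equal
    #     value: evictTaintVal
    #     effect: NoExecute
    #     tolerationSeconds: 600
    # Removing the taint (note the trailing dash) cancels the pending eviction:
    kubectl taint nodes jerma-node kubernetes.io/e2e-evict-taint-key:NoExecute-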
SSSS
------------------------------
[sig-storage] Downward API volume
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:17:25.632: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating the pod
Dec 17 21:17:34.375: INFO: Successfully updated pod "labelsupdate13384179-0c2b-4a66-9e25-804d7ff315e9"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:17:36.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-369" for this suite.
Dec 17 21:18:04.497: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:18:04.646: INFO: namespace downward-api-369 deletion completed in 28.179048415s

• [SLOW TEST:39.014 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
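What "update labels on modification" exercises is the downwardAPI volume refresh: the kubelet rewrites the projected file after a label change, with some propagation delay. A minimal sketch (pod and volume names are hypothetical):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: labels-demo               # hypothetical name
      labels:
        build: "1"
    spec:
      containers:
      - name: client
        image: busybox
        command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
    EOF
    kubectl label pod labels-demo build=2 --overwrite
    kubectl logs labels-demo --tail=2   # the projected file eventually shows build="2"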
SSSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods
  should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:18:04.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Performing setup for networking test in namespace pod-network-test-5142
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 17 21:18:04.870: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 17 21:18:45.243: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5142 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 21:18:45.243: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 21:18:46.527: INFO: Found all expected endpoints: [netserver-0]
Dec 17 21:18:46.537: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.44.0.1 8081 | grep -v '^\s*$'] Namespace:pod-network-test-5142 PodName:host-test-container-pod ContainerName:agnhost Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 21:18:46.538: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 21:18:47.838: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:18:47.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5142" for this suite.
Dec 17 21:19:01.910: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:19:02.053: INFO: namespace pod-network-test-5142 deletion completed in 14.197900487s

• [SLOW TEST:57.406 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
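The probe the framework runs is visible in the ExecWithOptions entries above and is easy to repeat by hand. Pod name, namespace, container, and endpoint IP are taken from this run; the port 8081 is the netserver's UDP listener:

    # Send a UDP datagram from the host-network test pod to a netserver endpoint and
    # read back the hostname it reports; non-empty output means the path works.
    kubectl -n pod-network-test-5142 exec host-test-container-pod -c agnhost -- \
      sh -c "echo hostName | nc -w 1 -u 10.32.0.4 8081 | grep -v '^\s*$'"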
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:19:02.054: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 17 21:19:02.503: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 17 21:19:04.526: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214342, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214342, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214342, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214342, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 21:19:06.696: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214342, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214342, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214342, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214342, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 21:19:08.575: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214342, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214342, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214342, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712214342, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 21:19:11.712: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Listing all of the created validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Deleting the collection of validation webhooks
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:19:12.435: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-4846" for this suite.
Dec 17 21:19:18.485: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:19:18.613: INFO: namespace webhook-4846 deletion completed in 6.171636404s
STEP: Destroying namespace "webhook-4846-markers" for this suite.
Dec 17 21:19:24.725: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:19:24.880: INFO: namespace webhook-4846-markers deletion completed in 6.266296242s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:22.840 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing validating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
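Outside the framework, the list and collection-delete steps correspond to ordinary kubectl operations on the cluster-scoped configuration objects. The label selector below is a placeholder; the e2e framework labels its webhooks, but the exact key is not shown in this log:

    kubectl get validatingwebhookconfigurations                        # list them all
    kubectl delete validatingwebhookconfigurations -l some-label=value # delete a labeled collection (placeholder selector)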
SSSSSS
------------------------------
[sig-storage] Secrets
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:19:24.895: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name secret-test-a4d6756c-db55-4e7a-b311-30ebb7bf1439
STEP: Creating a pod to test consume secrets
Dec 17 21:19:25.077: INFO: Waiting up to 5m0s for pod "pod-secrets-02cc0526-8f01-425a-b90e-eda7a9d36b87" in namespace "secrets-6465" to be "success or failure"
Dec 17 21:19:25.141: INFO: Pod "pod-secrets-02cc0526-8f01-425a-b90e-eda7a9d36b87": Phase="Pending", Reason="", readiness=false. Elapsed: 63.582185ms
Dec 17 21:19:27.152: INFO: Pod "pod-secrets-02cc0526-8f01-425a-b90e-eda7a9d36b87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.074538709s
Dec 17 21:19:29.160: INFO: Pod "pod-secrets-02cc0526-8f01-425a-b90e-eda7a9d36b87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.082960888s
Dec 17 21:19:31.170: INFO: Pod "pod-secrets-02cc0526-8f01-425a-b90e-eda7a9d36b87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.092828496s
Dec 17 21:19:33.183: INFO: Pod "pod-secrets-02cc0526-8f01-425a-b90e-eda7a9d36b87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.10579041s
STEP: Saw pod success
Dec 17 21:19:33.183: INFO: Pod "pod-secrets-02cc0526-8f01-425a-b90e-eda7a9d36b87" satisfied condition "success or failure"
Dec 17 21:19:33.195: INFO: Trying to get logs from node jerma-node pod pod-secrets-02cc0526-8f01-425a-b90e-eda7a9d36b87 container secret-volume-test:
STEP: delete the pod
Dec 17 21:19:33.425: INFO: Waiting for pod pod-secrets-02cc0526-8f01-425a-b90e-eda7a9d36b87 to disappear
Dec 17 21:19:33.433: INFO: Pod pod-secrets-02cc0526-8f01-425a-b90e-eda7a9d36b87 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:19:33.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6465" for this suite.
Dec 17 21:19:39.467: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:19:39.594: INFO: namespace secrets-6465 deletion completed in 6.153768781s

• [SLOW TEST:14.700 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
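Consuming one secret through two volume mounts, as this spec does, needs nothing special: two volumes can reference the same secretName. A sketch with hypothetical names:

    kubectl create secret generic my-secret --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: secret-two-mounts         # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: secret-volume-test
        image: busybox
        command: ["sh", "-c", "ls -l /etc/secret-1 /etc/secret-2"]
        volumeMounts:
        - name: vol-1
          mountPath: /etc/secret-1
          readOnly: true
        - name: vol-2
          mountPath: /etc/secret-2
          readOnly: true
      volumes:
      - name: vol-1
        secret:
          secretName: my-secret
      - name: vol-2
        secret:
          secretName: my-secret
    EOF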
SSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container
  should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:19:39.595: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 17 21:19:47.872: INFO: Expected: &{} to match Container's Termination Message: --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:19:48.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-3839" for this suite.
Dec 17 21:19:54.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:19:54.141: INFO: namespace container-runtime-3839 deletion completed in 6.116476502s

• [SLOW TEST:14.546 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:132
      should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
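The policy under test: with terminationMessagePolicy FallbackToLogsOnError, container logs are copied into the termination message only when the container fails, so a clean exit leaves the message empty, which is exactly what the assertion above checks. A sketch (pod name hypothetical) plus how to read the field back:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: termmsg-demo              # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox
        command: ["sh", "-c", "echo some log output; exit 0"]
        terminationMessagePolicy: FallbackToLogsOnError
    EOF
    kubectl get pod termmsg-demo \
      -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # empty on success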
SSSSSSSSSSS
------------------------------
[k8s.io] KubeletManagedEtcHosts
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:19:54.142: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Setting up the test
STEP: Creating hostNetwork=false pod
STEP: Creating hostNetwork=true pod
STEP: Running the test
STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false
Dec 17 21:20:14.302: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5782 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 21:20:14.302: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 21:20:14.559: INFO: Exec stderr: ""
Dec 17 21:20:14.560: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5782 PodName:test-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 21:20:14.560: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 21:20:14.725: INFO: Exec stderr: ""
Dec 17 21:20:14.726: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5782 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 21:20:14.726: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 21:20:14.997: INFO: Exec stderr: ""
Dec 17 21:20:14.998: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5782 PodName:test-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 21:20:14.998: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 21:20:15.300: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount
Dec 17 21:20:15.300: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5782 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 21:20:15.300: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 21:20:15.482: INFO: Exec stderr: ""
Dec 17 21:20:15.482: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5782 PodName:test-pod ContainerName:busybox-3 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 21:20:15.482: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 21:20:15.665: INFO: Exec stderr: ""
STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true
Dec 17 21:20:15.665: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5782 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 21:20:15.665: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 21:20:15.816: INFO: Exec stderr: ""
Dec 17 21:20:15.816: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5782 PodName:test-host-network-pod ContainerName:busybox-1 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 21:20:15.817: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 21:20:15.995: INFO: Exec stderr: ""
Dec 17 21:20:15.996: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5782 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 21:20:15.996: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 21:20:16.188: INFO: Exec stderr: ""
Dec 17 21:20:16.188: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5782 PodName:test-host-network-pod ContainerName:busybox-2 Stdin:<nil> CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 21:20:16.188: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 21:20:16.374: INFO: Exec stderr: ""
[AfterEach] [k8s.io] KubeletManagedEtcHosts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:20:16.375: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "e2e-kubelet-etc-hosts-5782" for this suite.
Dec 17 21:21:08.432: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:21:08.618: INFO: namespace e2e-kubelet-etc-hosts-5782 deletion completed in 52.217452472s

• [SLOW TEST:74.477 seconds]
[k8s.io] KubeletManagedEtcHosts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 17 21:21:25.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4705" for this suite. Dec 17 21:21:33.869: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered Dec 17 21:21:34.011: INFO: namespace resourcequota-4705 deletion completed in 8.17955837s • [SLOW TEST:25.392 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 ------------------------------ SSSSSSSS ------------------------------ [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] evicts pods with minTolerationSeconds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 [BeforeEach] [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client Dec 17 21:21:34.012: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename taint-multiple-pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/taints.go:345 Dec 17 21:21:34.100: INFO: Waiting up to 1m0s for all nodes to be ready Dec 17 21:22:34.126: INFO: Waiting for terminating namespaces to be deleted... [It] evicts pods with minTolerationSeconds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698 Dec 17 21:22:34.132: INFO: Starting informer... STEP: Starting pods... Dec 17 21:22:34.358: INFO: Pod1 is running on jerma-node. Tainting Node Dec 17 21:22:44.673: INFO: Pod2 is running on jerma-node. Tainting Node STEP: Trying to apply a taint on the Node STEP: verifying the node has the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute STEP: Waiting for Pod1 and Pod2 to be deleted Dec 17 21:22:54.700: INFO: Noticed Pod "taint-eviction-b1" gets evicted. Dec 17 21:23:16.690: INFO: Noticed Pod "taint-eviction-b2" gets evicted. STEP: verifying the node doesn't have the taint kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute [AfterEach] [sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 Dec 17 21:23:16.728: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "taint-multiple-pods-2757" for this suite. 
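[Editor's note] The eviction above is driven by a NoExecute taint plus a per-pod tolerationSeconds. A rough by-hand equivalent using the same taint key and value the test logs (the node name matches the log; the pod spec and seconds are illustrative):

kubectl taint nodes jerma-node kubernetes.io/e2e-evict-taint-key=evictTaintVal:NoExecute
# a pod that tolerates the taint for ~10s before the taint manager evicts it:
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: taint-toleration-demo    # hypothetical name
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
  tolerations:
  - key: kubernetes.io/e2e-evict-taint-key
    operator: Equal
    value: evictTaintVal
    effect: NoExecute
    tolerationSeconds: 10
EOF
kubectl taint nodes jerma-node kubernetes.io/e2e-evict-taint-key-   # trailing '-' removes the taint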
Dec 17 21:23:22.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:23:23.039: INFO: namespace taint-multiple-pods-2757 deletion completed in 6.305009899s
• [SLOW TEST:109.027 seconds]
[sig-scheduling] NoExecuteTaintManager Multiple Pods [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  evicts pods with minTolerationSeconds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl label
  should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:23:23.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1192
STEP: creating the pod
Dec 17 21:23:23.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2056'
Dec 17 21:23:25.912: INFO: stderr: ""
Dec 17 21:23:25.912: INFO: stdout: "pod/pause created\n"
Dec 17 21:23:25.912: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause]
Dec 17 21:23:25.912: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-2056" to be "running and ready"
Dec 17 21:23:25.926: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 13.546874ms
Dec 17 21:23:27.940: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027611767s
Dec 17 21:23:29.948: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036097567s
Dec 17 21:23:31.963: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050519221s
Dec 17 21:23:33.979: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 8.066295858s
Dec 17 21:23:33.979: INFO: Pod "pause" satisfied condition "running and ready"
Dec 17 21:23:33.979: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause]
[It] should update the label on a resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: adding the label testing-label with value testing-label-value to a pod
Dec 17 21:23:33.979: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-2056'
Dec 17 21:23:34.257: INFO: stderr: ""
Dec 17 21:23:34.257: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod has the label testing-label with the value testing-label-value
Dec 17 21:23:34.258: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2056'
Dec 17 21:23:34.373: INFO: stderr: ""
Dec 17 21:23:34.374: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s testing-label-value\n"
STEP: removing the label testing-label of a pod
Dec 17 21:23:34.374: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-2056'
Dec 17 21:23:34.494: INFO: stderr: ""
Dec 17 21:23:34.494: INFO: stdout: "pod/pause labeled\n"
STEP: verifying the pod doesn't have the label testing-label
Dec 17 21:23:34.495: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-2056'
Dec 17 21:23:34.636: INFO: stderr: ""
Dec 17 21:23:34.637: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 9s \n"
[AfterEach] Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1199
STEP: using delete to clean up resources
Dec 17 21:23:34.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2056'
Dec 17 21:23:34.769: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 21:23:34.769: INFO: stdout: "pod \"pause\" force deleted\n"
Dec 17 21:23:34.769: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-2056'
Dec 17 21:23:34.934: INFO: stderr: "No resources found in kubectl-2056 namespace.\n"
Dec 17 21:23:34.934: INFO: stdout: ""
Dec 17 21:23:34.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-2056 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 17 21:23:35.065: INFO: stderr: ""
Dec 17 21:23:35.065: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:23:35.066: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2056" for this suite.
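[Editor's note] Stripped of the test harness, the label round-trip above is just these three commands:

kubectl label pods pause testing-label=testing-label-value   # add the label
kubectl get pod pause -L testing-label                       # -L prints the label as a column
kubectl label pods pause testing-label-                      # trailing '-' removes the label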
Dec 17 21:23:41.094: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:23:41.203: INFO: namespace kubectl-2056 deletion completed in 6.131973301s
• [SLOW TEST:18.163 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl label
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1189
    should update the label on a resource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
S
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:23:41.203: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with projected pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod pod-subpath-test-projected-424x
STEP: Creating a pod to test atomic-volume-subpath
Dec 17 21:23:42.545: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-424x" in namespace "subpath-9411" to be "success or failure"
Dec 17 21:23:42.684: INFO: Pod "pod-subpath-test-projected-424x": Phase="Pending", Reason="", readiness=false. Elapsed: 138.205209ms
Dec 17 21:23:44.846: INFO: Pod "pod-subpath-test-projected-424x": Phase="Pending", Reason="", readiness=false. Elapsed: 2.300346889s
Dec 17 21:23:46.860: INFO: Pod "pod-subpath-test-projected-424x": Phase="Pending", Reason="", readiness=false. Elapsed: 4.314370293s
Dec 17 21:23:48.875: INFO: Pod "pod-subpath-test-projected-424x": Phase="Running", Reason="", readiness=true. Elapsed: 6.329109131s
Dec 17 21:23:50.883: INFO: Pod "pod-subpath-test-projected-424x": Phase="Running", Reason="", readiness=true. Elapsed: 8.337316768s
Dec 17 21:23:52.890: INFO: Pod "pod-subpath-test-projected-424x": Phase="Running", Reason="", readiness=true. Elapsed: 10.344085258s
Dec 17 21:23:54.900: INFO: Pod "pod-subpath-test-projected-424x": Phase="Running", Reason="", readiness=true. Elapsed: 12.354856646s
Dec 17 21:23:56.909: INFO: Pod "pod-subpath-test-projected-424x": Phase="Running", Reason="", readiness=true. Elapsed: 14.362938861s
Dec 17 21:23:58.916: INFO: Pod "pod-subpath-test-projected-424x": Phase="Running", Reason="", readiness=true. Elapsed: 16.370653423s
Dec 17 21:24:00.923: INFO: Pod "pod-subpath-test-projected-424x": Phase="Running", Reason="", readiness=true. Elapsed: 18.377122133s
Dec 17 21:24:02.930: INFO: Pod "pod-subpath-test-projected-424x": Phase="Running", Reason="", readiness=true. Elapsed: 20.38421175s
Dec 17 21:24:04.939: INFO: Pod "pod-subpath-test-projected-424x": Phase="Running", Reason="", readiness=true. Elapsed: 22.392915471s
Dec 17 21:24:06.950: INFO: Pod "pod-subpath-test-projected-424x": Phase="Running", Reason="", readiness=true. Elapsed: 24.404131808s
Dec 17 21:24:09.000: INFO: Pod "pod-subpath-test-projected-424x": Phase="Running", Reason="", readiness=true. Elapsed: 26.453955593s
Dec 17 21:24:11.007: INFO: Pod "pod-subpath-test-projected-424x": Phase="Succeeded", Reason="", readiness=false. Elapsed: 28.461046553s
STEP: Saw pod success
Dec 17 21:24:11.007: INFO: Pod "pod-subpath-test-projected-424x" satisfied condition "success or failure"
Dec 17 21:24:11.010: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-projected-424x container test-container-subpath-projected-424x:
STEP: delete the pod
Dec 17 21:24:11.068: INFO: Waiting for pod pod-subpath-test-projected-424x to disappear
Dec 17 21:24:11.083: INFO: Pod pod-subpath-test-projected-424x no longer exists
STEP: Deleting pod pod-subpath-test-projected-424x
Dec 17 21:24:11.084: INFO: Deleting pod "pod-subpath-test-projected-424x" in namespace "subpath-9411"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:24:11.087: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-9411" for this suite.
Dec 17 21:24:17.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:24:17.380: INFO: namespace subpath-9411 deletion completed in 6.22462699s
• [SLOW TEST:36.177 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with projected pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
S
------------------------------
[sig-cli] Kubectl client Kubectl run pod
  should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:24:17.380: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1668
[It] should create a pod from an image when restart is Never [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 17 21:24:17.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8050'
Dec 17 21:24:17.681: INFO: stderr: ""
Dec 17 21:24:17.682: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod was created
[AfterEach] Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1673
Dec 17 21:24:17.687: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-8050'
Dec 17 21:24:26.694: INFO: stderr: ""
Dec 17 21:24:26.694: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:24:26.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8050" for this suite.
Dec 17 21:24:32.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:24:32.914: INFO: namespace kubectl-8050 deletion completed in 6.213026245s
• [SLOW TEST:15.534 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1664
    should create a pod from an image when restart is Never [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
[sig-cli] Kubectl client Kubectl run default
  should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:24:32.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1403
[It] should create an rc or deployment from an image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 17 21:24:32.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9291'
Dec 17 21:24:33.168: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 17 21:24:33.168: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created
[AfterEach] Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1409
Dec 17 21:24:35.250: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9291'
Dec 17 21:24:35.525: INFO: stderr: ""
Dec 17 21:24:35.526: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:24:35.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9291" for this suite.
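[Editor's note] The two kubectl run specs above exercise both generators: --restart=Never yields a bare pod, while the default generator (deprecated, per the stderr warning captured above) yields a Deployment. Roughly equivalent commands, following the warning's own suggestion:

kubectl run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine
kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine   # replaces the deployment generator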
Dec 17 21:24:41.574: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:24:41.734: INFO: namespace kubectl-9291 deletion completed in 6.192290182s
• [SLOW TEST:8.820 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run default
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1397
    should create an rc or deployment from an image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
S
------------------------------
[sig-apps] Daemon set [Serial]
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:24:41.734: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 21:24:41.894: INFO: Create a RollingUpdate DaemonSet
Dec 17 21:24:41.900: INFO: Check that daemon pods launch on every node of the cluster
Dec 17 21:24:41.925: INFO: Number of nodes with available pods: 0
Dec 17 21:24:41.925: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:24:42.953: INFO: Number of nodes with available pods: 0
Dec 17 21:24:42.953: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:24:44.502: INFO: Number of nodes with available pods: 0
Dec 17 21:24:44.502: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:24:45.479: INFO: Number of nodes with available pods: 0
Dec 17 21:24:45.480: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:24:45.953: INFO: Number of nodes with available pods: 0
Dec 17 21:24:45.953: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:24:46.940: INFO: Number of nodes with available pods: 0
Dec 17 21:24:46.940: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:24:49.719: INFO: Number of nodes with available pods: 0
Dec 17 21:24:49.719: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:24:50.113: INFO: Number of nodes with available pods: 0
Dec 17 21:24:50.113: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:24:50.939: INFO: Number of nodes with available pods: 0
Dec 17 21:24:50.939: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:24:51.962: INFO: Number of nodes with available pods: 2
Dec 17 21:24:51.962: INFO: Number of running nodes: 2, number of available pods: 2
Dec 17 21:24:51.962: INFO: Update the DaemonSet to trigger a rollout
Dec 17 21:24:51.981: INFO: Updating DaemonSet daemon-set
Dec 17 21:25:07.033: INFO: Roll back the DaemonSet before rollout is complete
Dec 17 21:25:07.049: INFO: Updating DaemonSet daemon-set
Dec 17 21:25:07.049: INFO: Make sure DaemonSet rollback is complete
Dec 17 21:25:07.075: INFO: Wrong image for pod: daemon-set-zrplx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Dec 17 21:25:07.075: INFO: Pod daemon-set-zrplx is not available
Dec 17 21:25:08.099: INFO: Wrong image for pod: daemon-set-zrplx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Dec 17 21:25:08.100: INFO: Pod daemon-set-zrplx is not available
Dec 17 21:25:09.097: INFO: Wrong image for pod: daemon-set-zrplx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Dec 17 21:25:09.097: INFO: Pod daemon-set-zrplx is not available
Dec 17 21:25:10.090: INFO: Wrong image for pod: daemon-set-zrplx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Dec 17 21:25:10.091: INFO: Pod daemon-set-zrplx is not available
Dec 17 21:25:11.130: INFO: Wrong image for pod: daemon-set-zrplx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Dec 17 21:25:11.131: INFO: Pod daemon-set-zrplx is not available
Dec 17 21:25:12.090: INFO: Wrong image for pod: daemon-set-zrplx. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent.
Dec 17 21:25:12.091: INFO: Pod daemon-set-zrplx is not available
Dec 17 21:25:13.091: INFO: Pod daemon-set-q4dr8 is not available
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5459, will wait for the garbage collector to delete the pods
Dec 17 21:25:13.182: INFO: Deleting DaemonSet.extensions daemon-set took: 20.847582ms
Dec 17 21:25:13.682: INFO: Terminating DaemonSet.extensions daemon-set pods took: 500.785972ms
Dec 17 21:25:20.989: INFO: Number of nodes with available pods: 0
Dec 17 21:25:20.989: INFO: Number of running nodes: 0, number of available pods: 0
Dec 17 21:25:20.999: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5459/daemonsets","resourceVersion":"9138689"},"items":null}
Dec 17 21:25:21.005: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5459/pods","resourceVersion":"9138689"},"items":null}
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:25:21.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5459" for this suite.
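[Editor's note] The rollback the test drives through the API is roughly what kubectl does with rollout undo. A sketch against the DaemonSet above ('app' is a stand-in for the container name, which the log does not show):

kubectl -n daemonsets-5459 set image daemonset/daemon-set app=foo:non-existent   # the bad mid-rollout update
kubectl -n daemonsets-5459 rollout undo daemonset/daemon-set                     # roll back before it completes
kubectl -n daemonsets-5459 rollout status daemonset/daemon-set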
Dec 17 21:25:27.047: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:25:27.145: INFO: namespace daemonsets-5459 deletion completed in 6.120871798s
• [SLOW TEST:45.411 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should rollback without unnecessary restarts [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes
  should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:25:27.147: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with secret pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod pod-subpath-test-secret-2mzl
STEP: Creating a pod to test atomic-volume-subpath
Dec 17 21:25:27.218: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-2mzl" in namespace "subpath-5569" to be "success or failure"
Dec 17 21:25:27.222: INFO: Pod "pod-subpath-test-secret-2mzl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.316695ms
Dec 17 21:25:29.230: INFO: Pod "pod-subpath-test-secret-2mzl": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012294535s
Dec 17 21:25:31.241: INFO: Pod "pod-subpath-test-secret-2mzl": Phase="Pending", Reason="", readiness=false. Elapsed: 4.023435484s
Dec 17 21:25:33.250: INFO: Pod "pod-subpath-test-secret-2mzl": Phase="Pending", Reason="", readiness=false. Elapsed: 6.031963444s
Dec 17 21:25:35.259: INFO: Pod "pod-subpath-test-secret-2mzl": Phase="Running", Reason="", readiness=true. Elapsed: 8.040659305s
Dec 17 21:25:37.268: INFO: Pod "pod-subpath-test-secret-2mzl": Phase="Running", Reason="", readiness=true. Elapsed: 10.050495846s
Dec 17 21:25:39.279: INFO: Pod "pod-subpath-test-secret-2mzl": Phase="Running", Reason="", readiness=true. Elapsed: 12.061019351s
Dec 17 21:25:41.293: INFO: Pod "pod-subpath-test-secret-2mzl": Phase="Running", Reason="", readiness=true. Elapsed: 14.07527123s
Dec 17 21:25:43.302: INFO: Pod "pod-subpath-test-secret-2mzl": Phase="Running", Reason="", readiness=true. Elapsed: 16.084415349s
Dec 17 21:25:45.324: INFO: Pod "pod-subpath-test-secret-2mzl": Phase="Running", Reason="", readiness=true. Elapsed: 18.10597145s
Dec 17 21:25:47.336: INFO: Pod "pod-subpath-test-secret-2mzl": Phase="Running", Reason="", readiness=true. Elapsed: 20.11793092s
Dec 17 21:25:49.346: INFO: Pod "pod-subpath-test-secret-2mzl": Phase="Running", Reason="", readiness=true. Elapsed: 22.128292481s
Dec 17 21:25:51.354: INFO: Pod "pod-subpath-test-secret-2mzl": Phase="Running", Reason="", readiness=true. Elapsed: 24.136034767s
Dec 17 21:25:53.363: INFO: Pod "pod-subpath-test-secret-2mzl": Phase="Running", Reason="", readiness=true. Elapsed: 26.145028151s
Dec 17 21:25:55.371: INFO: Pod "pod-subpath-test-secret-2mzl": Phase="Running", Reason="", readiness=true. Elapsed: 28.152998685s
Dec 17 21:25:57.380: INFO: Pod "pod-subpath-test-secret-2mzl": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.162501816s
STEP: Saw pod success
Dec 17 21:25:57.381: INFO: Pod "pod-subpath-test-secret-2mzl" satisfied condition "success or failure"
Dec 17 21:25:57.385: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-secret-2mzl container test-container-subpath-secret-2mzl:
STEP: delete the pod
Dec 17 21:25:57.463: INFO: Waiting for pod pod-subpath-test-secret-2mzl to disappear
Dec 17 21:25:57.469: INFO: Pod pod-subpath-test-secret-2mzl no longer exists
STEP: Deleting pod pod-subpath-test-secret-2mzl
Dec 17 21:25:57.469: INFO: Deleting pod "pod-subpath-test-secret-2mzl" in namespace "subpath-5569"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:25:57.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-5569" for this suite.
Dec 17 21:26:03.563: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:26:03.697: INFO: namespace subpath-5569 deletion completed in 6.212702598s
• [SLOW TEST:36.550 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with secret pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook
  should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:26:03.699: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
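[Editor's note] Recapping the two Atomic writer subPath specs above (projected and secret) before the lifecycle-hook steps begin: each mounts a single entry of an atomically-written volume via subPath. A minimal sketch with hypothetical names, not the tests' actual pod specs:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo             # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["cat", "/test-volume/data"]
    volumeMounts:
    - name: vol
      mountPath: /test-volume/data
      subPath: data              # mounts one key of the secret, not the whole volume
  volumes:
  - name: vol
    secret:
      secretName: demo-secret    # hypothetical secret
EOF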
[It] should execute poststart exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 17 21:28:34.077: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 21:28:34.086: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 21:28:36.087: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 21:28:36.097: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 21:28:38.086: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 21:28:38.094: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 21:28:40.086: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 21:28:40.099: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 21:28:42.086: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 21:28:42.094: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 21:28:44.086: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 21:28:44.094: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 21:28:46.086: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 21:28:46.094: INFO: Pod pod-with-poststart-exec-hook still exists
Dec 17 21:28:48.086: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear
Dec 17 21:28:48.098: INFO: Pod pod-with-poststart-exec-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:28:48.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3065" for this suite.
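[Editor's note] The spec above attaches a postStart exec hook to a pod and polls a handler pod to confirm it fired. The shape of such a pod, with illustrative names and commands:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: poststart-demo           # hypothetical name
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 600"]
    lifecycle:
      postStart:
        exec:
          command: ["sh", "-c", "echo started > /tmp/poststart"]   # runs right after the container starts
EOF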
Dec 17 21:29:16.177: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:29:16.300: INFO: namespace container-lifecycle-hook-3065 deletion completed in 28.194067664s
• [SLOW TEST:192.601 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSS
------------------------------
[sig-network] Services
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:29:16.301: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating service nodeport-test with type=NodePort in namespace services-1412
STEP: creating replication controller nodeport-test in namespace services-1412
I1217 21:29:16.662382       8 runners.go:184] Created replication controller with name: nodeport-test, namespace: services-1412, replica count: 2
I1217 21:29:19.714240       8 runners.go:184] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 21:29:22.714721       8 runners.go:184] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 21:29:25.715155       8 runners.go:184] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
I1217 21:29:28.716341       8 runners.go:184] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady
Dec 17 21:29:28.716: INFO: Creating new exec pod
Dec 17 21:29:37.806: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1412 execpodjl8gd -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80'
Dec 17 21:29:40.773: INFO: stderr: "+ nc -zv -t -w 2 nodeport-test 80\nConnection to nodeport-test 80 port [tcp/http] succeeded!\n"
Dec 17 21:29:40.773: INFO: stdout: ""
Dec 17 21:29:40.776: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1412 execpodjl8gd -- /bin/sh -x -c nc -zv -t -w 2 10.106.248.244 80'
Dec 17 21:29:41.180: INFO: stderr: "+ nc -zv -t -w 2 10.106.248.244 80\nConnection to 10.106.248.244 80 port [tcp/http] succeeded!\n"
Dec 17 21:29:41.180: INFO: stdout: ""
Dec 17 21:29:41.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1412 execpodjl8gd -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.170 31074'
Dec 17 21:29:41.561: INFO: stderr: "+ nc -zv -t -w 2 10.96.2.170 31074\nConnection to 10.96.2.170 31074 port [tcp/31074] succeeded!\n"
Dec 17 21:29:41.561: INFO: stdout: ""
Dec 17 21:29:41.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1412 execpodjl8gd -- /bin/sh -x -c nc -zv -t -w 2 10.96.3.35 31074'
Dec 17 21:29:41.923: INFO: stderr: "+ nc -zv -t -w 2 10.96.3.35 31074\nConnection to 10.96.3.35 31074 port [tcp/31074] succeeded!\n"
Dec 17 21:29:41.924: INFO: stdout: ""
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:29:41.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1412" for this suite.
Dec 17 21:29:47.966: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:29:48.067: INFO: namespace services-1412 deletion completed in 6.135756082s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95
• [SLOW TEST:31.767 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:29:48.068: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating projection with secret that has name projected-secret-test-map-5b524f69-ea3f-4012-b0cb-bfd3e8131c5f
STEP: Creating a pod to test consume secrets
Dec 17 21:29:48.239: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-b5b3484c-028e-4365-8b73-7f0ed1200a68" in namespace "projected-8665" to be "success or failure"
Dec 17 21:29:48.248: INFO: Pod "pod-projected-secrets-b5b3484c-028e-4365-8b73-7f0ed1200a68": Phase="Pending", Reason="", readiness=false. Elapsed: 9.008705ms
Dec 17 21:29:50.264: INFO: Pod "pod-projected-secrets-b5b3484c-028e-4365-8b73-7f0ed1200a68": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024436964s
Dec 17 21:29:52.285: INFO: Pod "pod-projected-secrets-b5b3484c-028e-4365-8b73-7f0ed1200a68": Phase="Pending", Reason="", readiness=false. Elapsed: 4.046020758s
Dec 17 21:29:54.826: INFO: Pod "pod-projected-secrets-b5b3484c-028e-4365-8b73-7f0ed1200a68": Phase="Pending", Reason="", readiness=false. Elapsed: 6.5867129s
Dec 17 21:29:56.833: INFO: Pod "pod-projected-secrets-b5b3484c-028e-4365-8b73-7f0ed1200a68": Phase="Pending", Reason="", readiness=false. Elapsed: 8.593155244s
Dec 17 21:29:58.841: INFO: Pod "pod-projected-secrets-b5b3484c-028e-4365-8b73-7f0ed1200a68": Phase="Pending", Reason="", readiness=false. Elapsed: 10.601411541s
Dec 17 21:30:00.882: INFO: Pod "pod-projected-secrets-b5b3484c-028e-4365-8b73-7f0ed1200a68": Phase="Pending", Reason="", readiness=false. Elapsed: 12.642525118s
Dec 17 21:30:02.893: INFO: Pod "pod-projected-secrets-b5b3484c-028e-4365-8b73-7f0ed1200a68": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.653729802s
STEP: Saw pod success
Dec 17 21:30:02.893: INFO: Pod "pod-projected-secrets-b5b3484c-028e-4365-8b73-7f0ed1200a68" satisfied condition "success or failure"
Dec 17 21:30:02.900: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-b5b3484c-028e-4365-8b73-7f0ed1200a68 container projected-secret-volume-test:
STEP: delete the pod
Dec 17 21:30:03.042: INFO: Waiting for pod pod-projected-secrets-b5b3484c-028e-4365-8b73-7f0ed1200a68 to disappear
Dec 17 21:30:03.048: INFO: Pod pod-projected-secrets-b5b3484c-028e-4365-8b73-7f0ed1200a68 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:30:03.048: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8665" for this suite.
Dec 17 21:30:09.077: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:30:09.231: INFO: namespace projected-8665 deletion completed in 6.173637478s
• [SLOW TEST:21.163 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
[sig-api-machinery] ResourceQuota
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:30:09.232: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a ResourceQuota
STEP: Getting a ResourceQuota
STEP: Updating a ResourceQuota
STEP: Verifying a ResourceQuota was modified
STEP: Deleting a ResourceQuota
STEP: Verifying the deleted ResourceQuota
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:30:09.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-8058" for this suite.
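[Editor's note] Both ResourceQuota specs above (the secret-lifecycle one earlier and the update/delete one here) boil down to comparing spec.hard against status.used. A hand-driven sketch; the quota name and limit are illustrative:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota               # hypothetical name
spec:
  hard:
    secrets: "10"
EOF
kubectl get resourcequota test-quota -o jsonpath='{.status.used.secrets}'   # tracks secret creation and deletion
kubectl delete resourcequota test-quota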
Dec 17 21:30:15.423: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:30:15.626: INFO: namespace resourcequota-8058 deletion completed in 6.228009165s
• [SLOW TEST:6.395 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to update and delete ResourceQuota. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a read only busybox container
  should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:30:15.628: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:30:23.837: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-103" for this suite.
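[Editor's note] The read-only spec above relies on the container-level securityContext; any write to the root filesystem then fails with a read-only file system error. A minimal sketch with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: readonly-demo            # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo x > /file"]   # expected to fail: read-only file system
    securityContext:
      readOnlyRootFilesystem: true
EOF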
Dec 17 21:31:07.873: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:31:07.990: INFO: namespace kubelet-test-103 deletion completed in 44.145362025s
• [SLOW TEST:52.363 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when scheduling a read only busybox container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:187
    should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:31:07.992: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9215.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done
STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9215.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 17 21:31:22.115: INFO: Unable to read wheezy_udp@kubernetes.default.svc.cluster.local from pod dns-9215/dns-test-4e7d8587-4ae7-4bd3-b65d-af9994c6ce3c: the server could not find the requested resource (get pods dns-test-4e7d8587-4ae7-4bd3-b65d-af9994c6ce3c)
Dec 17 21:31:22.132: INFO: Unable to read wheezy_tcp@kubernetes.default.svc.cluster.local from pod dns-9215/dns-test-4e7d8587-4ae7-4bd3-b65d-af9994c6ce3c: the server could not find the requested resource (get pods dns-test-4e7d8587-4ae7-4bd3-b65d-af9994c6ce3c)
Dec 17 21:31:22.142: INFO: Unable to read wheezy_udp@PodARecord from pod dns-9215/dns-test-4e7d8587-4ae7-4bd3-b65d-af9994c6ce3c: the server could not find the requested resource (get pods dns-test-4e7d8587-4ae7-4bd3-b65d-af9994c6ce3c)
Dec 17 21:31:22.155: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-9215/dns-test-4e7d8587-4ae7-4bd3-b65d-af9994c6ce3c: the server could not find the requested resource (get pods dns-test-4e7d8587-4ae7-4bd3-b65d-af9994c6ce3c)
Dec 17 21:31:22.191: INFO: Lookups using dns-9215/dns-test-4e7d8587-4ae7-4bd3-b65d-af9994c6ce3c failed for: [wheezy_udp@kubernetes.default.svc.cluster.local wheezy_tcp@kubernetes.default.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord]
Dec 17 21:31:27.254: INFO: DNS probes using dns-9215/dns-test-4e7d8587-4ae7-4bd3-b65d-af9994c6ce3c succeeded
STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:31:28.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-9215" for this suite.
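[Editor's note] The dig loops above simply confirm the cluster DNS name resolves over both UDP and TCP. A one-off check along the same lines (image choice illustrative; busybox's nslookup is cruder than dig and does not distinguish UDP from TCP):

kubectl run dns-check --image=busybox --restart=Never --rm -it -- nslookup kubernetes.default.svc.cluster.local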
Dec 17 21:31:34.565: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:31:34.749: INFO: namespace dns-9215 deletion completed in 6.359722692s
• [SLOW TEST:26.757 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for the cluster [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
[k8s.io] InitContainer [NodeConformance]
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:31:34.750: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
Dec 17 21:31:34.827: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:31:47.978: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-5538" for this suite.
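[Editor's note] The init-container spec above builds a pod whose spec.initContainers must all exit 0 before the app container starts. The minimal shape, with illustrative names:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-demo                # hypothetical name
spec:
  restartPolicy: Never
  initContainers:
  - name: init-1
    image: busybox
    command: ["true"]            # must succeed before 'main' runs
  containers:
  - name: main
    image: busybox
    command: ["true"]
EOF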
Dec 17 21:31:54.011: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:31:54.104: INFO: namespace init-container-5538 deletion completed in 6.114804706s
• [SLOW TEST:19.354 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should invoke init containers on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected combined
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:31:54.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-projected-all-test-volume-e8cb89b0-00d1-4887-9bfb-39483db67013
STEP: Creating secret with name secret-projected-all-test-volume-b293b5e8-37ee-4843-9837-152bc9276b6f
STEP: Creating a pod to test Check all projections for projected volume plugin
Dec 17 21:31:54.194: INFO: Waiting up to 5m0s for pod "projected-volume-3af24e85-8ddf-4b27-a21a-1c5e0b2f620f" in namespace "projected-564" to be "success or failure"
Dec 17 21:31:54.222: INFO: Pod "projected-volume-3af24e85-8ddf-4b27-a21a-1c5e0b2f620f": Phase="Pending", Reason="", readiness=false. Elapsed: 28.333888ms
Dec 17 21:31:56.233: INFO: Pod "projected-volume-3af24e85-8ddf-4b27-a21a-1c5e0b2f620f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039001843s
Dec 17 21:31:58.246: INFO: Pod "projected-volume-3af24e85-8ddf-4b27-a21a-1c5e0b2f620f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.052643918s
Dec 17 21:32:00.253: INFO: Pod "projected-volume-3af24e85-8ddf-4b27-a21a-1c5e0b2f620f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.059167561s
Dec 17 21:32:02.268: INFO: Pod "projected-volume-3af24e85-8ddf-4b27-a21a-1c5e0b2f620f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.073867528s
STEP: Saw pod success
Dec 17 21:32:02.268: INFO: Pod "projected-volume-3af24e85-8ddf-4b27-a21a-1c5e0b2f620f" satisfied condition "success or failure"
Dec 17 21:32:02.275: INFO: Trying to get logs from node jerma-node pod projected-volume-3af24e85-8ddf-4b27-a21a-1c5e0b2f620f container projected-all-volume-test:
STEP: delete the pod
Dec 17 21:32:02.371: INFO: Waiting for pod projected-volume-3af24e85-8ddf-4b27-a21a-1c5e0b2f620f to disappear
Dec 17 21:32:02.377: INFO: Pod projected-volume-3af24e85-8ddf-4b27-a21a-1c5e0b2f620f no longer exists
[AfterEach] [sig-storage] Projected combined
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:32:02.377: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-564" for this suite.
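[Editor's note] The projected-volume specs above (the secret mapping earlier and the combined one here) merge several sources into one mount. A sketch of a combined projection; all names are hypothetical:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-demo           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["ls", "/projected"]
    volumeMounts:
    - name: all-in-one
      mountPath: /projected
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: demo-config      # hypothetical ConfigMap
      - secret:
          name: demo-secret      # hypothetical Secret
      - downwardAPI:
          items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
EOF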
Dec 17 21:32:08.420: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:32:08.548: INFO: namespace projected-564 deletion completed in 6.162918519s

• [SLOW TEST:14.444 seconds]
[sig-storage] Projected combined
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_combined.go:32
  should project all components that make up the projection API [Projection][NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:32:08.550: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 21:32:08.770: INFO: (0) /api/v1/nodes/jerma-node:10250/proxy/logs/:
alternatives.log alternatives.l... (200; 35.793166ms)
Dec 17 21:32:08.818: INFO: (1) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 46.641934ms)
Dec 17 21:32:08.855: INFO: (2) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 35.568691ms)
Dec 17 21:32:08.861: INFO: (3) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.207555ms)
Dec 17 21:32:08.866: INFO: (4) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.725572ms)
Dec 17 21:32:08.870: INFO: (5) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.804197ms)
Dec 17 21:32:08.873: INFO: (6) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.174534ms)
Dec 17 21:32:08.877: INFO: (7) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.314007ms)
Dec 17 21:32:08.883: INFO: (8) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 5.278225ms)
Dec 17 21:32:08.891: INFO: (9) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 8.13285ms)
Dec 17 21:32:08.902: INFO: (10) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 11.572618ms)
Dec 17 21:32:08.944: INFO: (11) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 41.49114ms)
Dec 17 21:32:08.952: INFO: (12) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 7.313204ms)
Dec 17 21:32:08.958: INFO: (13) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 6.500773ms)
Dec 17 21:32:08.963: INFO: (14) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.554405ms)
Dec 17 21:32:08.967: INFO: (15) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.871224ms)
Dec 17 21:32:08.972: INFO: (16) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.542774ms)
Dec 17 21:32:08.975: INFO: (17) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.685776ms)
Dec 17 21:32:08.980: INFO: (18) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 4.580752ms)
Dec 17 21:32:08.983: INFO: (19) /api/v1/nodes/jerma-node:10250/proxy/logs/: alternatives.log alternatives.l... (200; 3.282233ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:32:08.983: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6995" for this suite.
Dec 17 21:32:15.035: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:32:15.213: INFO: namespace proxy-6995 deletion completed in 6.222679662s

• [SLOW TEST:6.663 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy logs on node with explicit kubelet port using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
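Each numbered request above is a GET against the API server's node proxy subresource, which forwards to the kubelet on the explicitly named port (10250) and returns its /logs/ directory listing (hence the alternatives.log entries). A minimal way to reproduce one request by hand, using kubectl's --raw passthrough and the node name from this log:

# Fetch the kubelet's log directory listing through the API server proxy.
# ":10250" pins the kubelet port explicitly, as in the test above.
kubectl --kubeconfig=/root/.kube/config get --raw \
  "/api/v1/nodes/jerma-node:10250/proxy/logs/"
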
SSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:32:15.213: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating projection with secret that has name projected-secret-test-61b3ca33-89bc-4b3c-925e-51650c97e54c
STEP: Creating a pod to test consume secrets
Dec 17 21:32:15.312: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3714ca6f-9500-4848-b0d9-f9a091f6bbcd" in namespace "projected-898" to be "success or failure"
Dec 17 21:32:15.320: INFO: Pod "pod-projected-secrets-3714ca6f-9500-4848-b0d9-f9a091f6bbcd": Phase="Pending", Reason="", readiness=false. Elapsed: 7.625549ms
Dec 17 21:32:17.350: INFO: Pod "pod-projected-secrets-3714ca6f-9500-4848-b0d9-f9a091f6bbcd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.037245452s
Dec 17 21:32:19.360: INFO: Pod "pod-projected-secrets-3714ca6f-9500-4848-b0d9-f9a091f6bbcd": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047804676s
Dec 17 21:32:21.369: INFO: Pod "pod-projected-secrets-3714ca6f-9500-4848-b0d9-f9a091f6bbcd": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056946048s
Dec 17 21:32:23.380: INFO: Pod "pod-projected-secrets-3714ca6f-9500-4848-b0d9-f9a091f6bbcd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067773578s
STEP: Saw pod success
Dec 17 21:32:23.381: INFO: Pod "pod-projected-secrets-3714ca6f-9500-4848-b0d9-f9a091f6bbcd" satisfied condition "success or failure"
Dec 17 21:32:23.385: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-3714ca6f-9500-4848-b0d9-f9a091f6bbcd container projected-secret-volume-test: 
STEP: delete the pod
Dec 17 21:32:23.497: INFO: Waiting for pod pod-projected-secrets-3714ca6f-9500-4848-b0d9-f9a091f6bbcd to disappear
Dec 17 21:32:23.528: INFO: Pod pod-projected-secrets-3714ca6f-9500-4848-b0d9-f9a091f6bbcd no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:32:23.528: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-898" for this suite.
Dec 17 21:32:29.573: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:32:29.671: INFO: namespace projected-898 deletion completed in 6.135970342s

• [SLOW TEST:14.458 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
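The pod this test builds can be approximated by hand: a projected volume carrying a Secret, a non-default defaultMode, and a pod-level securityContext with a non-root UID and an fsGroup so the mounted files end up group-owned. A sketch with illustrative names, UIDs, and modes (the test generates its own):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-secret-demo        # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                  # non-root
    fsGroup: 2000                    # group ownership applied to the volume
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls -ln /etc/projected"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/projected
  volumes:
  - name: secret-volume
    projected:
      defaultMode: 0440              # the mode under test
      sources:
      - secret:
          name: demo-secret          # assumes this Secret already exists
EOF
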
SSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:32:29.671: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod pod-subpath-test-downwardapi-qgqr
STEP: Creating a pod to test atomic-volume-subpath
Dec 17 21:32:29.821: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-qgqr" in namespace "subpath-350" to be "success or failure"
Dec 17 21:32:29.867: INFO: Pod "pod-subpath-test-downwardapi-qgqr": Phase="Pending", Reason="", readiness=false. Elapsed: 45.36164ms
Dec 17 21:32:31.888: INFO: Pod "pod-subpath-test-downwardapi-qgqr": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066240121s
Dec 17 21:32:33.898: INFO: Pod "pod-subpath-test-downwardapi-qgqr": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076405309s
Dec 17 21:32:35.908: INFO: Pod "pod-subpath-test-downwardapi-qgqr": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086214929s
Dec 17 21:32:37.930: INFO: Pod "pod-subpath-test-downwardapi-qgqr": Phase="Running", Reason="", readiness=true. Elapsed: 8.108693872s
Dec 17 21:32:39.939: INFO: Pod "pod-subpath-test-downwardapi-qgqr": Phase="Running", Reason="", readiness=true. Elapsed: 10.117601151s
Dec 17 21:32:41.954: INFO: Pod "pod-subpath-test-downwardapi-qgqr": Phase="Running", Reason="", readiness=true. Elapsed: 12.132257808s
Dec 17 21:32:43.961: INFO: Pod "pod-subpath-test-downwardapi-qgqr": Phase="Running", Reason="", readiness=true. Elapsed: 14.13974723s
Dec 17 21:32:45.970: INFO: Pod "pod-subpath-test-downwardapi-qgqr": Phase="Running", Reason="", readiness=true. Elapsed: 16.148558413s
Dec 17 21:32:47.991: INFO: Pod "pod-subpath-test-downwardapi-qgqr": Phase="Running", Reason="", readiness=true. Elapsed: 18.169906161s
Dec 17 21:32:50.000: INFO: Pod "pod-subpath-test-downwardapi-qgqr": Phase="Running", Reason="", readiness=true. Elapsed: 20.178885801s
Dec 17 21:32:52.009: INFO: Pod "pod-subpath-test-downwardapi-qgqr": Phase="Running", Reason="", readiness=true. Elapsed: 22.187523541s
Dec 17 21:32:54.017: INFO: Pod "pod-subpath-test-downwardapi-qgqr": Phase="Running", Reason="", readiness=true. Elapsed: 24.195597527s
Dec 17 21:32:56.027: INFO: Pod "pod-subpath-test-downwardapi-qgqr": Phase="Running", Reason="", readiness=true. Elapsed: 26.205227119s
Dec 17 21:32:58.034: INFO: Pod "pod-subpath-test-downwardapi-qgqr": Phase="Running", Reason="", readiness=true. Elapsed: 28.212250248s
Dec 17 21:33:00.044: INFO: Pod "pod-subpath-test-downwardapi-qgqr": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.222837624s
STEP: Saw pod success
Dec 17 21:33:00.045: INFO: Pod "pod-subpath-test-downwardapi-qgqr" satisfied condition "success or failure"
Dec 17 21:33:00.049: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-downwardapi-qgqr container test-container-subpath-downwardapi-qgqr: 
STEP: delete the pod
Dec 17 21:33:00.092: INFO: Waiting for pod pod-subpath-test-downwardapi-qgqr to disappear
Dec 17 21:33:00.098: INFO: Pod pod-subpath-test-downwardapi-qgqr no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-qgqr
Dec 17 21:33:00.099: INFO: Deleting pod "pod-subpath-test-downwardapi-qgqr" in namespace "subpath-350"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:33:00.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-350" for this suite.
Dec 17 21:33:06.141: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:33:06.279: INFO: namespace subpath-350 deletion completed in 6.162029663s

• [SLOW TEST:36.608 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with downward pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
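The atomic-writer check above mounts a single downward API file through subPath, then keeps the pod running long enough for the kubelet's periodic symlink-swap updates to land. A minimal hand-rolled pod with the same shape (names are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-downwardapi-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container-subpath
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /mnt/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /mnt/podname
      subPath: podname               # mount one file from the volume, not the directory
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
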
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:33:06.281: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 21:33:06.458: INFO: Waiting up to 5m0s for pod "downwardapi-volume-10f39c26-8d55-40e2-a849-0c3f6656b9a2" in namespace "projected-6780" to be "success or failure"
Dec 17 21:33:06.488: INFO: Pod "downwardapi-volume-10f39c26-8d55-40e2-a849-0c3f6656b9a2": Phase="Pending", Reason="", readiness=false. Elapsed: 29.411633ms
Dec 17 21:33:08.498: INFO: Pod "downwardapi-volume-10f39c26-8d55-40e2-a849-0c3f6656b9a2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039438769s
Dec 17 21:33:10.516: INFO: Pod "downwardapi-volume-10f39c26-8d55-40e2-a849-0c3f6656b9a2": Phase="Pending", Reason="", readiness=false. Elapsed: 4.057472955s
Dec 17 21:33:12.559: INFO: Pod "downwardapi-volume-10f39c26-8d55-40e2-a849-0c3f6656b9a2": Phase="Pending", Reason="", readiness=false. Elapsed: 6.100577574s
Dec 17 21:33:14.570: INFO: Pod "downwardapi-volume-10f39c26-8d55-40e2-a849-0c3f6656b9a2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.111060459s
STEP: Saw pod success
Dec 17 21:33:14.570: INFO: Pod "downwardapi-volume-10f39c26-8d55-40e2-a849-0c3f6656b9a2" satisfied condition "success or failure"
Dec 17 21:33:14.578: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-10f39c26-8d55-40e2-a849-0c3f6656b9a2 container client-container: 
STEP: delete the pod
Dec 17 21:33:14.640: INFO: Waiting for pod downwardapi-volume-10f39c26-8d55-40e2-a849-0c3f6656b9a2 to disappear
Dec 17 21:33:14.644: INFO: Pod downwardapi-volume-10f39c26-8d55-40e2-a849-0c3f6656b9a2 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:33:14.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6780" for this suite.
Dec 17 21:33:20.691: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:33:20.807: INFO: namespace projected-6780 deletion completed in 6.158158676s

• [SLOW TEST:14.526 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
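"Set mode on item file" here means a per-item mode that overrides the volume's defaultMode inside a projected downward API source; the client-container simply reports the resulting permissions. A sketch with assumed names (note stat -L, since the kubelet exposes each file through a symlink):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-downwardapi-mode-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -Lc '%a %n' /etc/podinfo/podname"]   # expect 400
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
            mode: 0400                   # per-item mode overriding defaultMode
EOF
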
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl replace 
  should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:33:20.809: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1704
[It] should update a single-container pod's image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 17 21:33:20.876: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-3010'
Dec 17 21:33:21.061: INFO: stderr: ""
Dec 17 21:33:21.061: INFO: stdout: "pod/e2e-test-httpd-pod created\n"
STEP: verifying the pod e2e-test-httpd-pod is running
STEP: verifying the pod e2e-test-httpd-pod was created
Dec 17 21:33:31.113: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-3010 -o json'
Dec 17 21:33:33.454: INFO: stderr: ""
Dec 17 21:33:33.454: INFO: stdout: "{\n    \"apiVersion\": \"v1\",\n    \"kind\": \"Pod\",\n    \"metadata\": {\n        \"creationTimestamp\": \"2019-12-17T21:33:21Z\",\n        \"labels\": {\n            \"run\": \"e2e-test-httpd-pod\"\n        },\n        \"name\": \"e2e-test-httpd-pod\",\n        \"namespace\": \"kubectl-3010\",\n        \"resourceVersion\": \"9139786\",\n        \"selfLink\": \"/api/v1/namespaces/kubectl-3010/pods/e2e-test-httpd-pod\",\n        \"uid\": \"24b0ead8-b4ea-4684-b9af-d19f27f3de08\"\n    },\n    \"spec\": {\n        \"containers\": [\n            {\n                \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n                \"imagePullPolicy\": \"IfNotPresent\",\n                \"name\": \"e2e-test-httpd-pod\",\n                \"resources\": {},\n                \"terminationMessagePath\": \"/dev/termination-log\",\n                \"terminationMessagePolicy\": \"File\",\n                \"volumeMounts\": [\n                    {\n                        \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n                        \"name\": \"default-token-2hxxs\",\n                        \"readOnly\": true\n                    }\n                ]\n            }\n        ],\n        \"dnsPolicy\": \"ClusterFirst\",\n        \"enableServiceLinks\": true,\n        \"nodeName\": \"jerma-node\",\n        \"priority\": 0,\n        \"restartPolicy\": \"Always\",\n        \"schedulerName\": \"default-scheduler\",\n        \"securityContext\": {},\n        \"serviceAccount\": \"default\",\n        \"serviceAccountName\": \"default\",\n        \"terminationGracePeriodSeconds\": 30,\n        \"tolerations\": [\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/not-ready\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            },\n            {\n                \"effect\": \"NoExecute\",\n                \"key\": \"node.kubernetes.io/unreachable\",\n                \"operator\": \"Exists\",\n                \"tolerationSeconds\": 300\n            }\n        ],\n        \"volumes\": [\n            {\n                \"name\": \"default-token-2hxxs\",\n                \"secret\": {\n                    \"defaultMode\": 420,\n                    \"secretName\": \"default-token-2hxxs\"\n                }\n            }\n        ]\n    },\n    \"status\": {\n        \"conditions\": [\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-17T21:33:21Z\",\n                \"status\": \"True\",\n                \"type\": \"Initialized\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-17T21:33:27Z\",\n                \"status\": \"True\",\n                \"type\": \"Ready\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-17T21:33:27Z\",\n                \"status\": \"True\",\n                \"type\": \"ContainersReady\"\n            },\n            {\n                \"lastProbeTime\": null,\n                \"lastTransitionTime\": \"2019-12-17T21:33:21Z\",\n                \"status\": \"True\",\n                \"type\": \"PodScheduled\"\n            }\n        ],\n        \"containerStatuses\": [\n            {\n                \"containerID\": \"docker://3f3d1e6c5059759e0d09e613ad02a449174a42947e1ac41e15251895e4470b01\",\n                
\"image\": \"httpd:2.4.38-alpine\",\n                \"imageID\": \"docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n                \"lastState\": {},\n                \"name\": \"e2e-test-httpd-pod\",\n                \"ready\": true,\n                \"restartCount\": 0,\n                \"started\": true,\n                \"state\": {\n                    \"running\": {\n                        \"startedAt\": \"2019-12-17T21:33:27Z\"\n                    }\n                }\n            }\n        ],\n        \"hostIP\": \"10.96.2.170\",\n        \"phase\": \"Running\",\n        \"podIP\": \"10.44.0.1\",\n        \"podIPs\": [\n            {\n                \"ip\": \"10.44.0.1\"\n            }\n        ],\n        \"qosClass\": \"BestEffort\",\n        \"startTime\": \"2019-12-17T21:33:21Z\"\n    }\n}\n"
STEP: replace the image in the pod
Dec 17 21:33:33.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-3010'
Dec 17 21:33:34.041: INFO: stderr: ""
Dec 17 21:33:34.041: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n"
STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29
[AfterEach] Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1709
Dec 17 21:33:34.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-3010'
Dec 17 21:33:40.368: INFO: stderr: ""
Dec 17 21:33:40.368: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:33:40.369: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3010" for this suite.
Dec 17 21:33:48.409: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:33:48.530: INFO: namespace kubectl-3010 deletion completed in 8.15085027s

• [SLOW TEST:27.721 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl replace
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1700
    should update a single-container pod's image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
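The flow in this test is: create the pod imperatively, dump it as JSON, rewrite the image field, and feed the result back through kubectl replace. Reconstructed as shell (the sed substitution stands in for the test's in-memory edit; flags match current kubectl, which creates a bare pod without --generator):

# 1. Create the pod with the original image.
kubectl run e2e-test-httpd-pod --image=docker.io/library/httpd:2.4.38-alpine \
  --labels=run=e2e-test-httpd-pod

# 2. Swap the image in the serialized object and replace it in place.
kubectl get pod e2e-test-httpd-pod -o json \
  | sed 's|docker.io/library/httpd:2.4.38-alpine|docker.io/library/busybox:1.29|' \
  | kubectl replace -f -

# 3. Verify the running pod now carries the new image.
kubectl get pod e2e-test-httpd-pod -o jsonpath='{.spec.containers[0].image}'
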
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:33:48.533: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Dec 17 21:33:49.205: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set
Dec 17 21:33:51.228: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712215229, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712215229, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712215229, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712215229, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-64d485d9bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 21:33:53.239: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712215229, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712215229, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712215229, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712215229, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-64d485d9bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 21:33:55.238: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712215229, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712215229, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712215229, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712215229, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-64d485d9bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 21:33:58.297: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 21:33:58.307: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: Create a v2 custom resource
STEP: List CRs in v1
STEP: List CRs in v2
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:33:59.368: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-2035" for this suite.
Dec 17 21:34:05.400: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:34:05.536: INFO: namespace crd-webhook-2035 deletion completed in 6.160650194s
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:17.021 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert a non homogeneous list of CRs [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
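Converting a "non homogeneous list" means one custom resource was stored at v1 and another at v2, so listing in either version forces the webhook to convert a mixed set in both directions. The moving part is the CRD's conversion stanza; below is a fragment sketch with placeholder group, names, path, and CA bundle (only the service name and namespace come from the log, and a v1.16-era cluster could equally use the older apiextensions.k8s.io/v1beta1 shape):

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: testcrds.example.com               # placeholder group/name
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: testcrds
    singular: testcrd
    kind: TestCrd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  conversion:
    strategy: Webhook                      # delegate v1<->v2 conversion
    webhook:
      conversionReviewVersions: ["v1"]
      clientConfig:
        service:
          namespace: crd-webhook-2035      # namespace from the log (ephemeral)
          name: e2e-test-crd-conversion-webhook
          path: /crdconvert                # placeholder path
        caBundle: <base64-encoded-CA>      # placeholder; must be the webhook's CA
EOF
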
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:34:05.557: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for all rs to be garbage collected
STEP: expected 0 rs, got 1 rs
STEP: expected 0 pods, got 2 pods
STEP: Gathering metrics
W1217 21:34:08.417151       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 17 21:34:08.417: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:34:08.417: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-8864" for this suite.
Dec 17 21:34:14.988: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:34:15.104: INFO: namespace gc-8864 deletion completed in 6.643423486s

• [SLOW TEST:9.547 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete RS created by deployment when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
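In the delete step above, the deployment is removed without orphaning, so the garbage collector follows ownerReferences and deletes the ReplicaSet and then the pods; the "expected 0 rs, got 1 rs" lines are just the test polling until collection catches up. The same behavior, driven by hand (the deployment name is illustrative):

# Create a deployment; it owns one ReplicaSet, which owns the pods.
kubectl create deployment gc-demo --image=docker.io/library/httpd:2.4.38-alpine
kubectl get rs -l app=gc-demo

# Deleting the deployment without orphaning lets the garbage collector
# remove the dependents too (this is the default cascade behavior).
kubectl delete deployment gc-demo
kubectl get rs,pods -l app=gc-demo        # empties once collection finishes

# To orphan instead: --cascade=orphan on current kubectl
# (--cascade=false on older releases such as the one in this log).
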
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:34:15.105: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ConfigMap
STEP: Ensuring resource quota status captures configMap creation
STEP: Deleting a ConfigMap
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:34:31.366: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-1760" for this suite.
Dec 17 21:34:37.406: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:34:37.567: INFO: namespace resourcequota-1760 deletion completed in 6.189579477s

• [SLOW TEST:22.463 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a configMap. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
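The quota lifecycle being asserted: status.used.configmaps rises when the ConfigMap is created and falls back once it is deleted and the quota controller resyncs. A hand-driven version with illustrative names:

# Quota that counts ConfigMaps in the current namespace.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-demo                  # illustrative name
spec:
  hard:
    configmaps: "2"
EOF

kubectl create configmap quota-probe --from-literal=key=value
kubectl get resourcequota quota-demo -o jsonpath='{.status.used.configmaps}'
# rises by one after the create ...

kubectl delete configmap quota-probe
# ... and drops back after the controller recalculates usage.
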
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:34:37.569: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 21:34:37.700: INFO: Creating daemon "daemon-set" with a node selector
STEP: Initially, daemon pods should not be running on any nodes.
Dec 17 21:34:37.717: INFO: Number of nodes with available pods: 0
Dec 17 21:34:37.717: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Change node label to blue, check that daemon pod is launched.
Dec 17 21:34:37.857: INFO: Number of nodes with available pods: 0
Dec 17 21:34:37.857: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:38.866: INFO: Number of nodes with available pods: 0
Dec 17 21:34:38.866: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:39.870: INFO: Number of nodes with available pods: 0
Dec 17 21:34:39.870: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:40.903: INFO: Number of nodes with available pods: 0
Dec 17 21:34:40.903: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:43.134: INFO: Number of nodes with available pods: 0
Dec 17 21:34:43.134: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:43.870: INFO: Number of nodes with available pods: 0
Dec 17 21:34:43.870: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:44.872: INFO: Number of nodes with available pods: 0
Dec 17 21:34:44.872: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:45.883: INFO: Number of nodes with available pods: 1
Dec 17 21:34:45.883: INFO: Number of running nodes: 1, number of available pods: 1
STEP: Update the node label to green, and wait for daemons to be unscheduled
Dec 17 21:34:45.975: INFO: Number of nodes with available pods: 1
Dec 17 21:34:45.975: INFO: Number of running nodes: 0, number of available pods: 1
Dec 17 21:34:46.987: INFO: Number of nodes with available pods: 0
Dec 17 21:34:46.987: INFO: Number of running nodes: 0, number of available pods: 0
STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate
Dec 17 21:34:47.021: INFO: Number of nodes with available pods: 0
Dec 17 21:34:47.021: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:48.033: INFO: Number of nodes with available pods: 0
Dec 17 21:34:48.033: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:49.028: INFO: Number of nodes with available pods: 0
Dec 17 21:34:49.028: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:50.032: INFO: Number of nodes with available pods: 0
Dec 17 21:34:50.033: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:51.028: INFO: Number of nodes with available pods: 0
Dec 17 21:34:51.028: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:52.029: INFO: Number of nodes with available pods: 0
Dec 17 21:34:52.029: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:53.032: INFO: Number of nodes with available pods: 0
Dec 17 21:34:53.032: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:54.030: INFO: Number of nodes with available pods: 0
Dec 17 21:34:54.030: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:55.029: INFO: Number of nodes with available pods: 0
Dec 17 21:34:55.029: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:56.031: INFO: Number of nodes with available pods: 0
Dec 17 21:34:56.031: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:57.030: INFO: Number of nodes with available pods: 0
Dec 17 21:34:57.030: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:58.031: INFO: Number of nodes with available pods: 0
Dec 17 21:34:58.031: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:34:59.029: INFO: Number of nodes with available pods: 0
Dec 17 21:34:59.029: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:35:00.028: INFO: Number of nodes with available pods: 0
Dec 17 21:35:00.029: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:35:01.032: INFO: Number of nodes with available pods: 0
Dec 17 21:35:01.033: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:35:02.034: INFO: Number of nodes with available pods: 0
Dec 17 21:35:02.035: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:35:03.035: INFO: Number of nodes with available pods: 0
Dec 17 21:35:03.036: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:35:04.030: INFO: Number of nodes with available pods: 0
Dec 17 21:35:04.030: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:35:05.036: INFO: Number of nodes with available pods: 0
Dec 17 21:35:05.036: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:35:06.031: INFO: Number of nodes with available pods: 1
Dec 17 21:35:06.031: INFO: Number of running nodes: 1, number of available pods: 1
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-4391, will wait for the garbage collector to delete the pods
Dec 17 21:35:06.111: INFO: Deleting DaemonSet.extensions daemon-set took: 17.541496ms
Dec 17 21:35:06.412: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.768984ms
Dec 17 21:35:13.723: INFO: Number of nodes with available pods: 0
Dec 17 21:35:13.723: INFO: Number of running nodes: 0, number of available pods: 0
Dec 17 21:35:13.730: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-4391/daemonsets","resourceVersion":"9140133"},"items":null}

Dec 17 21:35:13.733: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-4391/pods","resourceVersion":"9140133"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:35:13.880: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-4391" for this suite.
Dec 17 21:35:19.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:35:20.056: INFO: namespace daemonsets-4391 deletion completed in 6.167377362s

• [SLOW TEST:42.488 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop complex daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
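What makes this daemon "complex" is the nodeSelector: pods appear and disappear purely as node labels change (blue launches the pod, green evicts it until the selector is updated to match). A sketch of the same setup; the label key "color" is an assumption, since the log only mentions the values blue and green:

kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue                # assumed label key
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine
EOF

kubectl label node jerma-node color=blue               # daemon pod launches
kubectl label node jerma-node color=green --overwrite  # pod is unscheduled
# Patching spec.template.spec.nodeSelector to color=green brings it back.
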
[sig-storage] Downward API volume 
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:35:20.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 21:35:20.142: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f95d7d04-f55a-4b7d-98ef-4d62466f87ef" in namespace "downward-api-5429" to be "success or failure"
Dec 17 21:35:20.219: INFO: Pod "downwardapi-volume-f95d7d04-f55a-4b7d-98ef-4d62466f87ef": Phase="Pending", Reason="", readiness=false. Elapsed: 77.030581ms
Dec 17 21:35:22.233: INFO: Pod "downwardapi-volume-f95d7d04-f55a-4b7d-98ef-4d62466f87ef": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091578543s
Dec 17 21:35:24.241: INFO: Pod "downwardapi-volume-f95d7d04-f55a-4b7d-98ef-4d62466f87ef": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099455853s
Dec 17 21:35:26.247: INFO: Pod "downwardapi-volume-f95d7d04-f55a-4b7d-98ef-4d62466f87ef": Phase="Pending", Reason="", readiness=false. Elapsed: 6.104796618s
Dec 17 21:35:28.256: INFO: Pod "downwardapi-volume-f95d7d04-f55a-4b7d-98ef-4d62466f87ef": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.114080621s
STEP: Saw pod success
Dec 17 21:35:28.256: INFO: Pod "downwardapi-volume-f95d7d04-f55a-4b7d-98ef-4d62466f87ef" satisfied condition "success or failure"
Dec 17 21:35:28.263: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-f95d7d04-f55a-4b7d-98ef-4d62466f87ef container client-container: 
STEP: delete the pod
Dec 17 21:35:28.467: INFO: Waiting for pod downwardapi-volume-f95d7d04-f55a-4b7d-98ef-4d62466f87ef to disappear
Dec 17 21:35:28.487: INFO: Pod downwardapi-volume-f95d7d04-f55a-4b7d-98ef-4d62466f87ef no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:35:28.487: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5429" for this suite.
Dec 17 21:35:34.529: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:35:34.677: INFO: namespace downward-api-5429 deletion completed in 6.180301769s

• [SLOW TEST:14.620 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
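Same per-item mode assertion as the projected variant earlier, but on a plain downwardAPI volume. A sketch that checks the mode from inside the container (illustrative names; stat -L again because the mounted file is a symlink):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "stat -Lc '%a' /etc/podinfo/labels"]   # expect 400
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0644
      items:
      - path: labels
        fieldRef:
          fieldPath: metadata.labels
        mode: 0400                   # per-item override under test
EOF
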
SSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:35:34.678: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 21:35:35.180: INFO: Creating simple daemon set daemon-set
STEP: Check that daemon pods launch on every node of the cluster.
Dec 17 21:35:35.265: INFO: Number of nodes with available pods: 0
Dec 17 21:35:35.265: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:35:36.305: INFO: Number of nodes with available pods: 0
Dec 17 21:35:36.306: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:35:37.284: INFO: Number of nodes with available pods: 0
Dec 17 21:35:37.284: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:35:38.523: INFO: Number of nodes with available pods: 0
Dec 17 21:35:38.523: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:35:39.273: INFO: Number of nodes with available pods: 0
Dec 17 21:35:39.273: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:35:40.315: INFO: Number of nodes with available pods: 0
Dec 17 21:35:40.315: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:35:42.636: INFO: Number of nodes with available pods: 0
Dec 17 21:35:42.637: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:35:43.609: INFO: Number of nodes with available pods: 1
Dec 17 21:35:43.609: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 21:35:44.286: INFO: Number of nodes with available pods: 1
Dec 17 21:35:44.286: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 21:35:45.283: INFO: Number of nodes with available pods: 1
Dec 17 21:35:45.283: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 21:35:46.280: INFO: Number of nodes with available pods: 2
Dec 17 21:35:46.280: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Update daemon pods image.
STEP: Check that daemon pods images are updated.
Dec 17 21:35:46.361: INFO: Wrong image for pod: daemon-set-gnv96. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:46.361: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:47.438: INFO: Wrong image for pod: daemon-set-gnv96. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:47.438: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:48.432: INFO: Wrong image for pod: daemon-set-gnv96. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:48.432: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:49.437: INFO: Wrong image for pod: daemon-set-gnv96. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:49.437: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:50.436: INFO: Wrong image for pod: daemon-set-gnv96. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:50.436: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:51.434: INFO: Wrong image for pod: daemon-set-gnv96. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:51.434: INFO: Pod daemon-set-gnv96 is not available
Dec 17 21:35:51.434: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:52.443: INFO: Wrong image for pod: daemon-set-gnv96. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:52.443: INFO: Pod daemon-set-gnv96 is not available
Dec 17 21:35:52.443: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:53.437: INFO: Wrong image for pod: daemon-set-gnv96. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:53.437: INFO: Pod daemon-set-gnv96 is not available
Dec 17 21:35:53.437: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:54.439: INFO: Wrong image for pod: daemon-set-gnv96. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:54.439: INFO: Pod daemon-set-gnv96 is not available
Dec 17 21:35:54.439: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:55.454: INFO: Wrong image for pod: daemon-set-gnv96. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:55.455: INFO: Pod daemon-set-gnv96 is not available
Dec 17 21:35:55.455: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:56.432: INFO: Wrong image for pod: daemon-set-gnv96. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:56.433: INFO: Pod daemon-set-gnv96 is not available
Dec 17 21:35:56.433: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:57.439: INFO: Pod daemon-set-pxnjb is not available
Dec 17 21:35:57.439: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:58.440: INFO: Pod daemon-set-pxnjb is not available
Dec 17 21:35:58.440: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:35:59.437: INFO: Pod daemon-set-pxnjb is not available
Dec 17 21:35:59.437: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:36:01.271: INFO: Pod daemon-set-pxnjb is not available
Dec 17 21:36:01.271: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:36:02.820: INFO: Pod daemon-set-pxnjb is not available
Dec 17 21:36:02.820: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:36:04.099: INFO: Pod daemon-set-pxnjb is not available
Dec 17 21:36:04.099: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:36:05.438: INFO: Pod daemon-set-pxnjb is not available
Dec 17 21:36:05.438: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:36:06.591: INFO: Pod daemon-set-pxnjb is not available
Dec 17 21:36:06.591: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:36:07.434: INFO: Pod daemon-set-pxnjb is not available
Dec 17 21:36:07.434: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:36:08.441: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:36:09.437: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:36:10.435: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:36:11.432: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:36:11.432: INFO: Pod daemon-set-wsk8x is not available
Dec 17 21:36:12.436: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:36:12.436: INFO: Pod daemon-set-wsk8x is not available
Dec 17 21:36:13.437: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:36:13.437: INFO: Pod daemon-set-wsk8x is not available
Dec 17 21:36:14.437: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:36:14.437: INFO: Pod daemon-set-wsk8x is not available
Dec 17 21:36:15.436: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:36:15.436: INFO: Pod daemon-set-wsk8x is not available
Dec 17 21:36:16.432: INFO: Wrong image for pod: daemon-set-wsk8x. Expected: docker.io/library/redis:5.0.5-alpine, got: docker.io/library/httpd:2.4.38-alpine.
Dec 17 21:36:16.433: INFO: Pod daemon-set-wsk8x is not available
Dec 17 21:36:17.441: INFO: Pod daemon-set-9tfmb is not available
STEP: Check that daemon pods are still running on every node of the cluster.
Dec 17 21:36:17.455: INFO: Number of nodes with available pods: 1
Dec 17 21:36:17.455: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:36:18.475: INFO: Number of nodes with available pods: 1
Dec 17 21:36:18.475: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:36:19.469: INFO: Number of nodes with available pods: 1
Dec 17 21:36:19.469: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:36:20.531: INFO: Number of nodes with available pods: 1
Dec 17 21:36:20.531: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:36:21.470: INFO: Number of nodes with available pods: 1
Dec 17 21:36:21.470: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:36:22.486: INFO: Number of nodes with available pods: 1
Dec 17 21:36:22.486: INFO: Node jerma-node is running more than one daemon pod
Dec 17 21:36:23.472: INFO: Number of nodes with available pods: 2
Dec 17 21:36:23.473: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5461, will wait for the garbage collector to delete the pods
Dec 17 21:36:23.567: INFO: Deleting DaemonSet.extensions daemon-set took: 12.746181ms
Dec 17 21:36:23.869: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.777074ms
Dec 17 21:36:36.691: INFO: Number of nodes with available pods: 0
Dec 17 21:36:36.692: INFO: Number of running nodes: 0, number of available pods: 0
Dec 17 21:36:36.695: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5461/daemonsets","resourceVersion":"9140363"},"items":null}

Dec 17 21:36:36.698: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5461/pods","resourceVersion":"9140363"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:36:36.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-5461" for this suite.
Dec 17 21:36:42.737: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:36:42.845: INFO: namespace daemonsets-5461 deletion completed in 6.135276679s

• [SLOW TEST:68.167 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should update pod when spec was updated and update strategy is RollingUpdate [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
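The rollout above is a stock RollingUpdate: the DaemonSet template image changes from httpd:2.4.38-alpine to redis:5.0.5-alpine and the controller replaces one daemon pod per node, which is what the long "Wrong image for pod" poll is waiting out. A minimal way to drive the same update by hand, assuming a DaemonSet named daemon-set whose container is named app (the container name is an assumption, not shown in the log):

    # Swap the template image; the default updateStrategy for apps/v1 DaemonSets is RollingUpdate.
    $ kubectl -n daemonsets-5461 set image daemonset/daemon-set app=docker.io/library/redis:5.0.5-alpine
    # Block until every node runs the new image, mirroring the poll loop above.
    $ kubectl -n daemonsets-5461 rollout status daemonset/daemon-set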
SSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:36:42.846: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
Dec 17 21:36:49.171: INFO: 0 pods remaining
Dec 17 21:36:49.171: INFO: 0 pods have nil DeletionTimestamp
Dec 17 21:36:49.171: INFO: 
STEP: Gathering metrics
W1217 21:36:51.695670       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 17 21:36:51.696: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:36:51.696: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-481" for this suite.
Dec 17 21:36:59.967: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:37:00.138: INFO: namespace gc-481 deletion completed in 8.426543496s

• [SLOW TEST:17.292 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
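What "the deleteOptions says so" means here is propagationPolicy=Foreground: the rc gets a foregroundDeletion finalizer and is only removed once the garbage collector has deleted its pods. A sketch of the same delete issued directly against the API through kubectl proxy (the rc name is a placeholder):

    $ kubectl proxy --port=8001 &
    $ curl -s -X DELETE \
        -H 'Content-Type: application/json' \
        -d '{"kind":"DeleteOptions","apiVersion":"v1","propagationPolicy":"Foreground"}' \
        http://127.0.0.1:8001/api/v1/namespaces/gc-481/replicationcontrollers/my-rc

Newer kubectl releases expose the same behavior as kubectl delete rc my-rc --cascade=foreground.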
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl cluster-info 
  should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:37:00.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should check if Kubernetes master services is included in cluster-info  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: validating cluster-info
Dec 17 21:37:00.241: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info'
Dec 17 21:37:00.403: INFO: stderr: ""
Dec 17 21:37:00.403: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.24.4.186:6443\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.24.4.186:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:37:00.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-9476" for this suite.
Dec 17 21:37:06.430: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:37:06.604: INFO: namespace kubectl-9476 deletion completed in 6.194785459s

• [SLOW TEST:6.465 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl cluster-info
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:974
    should check if Kubernetes master services is included in cluster-info  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
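The captured stdout still contains the ANSI color escapes (\x1b[0;32m and friends) that kubectl cluster-info emits; the assertion only cares about the plain text "Kubernetes master is running at ...". When replaying the check by hand, the escapes can be stripped, e.g. with GNU sed:

    $ kubectl cluster-info | sed 's/\x1b\[[0-9;]*m//g'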
SSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:37:06.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should scale a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a replication controller
Dec 17 21:37:06.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6710'
Dec 17 21:37:07.205: INFO: stderr: ""
Dec 17 21:37:07.205: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 17 21:37:07.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6710'
Dec 17 21:37:07.384: INFO: stderr: ""
Dec 17 21:37:07.384: INFO: stdout: "update-demo-nautilus-kb54f update-demo-nautilus-q77w5 "
Dec 17 21:37:07.385: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kb54f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6710'
Dec 17 21:37:07.520: INFO: stderr: ""
Dec 17 21:37:07.520: INFO: stdout: ""
Dec 17 21:37:07.520: INFO: update-demo-nautilus-kb54f is created but not running
Dec 17 21:37:12.522: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6710'
Dec 17 21:37:12.651: INFO: stderr: ""
Dec 17 21:37:12.651: INFO: stdout: "update-demo-nautilus-kb54f update-demo-nautilus-q77w5 "
Dec 17 21:37:12.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kb54f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6710'
Dec 17 21:37:12.745: INFO: stderr: ""
Dec 17 21:37:12.745: INFO: stdout: ""
Dec 17 21:37:12.745: INFO: update-demo-nautilus-kb54f is created but not running
Dec 17 21:37:17.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6710'
Dec 17 21:37:18.028: INFO: stderr: ""
Dec 17 21:37:18.028: INFO: stdout: "update-demo-nautilus-kb54f update-demo-nautilus-q77w5 "
Dec 17 21:37:18.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kb54f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6710'
Dec 17 21:37:18.202: INFO: stderr: ""
Dec 17 21:37:18.202: INFO: stdout: ""
Dec 17 21:37:18.202: INFO: update-demo-nautilus-kb54f is created but not running
Dec 17 21:37:23.203: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6710'
Dec 17 21:37:23.383: INFO: stderr: ""
Dec 17 21:37:23.383: INFO: stdout: "update-demo-nautilus-kb54f update-demo-nautilus-q77w5 "
Dec 17 21:37:23.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kb54f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6710'
Dec 17 21:37:23.523: INFO: stderr: ""
Dec 17 21:37:23.523: INFO: stdout: "true"
Dec 17 21:37:23.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kb54f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6710'
Dec 17 21:37:23.645: INFO: stderr: ""
Dec 17 21:37:23.646: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 17 21:37:23.646: INFO: validating pod update-demo-nautilus-kb54f
Dec 17 21:37:23.695: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 17 21:37:23.695: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 17 21:37:23.695: INFO: update-demo-nautilus-kb54f is verified up and running
Dec 17 21:37:23.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q77w5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6710'
Dec 17 21:37:23.810: INFO: stderr: ""
Dec 17 21:37:23.810: INFO: stdout: "true"
Dec 17 21:37:23.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q77w5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6710'
Dec 17 21:37:23.958: INFO: stderr: ""
Dec 17 21:37:23.958: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 17 21:37:23.958: INFO: validating pod update-demo-nautilus-q77w5
Dec 17 21:37:23.977: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 17 21:37:23.977: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 17 21:37:23.977: INFO: update-demo-nautilus-q77w5 is verified up and running
STEP: scaling down the replication controller
Dec 17 21:37:23.981: INFO: scanned /root for discovery docs: 
Dec 17 21:37:23.981: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-6710'
Dec 17 21:37:25.252: INFO: stderr: ""
Dec 17 21:37:25.253: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 17 21:37:25.254: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6710'
Dec 17 21:37:25.461: INFO: stderr: ""
Dec 17 21:37:25.461: INFO: stdout: "update-demo-nautilus-kb54f update-demo-nautilus-q77w5 "
STEP: Replicas for name=update-demo: expected=1 actual=2
Dec 17 21:37:30.462: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6710'
Dec 17 21:37:30.622: INFO: stderr: ""
Dec 17 21:37:30.622: INFO: stdout: "update-demo-nautilus-q77w5 "
Dec 17 21:37:30.622: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q77w5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6710'
Dec 17 21:37:30.717: INFO: stderr: ""
Dec 17 21:37:30.718: INFO: stdout: "true"
Dec 17 21:37:30.718: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q77w5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6710'
Dec 17 21:37:30.843: INFO: stderr: ""
Dec 17 21:37:30.844: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 17 21:37:30.844: INFO: validating pod update-demo-nautilus-q77w5
Dec 17 21:37:30.854: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 17 21:37:30.854: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 17 21:37:30.855: INFO: update-demo-nautilus-q77w5 is verified up and running
STEP: scaling up the replication controller
Dec 17 21:37:30.860: INFO: scanned /root for discovery docs: 
Dec 17 21:37:30.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-6710'
Dec 17 21:37:32.428: INFO: stderr: ""
Dec 17 21:37:32.428: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 17 21:37:32.429: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6710'
Dec 17 21:37:32.590: INFO: stderr: ""
Dec 17 21:37:32.590: INFO: stdout: "update-demo-nautilus-lc6f9 update-demo-nautilus-q77w5 "
Dec 17 21:37:32.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lc6f9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6710'
Dec 17 21:37:32.753: INFO: stderr: ""
Dec 17 21:37:32.753: INFO: stdout: ""
Dec 17 21:37:32.753: INFO: update-demo-nautilus-lc6f9 is created but not running
Dec 17 21:37:37.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6710'
Dec 17 21:37:37.991: INFO: stderr: ""
Dec 17 21:37:37.991: INFO: stdout: "update-demo-nautilus-lc6f9 update-demo-nautilus-q77w5 "
Dec 17 21:37:37.991: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lc6f9 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6710'
Dec 17 21:37:38.122: INFO: stderr: ""
Dec 17 21:37:38.123: INFO: stdout: "true"
Dec 17 21:37:38.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lc6f9 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6710'
Dec 17 21:37:38.201: INFO: stderr: ""
Dec 17 21:37:38.202: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 17 21:37:38.202: INFO: validating pod update-demo-nautilus-lc6f9
Dec 17 21:37:38.206: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 17 21:37:38.206: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 17 21:37:38.206: INFO: update-demo-nautilus-lc6f9 is verified up and running
Dec 17 21:37:38.206: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q77w5 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6710'
Dec 17 21:37:38.290: INFO: stderr: ""
Dec 17 21:37:38.290: INFO: stdout: "true"
Dec 17 21:37:38.290: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-q77w5 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6710'
Dec 17 21:37:38.427: INFO: stderr: ""
Dec 17 21:37:38.427: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 17 21:37:38.427: INFO: validating pod update-demo-nautilus-q77w5
Dec 17 21:37:38.431: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 17 21:37:38.431: INFO: Unmarshalled json jpg/img => {nautilus.jpg}, expecting nautilus.jpg.
Dec 17 21:37:38.431: INFO: update-demo-nautilus-q77w5 is verified up and running
STEP: using delete to clean up resources
Dec 17 21:37:38.431: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-6710'
Dec 17 21:37:38.548: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 21:37:38.549: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 17 21:37:38.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-6710'
Dec 17 21:37:38.661: INFO: stderr: "No resources found in kubectl-6710 namespace.\n"
Dec 17 21:37:38.661: INFO: stdout: ""
Dec 17 21:37:38.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-6710 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 17 21:37:38.925: INFO: stderr: ""
Dec 17 21:37:38.926: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:37:38.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6710" for this suite.
Dec 17 21:37:50.960: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:37:51.110: INFO: namespace kubectl-6710 deletion completed in 12.174685478s

• [SLOW TEST:44.505 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:275
    should scale a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
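The scale-down/scale-up above is plain kubectl scale against the replication controller, with the test polling pod names through a go-template in between. The hand-run equivalent, using the same template the test logs:

    $ kubectl -n kubectl-6710 scale rc update-demo-nautilus --replicas=1 --timeout=5m
    # List the surviving pods the same way the test does.
    $ kubectl -n kubectl-6710 get pods -l name=update-demo \
        -o template --template='{{range .items}}{{.metadata.name}} {{end}}'
    $ kubectl -n kubectl-6710 scale rc update-demo-nautilus --replicas=2 --timeout=5m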
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:37:51.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 21:37:51.482: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a6e9dc88-25c3-4414-9f5f-b3bcfa4b6d34" in namespace "projected-5210" to be "success or failure"
Dec 17 21:37:51.489: INFO: Pod "downwardapi-volume-a6e9dc88-25c3-4414-9f5f-b3bcfa4b6d34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.933722ms
Dec 17 21:37:53.506: INFO: Pod "downwardapi-volume-a6e9dc88-25c3-4414-9f5f-b3bcfa4b6d34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023246437s
Dec 17 21:37:55.558: INFO: Pod "downwardapi-volume-a6e9dc88-25c3-4414-9f5f-b3bcfa4b6d34": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075117635s
Dec 17 21:37:57.569: INFO: Pod "downwardapi-volume-a6e9dc88-25c3-4414-9f5f-b3bcfa4b6d34": Phase="Pending", Reason="", readiness=false. Elapsed: 6.086300686s
Dec 17 21:37:59.576: INFO: Pod "downwardapi-volume-a6e9dc88-25c3-4414-9f5f-b3bcfa4b6d34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093471228s
STEP: Saw pod success
Dec 17 21:37:59.576: INFO: Pod "downwardapi-volume-a6e9dc88-25c3-4414-9f5f-b3bcfa4b6d34" satisfied condition "success or failure"
Dec 17 21:37:59.580: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a6e9dc88-25c3-4414-9f5f-b3bcfa4b6d34 container client-container: 
STEP: delete the pod
Dec 17 21:37:59.630: INFO: Waiting for pod downwardapi-volume-a6e9dc88-25c3-4414-9f5f-b3bcfa4b6d34 to disappear
Dec 17 21:37:59.707: INFO: Pod downwardapi-volume-a6e9dc88-25c3-4414-9f5f-b3bcfa4b6d34 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:37:59.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5210" for this suite.
Dec 17 21:38:05.734: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:38:05.881: INFO: namespace projected-5210 deletion completed in 6.168257753s

• [SLOW TEST:14.770 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
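The pod under test mounts a projected downwardAPI volume that exposes limits.cpu; because the container declares no CPU limit, the kubelet falls back to node allocatable, which is what the test asserts. A minimal manifest sketch (pod and file names are illustrative; a resourceFieldRef inside a volume requires containerName):

    $ kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downwardapi-volume-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: cpu_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.cpu
    EOF

With no limit set, the file contains the node's allocatable CPU, rounded up to whole cores by the default divisor of 1.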
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:38:05.881: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: modifying the configmap a second time
STEP: deleting the configmap
STEP: creating a watch on configmaps from the resource version returned by the first update
STEP: Expecting to observe notifications for all changes to the configmap after the first update
Dec 17 21:38:06.086: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4276 /api/v1/namespaces/watch-4276/configmaps/e2e-watch-test-resource-version b0a3a366-3500-456b-b7c1-02278462bd9b 9140746 0 2019-12-17 21:38:05 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 17 21:38:06.086: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version  watch-4276 /api/v1/namespaces/watch-4276/configmaps/e2e-watch-test-resource-version b0a3a366-3500-456b-b7c1-02278462bd9b 9140747 0 2019-12-17 21:38:05 +0000 UTC   map[watch-this-configmap:from-resource-version] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:38:06.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-4276" for this suite.
Dec 17 21:38:12.110: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:38:12.233: INFO: namespace watch-4276 deletion completed in 6.142001464s

• [SLOW TEST:6.352 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to start watching from a specific resource version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
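Watching "from a specific resource version" is an API-level feature: passing resourceVersion on the watch request makes the server replay every event newer than that version, which is why the test receives exactly the second MODIFIED event (mutation 2) and the DELETED event but not the first update. The equivalent raw request through kubectl proxy (the resourceVersion placeholder stands for the value returned by the first update):

    $ kubectl proxy --port=8001 &
    $ curl -sN 'http://127.0.0.1:8001/api/v1/namespaces/watch-4276/configmaps?watch=1&resourceVersion=<rv-after-first-update>'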
SSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:38:12.234: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a service nodeport-service with the type=NodePort in namespace services-4101
STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service
STEP: creating service externalsvc in namespace services-4101
STEP: creating replication controller externalsvc in namespace services-4101
I1217 21:38:12.438520       8 runners.go:184] Created replication controller with name: externalsvc, namespace: services-4101, replica count: 2
I1217 21:38:15.491698       8 runners.go:184] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 21:38:18.492966       8 runners.go:184] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 21:38:21.493954       8 runners.go:184] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 21:38:24.495199       8 runners.go:184] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the NodePort service to type=ExternalName
Dec 17 21:38:24.670: INFO: Creating new exec pod
Dec 17 21:38:32.761: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4101 execpodxshsm -- /bin/sh -x -c nslookup nodeport-service'
Dec 17 21:38:33.286: INFO: stderr: "+ nslookup nodeport-service\n"
Dec 17 21:38:33.286: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-4101.svc.cluster.local\tcanonical name = externalsvc.services-4101.svc.cluster.local.\nName:\texternalsvc.services-4101.svc.cluster.local\nAddress: 10.108.241.114\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-4101, will wait for the garbage collector to delete the pods
Dec 17 21:38:33.375: INFO: Deleting ReplicationController externalsvc took: 10.765702ms
Dec 17 21:38:33.675: INFO: Terminating ReplicationController externalsvc pods took: 300.554283ms
Dec 17 21:38:46.969: INFO: Cleaning up the NodePort to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:38:47.013: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-4101" for this suite.
Dec 17 21:38:53.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:38:53.139: INFO: namespace services-4101 deletion completed in 6.117213997s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95

• [SLOW TEST:40.905 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
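The type flip is a single service update: spec.type becomes ExternalName, spec.externalName points at the other service's FQDN, and the allocated clusterIP and node ports are dropped, after which in-cluster DNS answers with a CNAME, exactly what the nslookup output shows. A hand-rolled sketch with a strategic-merge patch (API validation requires clearing clusterIP along with the type change; ports are dropped here as well):

    $ kubectl -n services-4101 patch svc nodeport-service -p '{
        "spec": {
          "type": "ExternalName",
          "externalName": "externalsvc.services-4101.svc.cluster.local",
          "clusterIP": "",
          "ports": null
        }
      }'
    # Verify the CNAME from inside any pod, as the test does.
    $ kubectl -n services-4101 exec execpodxshsm -- nslookup nodeport-service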
SSSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:38:53.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Given a Pod with a 'name' label pod-adoption-release is created
STEP: When a replicaset with a matching selector is created
STEP: Then the orphan pod is adopted
STEP: When the matched label of one of its pods change
Dec 17 21:39:06.541: INFO: Pod name pod-adoption-release: Found 1 pods out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:39:07.620: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-7667" for this suite.
Dec 17 21:39:33.661: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:39:33.911: INFO: namespace replicaset-7667 deletion completed in 26.274538677s

• [SLOW TEST:40.772 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation and release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
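Adoption and release are driven entirely by label selectors and ownerReferences: a bare pod whose labels match the ReplicaSet selector gets an ownerReference pointing at the ReplicaSet (adopted); relabeling the pod out of the selector removes that reference (released) and the ReplicaSet creates a replacement. The observable side, using the names from the log (the replacement label value is illustrative):

    # After adoption, the formerly orphan pod carries an ownerReference naming the RS.
    $ kubectl -n replicaset-7667 get pod pod-adoption-release \
        -o jsonpath='{.metadata.ownerReferences[0].name}'
    # Relabeling it out of the selector releases it again.
    $ kubectl -n replicaset-7667 label pod pod-adoption-release name=released --overwrite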
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-node] Downward API 
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:39:33.912: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward api env vars
Dec 17 21:39:34.070: INFO: Waiting up to 5m0s for pod "downward-api-8aa33daa-14fb-4265-88a1-5641d0dafc1c" in namespace "downward-api-8512" to be "success or failure"
Dec 17 21:39:34.126: INFO: Pod "downward-api-8aa33daa-14fb-4265-88a1-5641d0dafc1c": Phase="Pending", Reason="", readiness=false. Elapsed: 55.549555ms
Dec 17 21:39:36.141: INFO: Pod "downward-api-8aa33daa-14fb-4265-88a1-5641d0dafc1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070712769s
Dec 17 21:39:38.150: INFO: Pod "downward-api-8aa33daa-14fb-4265-88a1-5641d0dafc1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07941217s
Dec 17 21:39:40.158: INFO: Pod "downward-api-8aa33daa-14fb-4265-88a1-5641d0dafc1c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.08734479s
Dec 17 21:39:42.169: INFO: Pod "downward-api-8aa33daa-14fb-4265-88a1-5641d0dafc1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.098184107s
STEP: Saw pod success
Dec 17 21:39:42.169: INFO: Pod "downward-api-8aa33daa-14fb-4265-88a1-5641d0dafc1c" satisfied condition "success or failure"
Dec 17 21:39:42.173: INFO: Trying to get logs from node jerma-node pod downward-api-8aa33daa-14fb-4265-88a1-5641d0dafc1c container dapi-container: 
STEP: delete the pod
Dec 17 21:39:42.407: INFO: Waiting for pod downward-api-8aa33daa-14fb-4265-88a1-5641d0dafc1c to disappear
Dec 17 21:39:42.420: INFO: Pod downward-api-8aa33daa-14fb-4265-88a1-5641d0dafc1c no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:39:42.420: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8512" for this suite.
Dec 17 21:39:48.456: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:39:48.606: INFO: namespace downward-api-8512 deletion completed in 6.174966822s

• [SLOW TEST:14.694 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
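Same defaulting rule as the projected-volume test above, surfaced through environment variables instead: a resourceFieldRef on limits.cpu or limits.memory resolves to node allocatable when the container declares no limits. A minimal manifest sketch (names are illustrative):

    $ kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-api-demo
    spec:
      restartPolicy: Never
      containers:
      - name: dapi-container
        image: busybox
        command: ["sh", "-c", "env | grep -E '^(CPU|MEMORY)_LIMIT='"]
        env:
        - name: CPU_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.cpu
        - name: MEMORY_LIMIT
          valueFrom:
            resourceFieldRef:
              resource: limits.memory
    EOF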
SSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:39:48.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:39:55.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-6558" for this suite.
Dec 17 21:40:01.780: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:40:02.022: INFO: namespace resourcequota-6558 deletion completed in 6.279299025s

• [SLOW TEST:13.416 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
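"Promptly calculated" means the quota controller fills in status.hard (mirroring spec.hard) and a zeroed status.used shortly after the object is created, before anything consumes the quota. Hand check (quota name and limits are placeholders):

    $ kubectl -n resourcequota-6558 create quota test-quota --hard=pods=2,services=1
    # Within a few seconds, status.hard matches spec.hard and status.used shows zeroes.
    $ kubectl -n resourcequota-6558 get resourcequota test-quota -o jsonpath='{.status}'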
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:40:02.024: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77
STEP: Creating service test in namespace statefulset-4032
[It] should perform rolling updates and roll backs of template modifications [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a new StatefulSet
Dec 17 21:40:02.263: INFO: Found 0 stateful pods, waiting for 3
Dec 17 21:40:12.276: INFO: Found 2 stateful pods, waiting for 3
Dec 17 21:40:22.280: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 21:40:22.280: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 21:40:22.280: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 17 21:40:32.271: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 21:40:32.271: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 21:40:32.271: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 21:40:32.295: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4032 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 17 21:40:32.813: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 17 21:40:32.813: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 17 21:40:32.813: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine
Dec 17 21:40:42.918: INFO: Updating stateful set ss2
STEP: Creating a new revision
STEP: Updating Pods in reverse ordinal order
Dec 17 21:40:52.971: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4032 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 17 21:40:53.250: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Dec 17 21:40:53.250: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 17 21:40:53.251: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 17 21:41:03.317: INFO: Waiting for StatefulSet statefulset-4032/ss2 to complete update
Dec 17 21:41:03.317: INFO: Waiting for Pod statefulset-4032/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 17 21:41:03.317: INFO: Waiting for Pod statefulset-4032/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 17 21:41:03.317: INFO: Waiting for Pod statefulset-4032/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 17 21:41:13.336: INFO: Waiting for StatefulSet statefulset-4032/ss2 to complete update
Dec 17 21:41:13.336: INFO: Waiting for Pod statefulset-4032/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 17 21:41:13.336: INFO: Waiting for Pod statefulset-4032/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 17 21:41:23.339: INFO: Waiting for StatefulSet statefulset-4032/ss2 to complete update
Dec 17 21:41:23.340: INFO: Waiting for Pod statefulset-4032/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 17 21:41:23.340: INFO: Waiting for Pod statefulset-4032/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 17 21:41:33.334: INFO: Waiting for StatefulSet statefulset-4032/ss2 to complete update
Dec 17 21:41:33.334: INFO: Waiting for Pod statefulset-4032/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94
Dec 17 21:41:43.333: INFO: Waiting for StatefulSet statefulset-4032/ss2 to complete update
STEP: Rolling back to a previous revision
Dec 17 21:41:53.343: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4032 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 17 21:41:53.872: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 17 21:41:53.873: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 17 21:41:53.873: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 17 21:42:05.463: INFO: Updating stateful set ss2
STEP: Rolling back update in reverse ordinal order
Dec 17 21:42:15.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-4032 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 17 21:42:15.948: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Dec 17 21:42:15.949: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 17 21:42:15.949: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 17 21:42:25.981: INFO: Waiting for StatefulSet statefulset-4032/ss2 to complete update
Dec 17 21:42:25.981: INFO: Waiting for Pod statefulset-4032/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Dec 17 21:42:25.981: INFO: Waiting for Pod statefulset-4032/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Dec 17 21:42:25.981: INFO: Waiting for Pod statefulset-4032/ss2-2 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Dec 17 21:42:36.531: INFO: Waiting for StatefulSet statefulset-4032/ss2 to complete update
Dec 17 21:42:36.532: INFO: Waiting for Pod statefulset-4032/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Dec 17 21:42:36.532: INFO: Waiting for Pod statefulset-4032/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Dec 17 21:42:45.992: INFO: Waiting for StatefulSet statefulset-4032/ss2 to complete update
Dec 17 21:42:45.993: INFO: Waiting for Pod statefulset-4032/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Dec 17 21:42:45.993: INFO: Waiting for Pod statefulset-4032/ss2-1 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Dec 17 21:42:56.001: INFO: Waiting for StatefulSet statefulset-4032/ss2 to complete update
Dec 17 21:42:56.002: INFO: Waiting for Pod statefulset-4032/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Dec 17 21:43:05.996: INFO: Waiting for StatefulSet statefulset-4032/ss2 to complete update
Dec 17 21:43:05.996: INFO: Waiting for Pod statefulset-4032/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57
Dec 17 21:43:16.009: INFO: Waiting for StatefulSet statefulset-4032/ss2 to complete update
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
Dec 17 21:43:26.023: INFO: Deleting all statefulset in ns statefulset-4032
Dec 17 21:43:26.028: INFO: Scaling statefulset ss2 to 0
Dec 17 21:44:06.055: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 21:44:06.066: INFO: Deleting statefulset ss2
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:44:06.092: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-4032" for this suite.
Dec 17 21:44:14.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:44:14.274: INFO: namespace statefulset-4032 deletion completed in 8.173124376s

• [SLOW TEST:252.250 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
    should perform rolling updates and roll backs of template modifications [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
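Both directions of this test are controllerRevision-driven: the template change produces revision ss2-84f9d6bf57 and pods are replaced in reverse ordinal order; restoring the old template rolls everything back to ss2-65c7964b94 the same way. A sketch of both operations with kubectl (the container name webserver is an assumption, not shown in the log):

    $ kubectl -n statefulset-4032 set image statefulset/ss2 webserver=docker.io/library/httpd:2.4.39-alpine
    $ kubectl -n statefulset-4032 rollout status statefulset/ss2
    # Roll back to the previous controllerRevision.
    $ kubectl -n statefulset-4032 rollout undo statefulset/ss2
    $ kubectl -n statefulset-4032 rollout history statefulset/ss2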
S
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:44:14.274: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating 50 configmaps
STEP: Creating RC which spawns configmap-volume pods
Dec 17 21:44:15.002: INFO: Pod name wrapped-volume-race-0d7d39ce-1cf7-439c-b610-653402220679: Found 0 pods out of 5
Dec 17 21:44:20.054: INFO: Pod name wrapped-volume-race-0d7d39ce-1cf7-439c-b610-653402220679: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-0d7d39ce-1cf7-439c-b610-653402220679 in namespace emptydir-wrapper-4479, will wait for the garbage collector to delete the pods
Dec 17 21:44:54.188: INFO: Deleting ReplicationController wrapped-volume-race-0d7d39ce-1cf7-439c-b610-653402220679 took: 17.099004ms
Dec 17 21:44:54.489: INFO: Terminating ReplicationController wrapped-volume-race-0d7d39ce-1cf7-439c-b610-653402220679 pods took: 300.648154ms
STEP: Creating RC which spawns configmap-volume pods
Dec 17 21:45:47.373: INFO: Pod name wrapped-volume-race-8cface5d-2619-4e3e-a234-8df98b47757f: Found 0 pods out of 5
Dec 17 21:45:52.400: INFO: Pod name wrapped-volume-race-8cface5d-2619-4e3e-a234-8df98b47757f: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-8cface5d-2619-4e3e-a234-8df98b47757f in namespace emptydir-wrapper-4479, will wait for the garbage collector to delete the pods
Dec 17 21:46:22.539: INFO: Deleting ReplicationController wrapped-volume-race-8cface5d-2619-4e3e-a234-8df98b47757f took: 32.665682ms
Dec 17 21:46:22.840: INFO: Terminating ReplicationController wrapped-volume-race-8cface5d-2619-4e3e-a234-8df98b47757f pods took: 301.565187ms
STEP: Creating RC which spawns configmap-volume pods
Dec 17 21:47:06.983: INFO: Pod name wrapped-volume-race-bf2c1805-dfd3-4160-b31a-a0a31dc4433a: Found 0 pods out of 5
Dec 17 21:47:12.002: INFO: Pod name wrapped-volume-race-bf2c1805-dfd3-4160-b31a-a0a31dc4433a: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-bf2c1805-dfd3-4160-b31a-a0a31dc4433a in namespace emptydir-wrapper-4479, will wait for the garbage collector to delete the pods
Dec 17 21:47:44.154: INFO: Deleting ReplicationController wrapped-volume-race-bf2c1805-dfd3-4160-b31a-a0a31dc4433a took: 44.718961ms
Dec 17 21:47:44.555: INFO: Terminating ReplicationController wrapped-volume-race-bf2c1805-dfd3-4160-b31a-a0a31dc4433a pods took: 400.862836ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:48:28.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-4479" for this suite.
Dec 17 21:48:40.255: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:48:40.408: INFO: namespace emptydir-wrapper-4479 deletion completed in 12.173427283s

• [SLOW TEST:266.134 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not cause race condition when used for configmaps [Serial] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
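This test exists because pods mounting many configmap volumes (each wrapped in an emptyDir) historically raced on volume setup and teardown; it stresses that path with three rounds of a 5-replica RC whose pods mount all 50 configmaps. The configmap fan-out itself is just a loop (names illustrative):

    # 50 small configmaps, each becoming one volume in every pod of the RC.
    $ for i in $(seq 0 49); do
        kubectl -n emptydir-wrapper-4479 create configmap wrapped-cm-$i --from-literal=data=$i
      done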
S
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:48:40.409: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the deployment
STEP: Wait for the Deployment to create new ReplicaSet
STEP: delete the deployment
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs
STEP: Gathering metrics
W1217 21:49:10.614066       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 17 21:49:10.614: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:49:10.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-7387" for this suite.
Dec 17 21:49:19.260: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:49:20.696: INFO: namespace gc-7387 deletion completed in 10.073186239s

• [SLOW TEST:40.288 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
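The behavior asserted above is controlled entirely by deleteOptions.propagationPolicy: with Orphan, the garbage collector strips owner references instead of deleting dependents, so the ReplicaSet survives its Deployment. A minimal client-go sketch of the same delete call (names are illustrative):

package sketch

import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// deleteDeploymentOrphaning deletes a Deployment with PropagationPolicy=Orphan,
// so the garbage collector must NOT delete the owned ReplicaSet -- which is
// exactly what the test waits 30 seconds to verify.
func deleteDeploymentOrphaning(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
    orphan := metav1.DeletePropagationOrphan
    return cs.AppsV1().Deployments(ns).Delete(ctx, name, metav1.DeleteOptions{
        PropagationPolicy: &orphan,
    })
}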
SSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:49:20.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
Dec 17 21:49:20.924: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 17 21:49:20.942: INFO: Waiting for terminating namespaces to be deleted...
Dec 17 21:49:20.944: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Dec 17 21:49:20.966: INFO: weave-net-srfjj from kube-system started at 2019-12-17 21:23:16 +0000 UTC (2 container statuses recorded)
Dec 17 21:49:20.966: INFO: 	Container weave ready: true, restart count 0
Dec 17 21:49:20.966: INFO: 	Container weave-npc ready: true, restart count 0
Dec 17 21:49:20.966: INFO: kube-proxy-jcjl4 from kube-system started at 2019-10-12 13:47:49 +0000 UTC (1 container status recorded)
Dec 17 21:49:20.967: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 17 21:49:20.967: INFO: 
Logging pods the kubelet thinks are on node jerma-server-4b75xjbddvit before test
Dec 17 21:49:21.006: INFO: coredns-5644d7b6d9-9sj58 from kube-system started at 2019-12-14 15:12:12 +0000 UTC (1 container status recorded)
Dec 17 21:49:21.006: INFO: 	Container coredns ready: true, restart count 0
Dec 17 21:49:21.006: INFO: kube-scheduler-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:42 +0000 UTC (1 container status recorded)
Dec 17 21:49:21.007: INFO: 	Container kube-scheduler ready: true, restart count 11
Dec 17 21:49:21.007: INFO: kube-proxy-bdcvr from kube-system started at 2019-12-13 09:08:20 +0000 UTC (1 container status recorded)
Dec 17 21:49:21.007: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 17 21:49:21.007: INFO: coredns-5644d7b6d9-xvlxj from kube-system started at 2019-12-14 16:49:52 +0000 UTC (1 container status recorded)
Dec 17 21:49:21.007: INFO: 	Container coredns ready: true, restart count 0
Dec 17 21:49:21.007: INFO: etcd-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:37 +0000 UTC (1 container status recorded)
Dec 17 21:49:21.007: INFO: 	Container etcd ready: true, restart count 1
Dec 17 21:49:21.007: INFO: kube-controller-manager-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:40 +0000 UTC (1 container status recorded)
Dec 17 21:49:21.007: INFO: 	Container kube-controller-manager ready: true, restart count 8
Dec 17 21:49:21.007: INFO: kube-apiserver-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:38 +0000 UTC (1 container status recorded)
Dec 17 21:49:21.007: INFO: 	Container kube-apiserver ready: true, restart count 1
Dec 17 21:49:21.007: INFO: coredns-5644d7b6d9-n9kkw from kube-system started at 2019-11-10 16:39:08 +0000 UTC (0 container statuses recorded)
Dec 17 21:49:21.007: INFO: coredns-5644d7b6d9-rqwzj from kube-system started at 2019-11-10 18:03:38 +0000 UTC (0 container statuses recorded)
Dec 17 21:49:21.007: INFO: weave-net-gsjjk from kube-system started at 2019-12-13 09:16:56 +0000 UTC (2 container statuses recorded)
Dec 17 21:49:21.007: INFO: 	Container weave ready: true, restart count 0
Dec 17 21:49:21.007: INFO: 	Container weave-npc ready: true, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-0079bd0a-a28b-44e1-a0e8-6e2c9f310eb8 with value 90
STEP: Trying to create a pod (pod1) with hostPort 54321 and hostIP 127.0.0.1, expecting it to be scheduled
STEP: Trying to create another pod (pod2) with hostPort 54321 but hostIP 127.0.0.2, on the node where pod1 resides, expecting it to be scheduled
STEP: Trying to create a third pod (pod3) with hostPort 54321 and hostIP 127.0.0.2 but the UDP protocol, on the node where pod2 resides
STEP: removing the label kubernetes.io/e2e-0079bd0a-a28b-44e1-a0e8-6e2c9f310eb8 off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-0079bd0a-a28b-44e1-a0e8-6e2c9f310eb8
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:49:55.473: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-9507" for this suite.
Dec 17 21:50:25.504: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:50:25.633: INFO: namespace sched-pred-9507 deletion completed in 30.154186243s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78

• [SLOW TEST:64.936 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
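The scheduling rule exercised above: hostPort usage conflicts only when hostPort, hostIP and protocol all coincide, so pod2 (different hostIP) and pod3 (different protocol) still fit on the same node as pod1. A sketch of the relevant part of such a pod spec, with placeholder names and image rather than the test's own:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hostPortPod returns a pod that binds hostPort 54321 on the given hostIP and
// protocol. Two such pods conflict only if hostPort, hostIP AND protocol all match.
func hostPortPod(name, hostIP string, proto corev1.Protocol) *corev1.Pod {
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: name},
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "c",
                Image: "docker.io/library/httpd:2.4.38-alpine", // placeholder image
                Ports: []corev1.ContainerPort{{
                    ContainerPort: 8080,
                    HostPort:      54321,
                    HostIP:        hostIP,
                    Protocol:      proto,
                }},
            }},
        },
    }
}

// The three pods from the test would then be, roughly:
//   hostPortPod("pod1", "127.0.0.1", corev1.ProtocolTCP)
//   hostPortPod("pod2", "127.0.0.2", corev1.ProtocolTCP)
//   hostPortPod("pod3", "127.0.0.2", corev1.ProtocolUDP)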
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:50:25.635: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-test-volume-map-7f6d5c36-0f39-4d1e-8797-03d487b4c352
STEP: Creating a pod to test consuming ConfigMaps
Dec 17 21:50:25.849: INFO: Waiting up to 5m0s for pod "pod-configmaps-ffacff51-19cd-488e-bbcd-9a9b3b36c95b" in namespace "configmap-9707" to be "success or failure"
Dec 17 21:50:25.858: INFO: Pod "pod-configmaps-ffacff51-19cd-488e-bbcd-9a9b3b36c95b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.948322ms
Dec 17 21:50:27.870: INFO: Pod "pod-configmaps-ffacff51-19cd-488e-bbcd-9a9b3b36c95b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020961942s
Dec 17 21:50:29.889: INFO: Pod "pod-configmaps-ffacff51-19cd-488e-bbcd-9a9b3b36c95b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039867711s
Dec 17 21:50:31.901: INFO: Pod "pod-configmaps-ffacff51-19cd-488e-bbcd-9a9b3b36c95b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052204777s
Dec 17 21:50:33.921: INFO: Pod "pod-configmaps-ffacff51-19cd-488e-bbcd-9a9b3b36c95b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072172022s
STEP: Saw pod success
Dec 17 21:50:33.921: INFO: Pod "pod-configmaps-ffacff51-19cd-488e-bbcd-9a9b3b36c95b" satisfied condition "success or failure"
Dec 17 21:50:33.931: INFO: Trying to get logs from node jerma-node pod pod-configmaps-ffacff51-19cd-488e-bbcd-9a9b3b36c95b container configmap-volume-test: 
STEP: delete the pod
Dec 17 21:50:33.991: INFO: Waiting for pod pod-configmaps-ffacff51-19cd-488e-bbcd-9a9b3b36c95b to disappear
Dec 17 21:50:34.163: INFO: Pod pod-configmaps-ffacff51-19cd-488e-bbcd-9a9b3b36c95b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:50:34.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9707" for this suite.
Dec 17 21:50:40.220: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:50:40.419: INFO: namespace configmap-9707 deletion completed in 6.238321686s

• [SLOW TEST:14.784 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
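The scenario above combines two spec features: a ConfigMap volume whose items remap a key to a new file path (the "mappings"), and a pod-level runAsUser so the file is read as a non-root UID. A hedged sketch of such a pod; names, image, UID and paths are illustrative, the real test builds its pod inside the e2e framework:

package sketch

import (
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// configMapMappedPod mounts one ConfigMap key under a remapped path and runs
// as a non-root UID, mirroring the scenario exercised above.
func configMapMappedPod(cmName string) *corev1.Pod {
    uid := int64(1000) // illustrative non-root UID
    return &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-example"},
        Spec: corev1.PodSpec{
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
            Volumes: []corev1.Volume{{
                Name: "configmap-volume",
                VolumeSource: corev1.VolumeSource{
                    ConfigMap: &corev1.ConfigMapVolumeSource{
                        LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                        // Map the key "data-1" to the file "path/to/data-2".
                        Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
                    },
                },
            }},
            Containers: []corev1.Container{{
                Name:         "configmap-volume-test",
                Image:        "docker.io/library/busybox:1.29", // placeholder
                Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
                VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
            }},
            RestartPolicy: corev1.RestartPolicyNever,
        },
    }
}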
[sig-cli] Kubectl client Kubectl run deployment 
  should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:50:40.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1540
[It] should create a deployment from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 17 21:50:40.491: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-1010'
Dec 17 21:50:42.786: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 17 21:50:42.786: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n"
STEP: verifying the deployment e2e-test-httpd-deployment was created
STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created
[AfterEach] Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1545
Dec 17 21:50:44.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-1010'
Dec 17 21:50:45.093: INFO: stderr: ""
Dec 17 21:50:45.094: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:50:45.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1010" for this suite.
Dec 17 21:50:51.167: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:50:51.282: INFO: namespace kubectl-1010 deletion completed in 6.152381274s

• [SLOW TEST:10.863 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1536
    should create a deployment from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
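As the stderr above notes, kubectl run --generator=deployment/apps.v1 is deprecated; the non-deprecated CLI equivalent is kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine. The same object can also be created with client-go; a minimal sketch (the label key is an assumption, not necessarily what kubectl generates):

package sketch

import (
    "context"

    appsv1 "k8s.io/api/apps/v1"
    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// createHTTPDDeployment creates a single-replica httpd Deployment comparable
// to the one `kubectl run` produced above.
func createHTTPDDeployment(ctx context.Context, cs kubernetes.Interface, ns string) (*appsv1.Deployment, error) {
    replicas := int32(1)
    labels := map[string]string{"app": "e2e-test-httpd-deployment"} // illustrative label
    d := &appsv1.Deployment{
        ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-deployment"},
        Spec: appsv1.DeploymentSpec{
            Replicas: &replicas,
            Selector: &metav1.LabelSelector{MatchLabels: labels},
            Template: corev1.PodTemplateSpec{
                ObjectMeta: metav1.ObjectMeta{Labels: labels},
                Spec: corev1.PodSpec{
                    Containers: []corev1.Container{{
                        Name:  "httpd",
                        Image: "docker.io/library/httpd:2.4.38-alpine",
                    }},
                },
            },
        },
    }
    return cs.AppsV1().Deployments(ns).Create(ctx, d, metav1.CreateOptions{})
}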
SSSS
------------------------------
[sig-apps] Deployment 
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:50:51.283: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
[It] RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 21:50:51.424: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted)
Dec 17 21:50:51.439: INFO: Pod name sample-pod: Found 0 pods out of 1
Dec 17 21:50:56.450: INFO: Pod name sample-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Dec 17 21:50:58.473: INFO: Creating deployment "test-rolling-update-deployment"
Dec 17 21:50:58.494: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has
Dec 17 21:50:58.506: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created
Dec 17 21:51:00.545: INFO: Ensuring status for deployment "test-rolling-update-deployment" is as expected
Dec 17 21:51:00.561: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712216258, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712216258, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712216258, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712216258, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-55d946486\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 21:51:02.575: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712216258, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712216258, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712216258, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712216258, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-55d946486\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 21:51:04.608: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712216258, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712216258, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712216258, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712216258, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-55d946486\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 21:51:06.646: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712216258, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712216258, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712216258, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712216258, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-55d946486\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 21:51:08.570: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted)
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62
Dec 17 21:51:08.583: INFO: Deployment "test-rolling-update-deployment":
&Deployment{ObjectMeta:{test-rolling-update-deployment  deployment-7637 /apis/apps/v1/namespaces/deployment-7637/deployments/test-rolling-update-deployment b2e070a5-5257-4191-861d-0e5788a76f43 9143434 1 2019-12-17 21:50:58 +0000 UTC   map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod] map[] [] []  []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003a32f68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2019-12-17 21:50:58 +0000 UTC,LastTransitionTime:2019-12-17 21:50:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-55d946486" has successfully progressed.,LastUpdateTime:2019-12-17 21:51:07 +0000 UTC,LastTransitionTime:2019-12-17 21:50:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Dec 17 21:51:08.590: INFO: New ReplicaSet "test-rolling-update-deployment-55d946486" of Deployment "test-rolling-update-deployment":
&ReplicaSet{ObjectMeta:{test-rolling-update-deployment-55d946486  deployment-7637 /apis/apps/v1/namespaces/deployment-7637/replicasets/test-rolling-update-deployment-55d946486 beef8b19-183c-4dce-9da6-4599411cb797 9143423 1 2019-12-17 21:50:58 +0000 UTC   map[name:sample-pod pod-template-hash:55d946486] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment b2e070a5-5257-4191-861d-0e5788a76f43 0xc002ff5480 0xc002ff5481}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 55d946486,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod-template-hash:55d946486] map[] [] []  []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002ff5608  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Dec 17 21:51:08.590: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment":
Dec 17 21:51:08.590: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller  deployment-7637 /apis/apps/v1/namespaces/deployment-7637/replicasets/test-rolling-update-controller c6d85875-d3fb-495a-a7b5-52bbcd482e4e 9143433 2 2019-12-17 21:50:51 +0000 UTC   map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment b2e070a5-5257-4191-861d-0e5788a76f43 0xc002ff51a7 0xc002ff51a8}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc002ff5388  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Dec 17 21:51:08.656: INFO: Pod "test-rolling-update-deployment-55d946486-6f6pt" is available:
&Pod{ObjectMeta:{test-rolling-update-deployment-55d946486-6f6pt test-rolling-update-deployment-55d946486- deployment-7637 /api/v1/namespaces/deployment-7637/pods/test-rolling-update-deployment-55d946486-6f6pt 08053727-06fd-4a32-b9ad-f42c9abfcb7d 9143422 0 2019-12-17 21:50:58 +0000 UTC   map[name:sample-pod pod-template-hash:55d946486] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-55d946486 beef8b19-183c-4dce-9da6-4599411cb797 0xc003a33370 0xc003a33371}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ncz2l,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ncz2l,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:redis,Image:docker.io/library/redis:5.0.5-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ncz2l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 21:50:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 21:51:06 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 21:51:06 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 21:50:58 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.2,StartTime:2019-12-17 21:50:58 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:redis,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-17 21:51:06 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:redis:5.0.5-alpine,ImageID:docker-pullable://redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858,ContainerID:docker://e0bdebc02118d7b8820dd765da9e4a282e927a91a6501098d8ce2e7dc00b1b40,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:51:08.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-7637" for this suite.
Dec 17 21:51:16.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:51:16.877: INFO: namespace deployment-7637 deletion completed in 8.210497096s

• [SLOW TEST:25.594 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RollingUpdateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
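The Deployment dump above shows the strategy driving this test: RollingUpdate with MaxUnavailable 25% and MaxSurge 25% (the API defaults), so old pods are deleted only as fast as new ones become available. Spelled out against k8s.io/api:

package sketch

import (
    appsv1 "k8s.io/api/apps/v1"
    "k8s.io/apimachinery/pkg/util/intstr"
)

// rollingUpdateStrategy spells out the strategy shown in the Deployment dump
// above: at most 25% of pods unavailable and at most 25% extra pods during a
// rollout. These percentages are also what the API defaults to when unset.
func rollingUpdateStrategy() appsv1.DeploymentStrategy {
    maxUnavailable := intstr.FromString("25%")
    maxSurge := intstr.FromString("25%")
    return appsv1.DeploymentStrategy{
        Type: appsv1.RollingUpdateDeploymentStrategyType,
        RollingUpdate: &appsv1.RollingUpdateDeployment{
            MaxUnavailable: &maxUnavailable,
            MaxSurge:       &maxSurge,
        },
    }
}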
[k8s.io] Probing container 
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:51:16.877: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
[It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 21:51:47.079: INFO: Container started at 2019-12-17 21:51:22 +0000 UTC, pod became ready at 2019-12-17 21:51:45 +0000 UTC
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:51:47.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-4729" for this suite.
Dec 17 21:52:15.148: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:52:15.325: INFO: namespace container-probe-4729 deletion completed in 28.239742214s

• [SLOW TEST:58.448 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
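The invariant checked above is that a pod cannot report Ready before its readiness probe's initialDelaySeconds have elapsed (here the pod became ready about 23 seconds after the container started) and that a readiness failure never restarts the container. A sketch of a container with such a probe; image, command and the 20-second delay are illustrative, and the embedded struct is named ProbeHandler in recent k8s.io/api releases (older ones call it Handler):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
)

// readinessAfterDelay returns a container whose readiness probe only starts
// firing after initialDelaySeconds, so the pod cannot become Ready before
// that delay elapses. Unlike a liveness probe, a failing readiness probe
// marks the pod unready but never restarts the container.
func readinessAfterDelay() corev1.Container {
    return corev1.Container{
        Name:    "probe-test",
        Image:   "docker.io/library/busybox:1.29", // placeholder
        Command: []string{"sh", "-c", "touch /tmp/ready && sleep 3600"},
        ReadinessProbe: &corev1.Probe{
            ProbeHandler: corev1.ProbeHandler{
                Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}},
            },
            InitialDelaySeconds: 20,
            PeriodSeconds:       5,
        },
    }
}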
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:52:15.326: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 21:52:15.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows requests with any unknown properties
Dec 17 21:52:19.445: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5344 create -f -'
Dec 17 21:52:21.832: INFO: stderr: ""
Dec 17 21:52:21.833: INFO: stdout: "e2e-test-crd-publish-openapi-8107-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Dec 17 21:52:21.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5344 delete e2e-test-crd-publish-openapi-8107-crds test-cr'
Dec 17 21:52:21.978: INFO: stderr: ""
Dec 17 21:52:21.978: INFO: stdout: "e2e-test-crd-publish-openapi-8107-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
Dec 17 21:52:21.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5344 apply -f -'
Dec 17 21:52:22.501: INFO: stderr: ""
Dec 17 21:52:22.502: INFO: stdout: "e2e-test-crd-publish-openapi-8107-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n"
Dec 17 21:52:22.503: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5344 delete e2e-test-crd-publish-openapi-8107-crds test-cr'
Dec 17 21:52:22.615: INFO: stderr: ""
Dec 17 21:52:22.616: INFO: stdout: "e2e-test-crd-publish-openapi-8107-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Dec 17 21:52:22.616: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8107-crds'
Dec 17 21:52:23.052: INFO: stderr: ""
Dec 17 21:52:23.052: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-8107-crd\nVERSION:  crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:52:26.758: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5344" for this suite.
Dec 17 21:52:32.786: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:52:32.902: INFO: namespace crd-publish-openapi-5344 deletion completed in 6.137532204s

• [SLOW TEST:17.576 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields at the schema root [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
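"Preserving unknown fields at the schema root" means the CRD's openAPIV3Schema is just an object with x-kubernetes-preserve-unknown-fields: true, which is why the kubectl create/apply calls above accept arbitrary properties and why kubectl explain prints an empty DESCRIPTION. A sketch of such a CRD using the apiextensions v1 types; the group and kind names are placeholders, not the generated ones from the test:

package sketch

import (
    apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// preserveUnknownCRD declares a schema that is just an object preserving
// unknown fields at the root, so any properties pass validation.
func preserveUnknownCRD() *apiextensionsv1.CustomResourceDefinition {
    preserve := true
    return &apiextensionsv1.CustomResourceDefinition{
        ObjectMeta: metav1.ObjectMeta{Name: "examples.unknown-at-root.example.com"},
        Spec: apiextensionsv1.CustomResourceDefinitionSpec{
            Group: "unknown-at-root.example.com",
            Scope: apiextensionsv1.NamespaceScoped,
            Names: apiextensionsv1.CustomResourceDefinitionNames{
                Plural: "examples", Singular: "example", Kind: "Example", ListKind: "ExampleList",
            },
            Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
                Name: "v1", Served: true, Storage: true,
                Schema: &apiextensionsv1.CustomResourceValidation{
                    OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
                        Type:                   "object",
                        XPreserveUnknownFields: &preserve,
                    },
                },
            }},
        },
    }
}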
SSSSSSS
------------------------------
[sig-node] ConfigMap 
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:52:32.903: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap that has name configmap-test-emptyKey-ca64c113-5661-4520-9db0-f9fb3d6058ee
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:52:33.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3401" for this suite.
Dec 17 21:52:39.089: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:52:39.205: INFO: namespace configmap-3401 deletion completed in 6.136653087s

• [SLOW TEST:6.302 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32
  should fail to create ConfigMap with empty key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
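The test above relies on server-side validation: ConfigMap keys must be non-empty (and match a restricted character set), so a create with an empty key fails with an Invalid error and the object is never stored. A minimal sketch of that check (names are illustrative):

package sketch

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    apierrors "k8s.io/apimachinery/pkg/api/errors"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
)

// emptyKeyIsRejected shows the validation the test exercises: a ConfigMap
// whose data map contains an empty key is rejected server-side, so Create
// returns an Invalid error instead of an object.
func emptyKeyIsRejected(ctx context.Context, cs kubernetes.Interface, ns string) bool {
    cm := &corev1.ConfigMap{
        ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-emptykey"},
        Data:       map[string]string{"": "value-1"},
    }
    _, err := cs.CoreV1().ConfigMaps(ns).Create(ctx, cm, metav1.CreateOptions{})
    return apierrors.IsInvalid(err)
}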
SSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:52:39.206: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 21:52:39.289: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:52:40.373: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-4201" for this suite.
Dec 17 21:52:46.412: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:52:46.610: INFO: namespace custom-resource-definition-4201 deletion completed in 6.223445901s

• [SLOW TEST:7.405 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:42
    creating/deleting custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:52:46.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name projected-configmap-test-volume-15ba68fc-314a-473f-a424-af5c2f91aca3
STEP: Creating a pod to test consuming ConfigMaps
Dec 17 21:52:46.921: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-a447ec18-16ba-4cf5-b7d9-243dcc09df1d" in namespace "projected-5576" to be "success or failure"
Dec 17 21:52:47.006: INFO: Pod "pod-projected-configmaps-a447ec18-16ba-4cf5-b7d9-243dcc09df1d": Phase="Pending", Reason="", readiness=false. Elapsed: 84.93764ms
Dec 17 21:52:49.015: INFO: Pod "pod-projected-configmaps-a447ec18-16ba-4cf5-b7d9-243dcc09df1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093316578s
Dec 17 21:52:51.024: INFO: Pod "pod-projected-configmaps-a447ec18-16ba-4cf5-b7d9-243dcc09df1d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102616227s
Dec 17 21:52:53.033: INFO: Pod "pod-projected-configmaps-a447ec18-16ba-4cf5-b7d9-243dcc09df1d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.111872063s
Dec 17 21:52:55.127: INFO: Pod "pod-projected-configmaps-a447ec18-16ba-4cf5-b7d9-243dcc09df1d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.205314531s
STEP: Saw pod success
Dec 17 21:52:55.127: INFO: Pod "pod-projected-configmaps-a447ec18-16ba-4cf5-b7d9-243dcc09df1d" satisfied condition "success or failure"
Dec 17 21:52:55.140: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-a447ec18-16ba-4cf5-b7d9-243dcc09df1d container projected-configmap-volume-test: 
STEP: delete the pod
Dec 17 21:52:55.323: INFO: Waiting for pod pod-projected-configmaps-a447ec18-16ba-4cf5-b7d9-243dcc09df1d to disappear
Dec 17 21:52:55.336: INFO: Pod pod-projected-configmaps-a447ec18-16ba-4cf5-b7d9-243dcc09df1d no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:52:55.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5576" for this suite.
Dec 17 21:53:01.368: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:53:01.507: INFO: namespace projected-5576 deletion completed in 6.164363451s

• [SLOW TEST:14.893 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
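Projected ConfigMap volumes differ from the plain ConfigMap volume used earlier only in the volume source: a projected volume can merge ConfigMaps, Secrets, downward API fields and service-account tokens into a single directory. A sketch of the ConfigMap projection alone (names are illustrative):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
)

// projectedConfigMapVolume delivers the same ConfigMap data as a plain
// ConfigMap volume, but through the projected volume source, which may also
// combine other sources into the same mount.
func projectedConfigMapVolume(cmName string) corev1.Volume {
    return corev1.Volume{
        Name: "projected-configmap-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: cmName},
                    },
                }},
            },
        },
    }
}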
SSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:53:01.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 17 21:53:10.972: INFO: Expected: &{OK} to match Container's Termination Message: OK --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:53:11.081: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-442" for this suite.
Dec 17 21:53:17.153: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:53:17.413: INFO: namespace container-runtime-442 deletion completed in 6.323860211s

• [SLOW TEST:15.905 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:132
      should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
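The termination message checked above ("Expected: &{OK}") comes from the file named by terminationMessagePath; with FallbackToLogsOnError the kubelet falls back to the tail of the container log only when the container fails and that file is empty. A sketch of such a container spec (image and command are illustrative):

package sketch

import (
    corev1 "k8s.io/api/core/v1"
)

// terminationMessageContainer writes "OK" to the default termination-message
// file before exiting successfully, so the kubelet reports "OK" as the
// termination message and the log fallback is never needed.
func terminationMessageContainer() corev1.Container {
    return corev1.Container{
        Name:                     "termination-message-container",
        Image:                    "docker.io/library/busybox:1.29", // placeholder
        Command:                  []string{"sh", "-c", "echo -n OK > /dev/termination-log"},
        TerminationMessagePath:   "/dev/termination-log",
        TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
    }
}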
SSS
------------------------------
[sig-auth] ServiceAccounts 
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:53:17.415: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: getting the auto-created API token
STEP: reading a file in the container
Dec 17 21:53:26.249: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3187 pod-service-account-4ceb52f2-5846-4f2b-ac27-3c4623afe54f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token'
STEP: reading a file in the container
Dec 17 21:53:26.664: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3187 pod-service-account-4ceb52f2-5846-4f2b-ac27-3c4623afe54f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt'
STEP: reading a file in the container
Dec 17 21:53:27.072: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-3187 pod-service-account-4ceb52f2-5846-4f2b-ac27-3c4623afe54f -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace'
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:53:27.465: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-3187" for this suite.
Dec 17 21:53:33.498: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:53:33.659: INFO: namespace svcaccounts-3187 deletion completed in 6.188454179s

• [SLOW TEST:16.245 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should mount an API token into pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
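The three kubectl exec calls above read the files that the auto-mounted service-account volume projects into every pod. From inside the pod, the same data is just files under a well-known path; a sketch (standard mount path, illustrative function name):

package sketch

import (
    "os"
    "path/filepath"
)

// readMountedServiceAccount reads the three files the test cats above; they
// are mounted into any pod that uses the namespace's default service account.
func readMountedServiceAccount() (token, caCert, namespace string, err error) {
    const dir = "/var/run/secrets/kubernetes.io/serviceaccount"
    read := func(name string) (string, error) {
        b, err := os.ReadFile(filepath.Join(dir, name))
        return string(b), err
    }
    if token, err = read("token"); err != nil {
        return
    }
    if caCert, err = read("ca.crt"); err != nil {
        return
    }
    namespace, err = read("namespace")
    return
}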
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:53:33.662: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
[It] should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 21:53:33.817: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:53:44.174: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1313" for this suite.
Dec 17 21:54:28.257: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:54:28.394: INFO: namespace pods-1313 deletion completed in 44.205920915s

• [SLOW TEST:54.732 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should support remote command execution over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
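This test drives the pods/{name}/exec subresource over a raw WebSocket. Ordinary clients usually reach the same endpoint through client-go's remotecommand package; a sketch of that path, using the SPDY executor (the common client-go route, not the WebSocket transport the test itself exercises) and assuming a recent client-go that provides StreamWithContext:

package sketch

import (
    "bytes"
    "context"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/kubernetes/scheme"
    "k8s.io/client-go/rest"
    "k8s.io/client-go/tools/remotecommand"
)

// execInPod runs a command in a pod via the pods/{name}/exec subresource and
// returns its stdout, the same endpoint the websocket test above talks to.
func execInPod(ctx context.Context, cfg *rest.Config, cs kubernetes.Interface, ns, pod string, cmd []string) (string, error) {
    req := cs.CoreV1().RESTClient().Post().
        Resource("pods").Namespace(ns).Name(pod).SubResource("exec").
        VersionedParams(&corev1.PodExecOptions{
            Command: cmd,
            Stdout:  true,
            Stderr:  true,
        }, scheme.ParameterCodec)

    exec, err := remotecommand.NewSPDYExecutor(cfg, "POST", req.URL())
    if err != nil {
        return "", err
    }
    var stdout, stderr bytes.Buffer
    err = exec.StreamWithContext(ctx, remotecommand.StreamOptions{Stdout: &stdout, Stderr: &stderr})
    return stdout.String(), err
}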
SSSSSSSSSSS
------------------------------
[sig-network] Service endpoints latency 
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:54:28.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svc-latency
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating replication controller svc-latency-rc in namespace svc-latency-2582
I1217 21:54:28.554727       8 runners.go:184] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2582, replica count: 1
I1217 21:54:29.606306       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 21:54:30.608517       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 21:54:31.609754       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 21:54:32.611285       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 21:54:33.613553       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 21:54:34.614404       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 21:54:35.615189       8 runners.go:184] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 17 21:54:35.791: INFO: Created: latency-svc-4gfql
Dec 17 21:54:35.805: INFO: Got endpoints: latency-svc-4gfql [89.293994ms]
Dec 17 21:54:35.964: INFO: Created: latency-svc-6npsm
Dec 17 21:54:35.967: INFO: Got endpoints: latency-svc-6npsm [161.202491ms]
Dec 17 21:54:36.062: INFO: Created: latency-svc-tkpgw
Dec 17 21:54:36.067: INFO: Got endpoints: latency-svc-tkpgw [258.456436ms]
Dec 17 21:54:36.104: INFO: Created: latency-svc-n2xrj
Dec 17 21:54:36.128: INFO: Got endpoints: latency-svc-n2xrj [316.509233ms]
Dec 17 21:54:36.225: INFO: Created: latency-svc-plszm
Dec 17 21:54:36.225: INFO: Got endpoints: latency-svc-plszm [414.039835ms]
Dec 17 21:54:36.279: INFO: Created: latency-svc-26cdl
Dec 17 21:54:36.291: INFO: Got endpoints: latency-svc-26cdl [480.044476ms]
Dec 17 21:54:36.413: INFO: Created: latency-svc-s28bs
Dec 17 21:54:36.418: INFO: Got endpoints: latency-svc-s28bs [608.5198ms]
Dec 17 21:54:36.465: INFO: Created: latency-svc-m62ns
Dec 17 21:54:36.476: INFO: Got endpoints: latency-svc-m62ns [663.504971ms]
Dec 17 21:54:36.591: INFO: Created: latency-svc-pjsdp
Dec 17 21:54:36.610: INFO: Got endpoints: latency-svc-pjsdp [800.448686ms]
Dec 17 21:54:36.653: INFO: Created: latency-svc-bkhwv
Dec 17 21:54:36.671: INFO: Got endpoints: latency-svc-bkhwv [861.892468ms]
Dec 17 21:54:36.783: INFO: Created: latency-svc-hq4kc
Dec 17 21:54:36.783: INFO: Got endpoints: latency-svc-hq4kc [972.885663ms]
Dec 17 21:54:36.806: INFO: Created: latency-svc-qfwrw
Dec 17 21:54:36.849: INFO: Got endpoints: latency-svc-qfwrw [1.036298994s]
Dec 17 21:54:36.863: INFO: Created: latency-svc-c7vs4
Dec 17 21:54:37.154: INFO: Got endpoints: latency-svc-c7vs4 [1.345282657s]
Dec 17 21:54:37.157: INFO: Created: latency-svc-hjhnz
Dec 17 21:54:37.157: INFO: Got endpoints: latency-svc-hjhnz [1.344907568s]
Dec 17 21:54:37.202: INFO: Created: latency-svc-d44ck
Dec 17 21:54:37.410: INFO: Got endpoints: latency-svc-d44ck [1.598141592s]
Dec 17 21:54:37.464: INFO: Created: latency-svc-bsj2l
Dec 17 21:54:37.468: INFO: Created: latency-svc-kqmkf
Dec 17 21:54:37.483: INFO: Got endpoints: latency-svc-bsj2l [1.51540501s]
Dec 17 21:54:37.483: INFO: Got endpoints: latency-svc-kqmkf [1.672541527s]
Dec 17 21:54:37.649: INFO: Created: latency-svc-mcrc6
Dec 17 21:54:37.651: INFO: Got endpoints: latency-svc-mcrc6 [1.58319818s]
Dec 17 21:54:37.731: INFO: Created: latency-svc-n788n
Dec 17 21:54:37.800: INFO: Got endpoints: latency-svc-n788n [1.671523601s]
Dec 17 21:54:37.806: INFO: Created: latency-svc-pnxbx
Dec 17 21:54:37.812: INFO: Got endpoints: latency-svc-pnxbx [1.58749869s]
Dec 17 21:54:37.866: INFO: Created: latency-svc-kfglt
Dec 17 21:54:38.005: INFO: Got endpoints: latency-svc-kfglt [1.712860057s]
Dec 17 21:54:38.021: INFO: Created: latency-svc-78dpd
Dec 17 21:54:38.065: INFO: Got endpoints: latency-svc-78dpd [1.646541418s]
Dec 17 21:54:38.069: INFO: Created: latency-svc-gk2xh
Dec 17 21:54:38.083: INFO: Got endpoints: latency-svc-gk2xh [1.60717654s]
Dec 17 21:54:38.171: INFO: Created: latency-svc-8zhx2
Dec 17 21:54:38.183: INFO: Got endpoints: latency-svc-8zhx2 [1.571933658s]
Dec 17 21:54:38.209: INFO: Created: latency-svc-vbcsg
Dec 17 21:54:38.218: INFO: Got endpoints: latency-svc-vbcsg [1.545904463s]
Dec 17 21:54:38.333: INFO: Created: latency-svc-mpzss
Dec 17 21:54:38.340: INFO: Got endpoints: latency-svc-mpzss [157.704681ms]
Dec 17 21:54:38.377: INFO: Created: latency-svc-hzpw6
Dec 17 21:54:38.379: INFO: Got endpoints: latency-svc-hzpw6 [1.595949412s]
Dec 17 21:54:38.568: INFO: Created: latency-svc-stpc9
Dec 17 21:54:38.589: INFO: Got endpoints: latency-svc-stpc9 [1.739173384s]
Dec 17 21:54:38.621: INFO: Created: latency-svc-c62lv
Dec 17 21:54:38.655: INFO: Created: latency-svc-pwcbp
Dec 17 21:54:38.656: INFO: Got endpoints: latency-svc-c62lv [1.498235548s]
Dec 17 21:54:38.737: INFO: Got endpoints: latency-svc-pwcbp [1.58143935s]
Dec 17 21:54:38.759: INFO: Created: latency-svc-zjv58
Dec 17 21:54:38.767: INFO: Got endpoints: latency-svc-zjv58 [1.356975314s]
Dec 17 21:54:38.814: INFO: Created: latency-svc-28659
Dec 17 21:54:38.817: INFO: Got endpoints: latency-svc-28659 [1.333518972s]
Dec 17 21:54:38.989: INFO: Created: latency-svc-4xcwk
Dec 17 21:54:38.995: INFO: Got endpoints: latency-svc-4xcwk [1.512226651s]
Dec 17 21:54:39.077: INFO: Created: latency-svc-f68gs
Dec 17 21:54:39.191: INFO: Got endpoints: latency-svc-f68gs [1.539858716s]
Dec 17 21:54:39.232: INFO: Created: latency-svc-9hphz
Dec 17 21:54:39.350: INFO: Got endpoints: latency-svc-9hphz [1.550282276s]
Dec 17 21:54:39.379: INFO: Created: latency-svc-74k7k
Dec 17 21:54:39.383: INFO: Got endpoints: latency-svc-74k7k [1.570641116s]
Dec 17 21:54:39.520: INFO: Created: latency-svc-mx8wj
Dec 17 21:54:39.530: INFO: Got endpoints: latency-svc-mx8wj [1.524921948s]
Dec 17 21:54:39.591: INFO: Created: latency-svc-gxx4g
Dec 17 21:54:39.595: INFO: Got endpoints: latency-svc-gxx4g [1.530529927s]
Dec 17 21:54:39.743: INFO: Created: latency-svc-6gnwd
Dec 17 21:54:39.794: INFO: Created: latency-svc-5nl2h
Dec 17 21:54:39.802: INFO: Got endpoints: latency-svc-6gnwd [1.718702034s]
Dec 17 21:54:39.807: INFO: Got endpoints: latency-svc-5nl2h [1.588841897s]
Dec 17 21:54:39.901: INFO: Created: latency-svc-bktbg
Dec 17 21:54:39.908: INFO: Got endpoints: latency-svc-bktbg [1.567239097s]
Dec 17 21:54:39.948: INFO: Created: latency-svc-tzb8k
Dec 17 21:54:39.979: INFO: Got endpoints: latency-svc-tzb8k [1.59914814s]
Dec 17 21:54:39.983: INFO: Created: latency-svc-dj6gq
Dec 17 21:54:39.989: INFO: Got endpoints: latency-svc-dj6gq [1.399698554s]
Dec 17 21:54:40.106: INFO: Created: latency-svc-xsqz8
Dec 17 21:54:40.108: INFO: Got endpoints: latency-svc-xsqz8 [1.452464192s]
Dec 17 21:54:40.241: INFO: Created: latency-svc-6b6t7
Dec 17 21:54:40.241: INFO: Got endpoints: latency-svc-6b6t7 [1.503732461s]
Dec 17 21:54:40.285: INFO: Created: latency-svc-9pnqt
Dec 17 21:54:40.289: INFO: Got endpoints: latency-svc-9pnqt [1.52170733s]
Dec 17 21:54:40.326: INFO: Created: latency-svc-pqpb8
Dec 17 21:54:40.333: INFO: Got endpoints: latency-svc-pqpb8 [1.516156154s]
Dec 17 21:54:40.430: INFO: Created: latency-svc-5krft
Dec 17 21:54:40.439: INFO: Got endpoints: latency-svc-5krft [1.443161543s]
Dec 17 21:54:40.496: INFO: Created: latency-svc-skgnz
Dec 17 21:54:40.510: INFO: Got endpoints: latency-svc-skgnz [1.318218575s]
Dec 17 21:54:40.703: INFO: Created: latency-svc-nfkvf
Dec 17 21:54:40.729: INFO: Got endpoints: latency-svc-nfkvf [1.37862948s]
Dec 17 21:54:40.918: INFO: Created: latency-svc-748nr
Dec 17 21:54:40.928: INFO: Got endpoints: latency-svc-748nr [1.544478741s]
Dec 17 21:54:41.217: INFO: Created: latency-svc-rfv8w
Dec 17 21:54:41.248: INFO: Got endpoints: latency-svc-rfv8w [1.718223251s]
Dec 17 21:54:41.410: INFO: Created: latency-svc-csnrk
Dec 17 21:54:41.422: INFO: Got endpoints: latency-svc-csnrk [1.825724163s]
Dec 17 21:54:41.501: INFO: Created: latency-svc-rqf8r
Dec 17 21:54:41.508: INFO: Got endpoints: latency-svc-rqf8r [1.705144224s]
Dec 17 21:54:41.873: INFO: Created: latency-svc-mx5ws
Dec 17 21:54:41.876: INFO: Got endpoints: latency-svc-mx5ws [2.068925723s]
Dec 17 21:54:41.957: INFO: Created: latency-svc-mmv9s
Dec 17 21:54:42.125: INFO: Got endpoints: latency-svc-mmv9s [2.216937636s]
Dec 17 21:54:42.206: INFO: Created: latency-svc-84vhf
Dec 17 21:54:42.331: INFO: Got endpoints: latency-svc-84vhf [2.352176451s]
Dec 17 21:54:42.415: INFO: Created: latency-svc-rqxls
Dec 17 21:54:42.424: INFO: Got endpoints: latency-svc-rqxls [2.435404629s]
Dec 17 21:54:42.517: INFO: Created: latency-svc-78wqg
Dec 17 21:54:42.521: INFO: Got endpoints: latency-svc-78wqg [2.412117891s]
Dec 17 21:54:42.568: INFO: Created: latency-svc-zlmhg
Dec 17 21:54:42.592: INFO: Got endpoints: latency-svc-zlmhg [2.351587262s]
Dec 17 21:54:42.685: INFO: Created: latency-svc-kq5gd
Dec 17 21:54:42.688: INFO: Got endpoints: latency-svc-kq5gd [2.399245913s]
Dec 17 21:54:42.717: INFO: Created: latency-svc-d62hd
Dec 17 21:54:42.726: INFO: Got endpoints: latency-svc-d62hd [2.39238671s]
Dec 17 21:54:42.775: INFO: Created: latency-svc-xtl7d
Dec 17 21:54:42.841: INFO: Got endpoints: latency-svc-xtl7d [2.401771666s]
Dec 17 21:54:42.899: INFO: Created: latency-svc-g45b4
Dec 17 21:54:42.900: INFO: Got endpoints: latency-svc-g45b4 [2.389638435s]
Dec 17 21:54:43.194: INFO: Created: latency-svc-swhm8
Dec 17 21:54:43.197: INFO: Got endpoints: latency-svc-swhm8 [2.467243113s]
Dec 17 21:54:43.349: INFO: Created: latency-svc-p2ww8
Dec 17 21:54:43.351: INFO: Got endpoints: latency-svc-p2ww8 [2.422937435s]
Dec 17 21:54:43.399: INFO: Created: latency-svc-f97c9
Dec 17 21:54:43.508: INFO: Got endpoints: latency-svc-f97c9 [2.258654814s]
Dec 17 21:54:43.512: INFO: Created: latency-svc-8qv4n
Dec 17 21:54:43.519: INFO: Got endpoints: latency-svc-8qv4n [2.097383094s]
Dec 17 21:54:43.582: INFO: Created: latency-svc-hnw2c
Dec 17 21:54:43.604: INFO: Got endpoints: latency-svc-hnw2c [2.096025165s]
Dec 17 21:54:43.813: INFO: Created: latency-svc-j6cbf
Dec 17 21:54:43.825: INFO: Got endpoints: latency-svc-j6cbf [1.949058089s]
Dec 17 21:54:43.912: INFO: Created: latency-svc-fz948
Dec 17 21:54:44.031: INFO: Got endpoints: latency-svc-fz948 [1.905641164s]
Dec 17 21:54:44.060: INFO: Created: latency-svc-j5rlh
Dec 17 21:54:44.067: INFO: Got endpoints: latency-svc-j5rlh [1.735226375s]
Dec 17 21:54:44.128: INFO: Created: latency-svc-cx7kj
Dec 17 21:54:44.215: INFO: Got endpoints: latency-svc-cx7kj [1.790494349s]
Dec 17 21:54:44.225: INFO: Created: latency-svc-w2dft
Dec 17 21:54:44.227: INFO: Got endpoints: latency-svc-w2dft [1.705908083s]
Dec 17 21:54:44.261: INFO: Created: latency-svc-nxh4l
Dec 17 21:54:44.264: INFO: Got endpoints: latency-svc-nxh4l [1.67076409s]
Dec 17 21:54:44.306: INFO: Created: latency-svc-f2x86
Dec 17 21:54:44.312: INFO: Got endpoints: latency-svc-f2x86 [1.623178392s]
Dec 17 21:54:44.399: INFO: Created: latency-svc-ps95v
Dec 17 21:54:44.406: INFO: Got endpoints: latency-svc-ps95v [1.679813269s]
Dec 17 21:54:44.447: INFO: Created: latency-svc-fhvtl
Dec 17 21:54:44.458: INFO: Got endpoints: latency-svc-fhvtl [1.616892922s]
Dec 17 21:54:44.585: INFO: Created: latency-svc-tw7dz
Dec 17 21:54:44.590: INFO: Got endpoints: latency-svc-tw7dz [1.690191213s]
Dec 17 21:54:44.639: INFO: Created: latency-svc-wtqzk
Dec 17 21:54:44.639: INFO: Got endpoints: latency-svc-wtqzk [1.441928192s]
Dec 17 21:54:44.674: INFO: Created: latency-svc-mcwh9
Dec 17 21:54:44.680: INFO: Got endpoints: latency-svc-mcwh9 [1.328566437s]
Dec 17 21:54:44.833: INFO: Created: latency-svc-75bp5
Dec 17 21:54:44.834: INFO: Got endpoints: latency-svc-75bp5 [1.325507643s]
Dec 17 21:54:44.959: INFO: Created: latency-svc-74dx7
Dec 17 21:54:44.963: INFO: Got endpoints: latency-svc-74dx7 [1.443481809s]
Dec 17 21:54:45.048: INFO: Created: latency-svc-l48rj
Dec 17 21:54:45.183: INFO: Got endpoints: latency-svc-l48rj [1.579097862s]
Dec 17 21:54:45.267: INFO: Created: latency-svc-nnwv2
Dec 17 21:54:45.424: INFO: Got endpoints: latency-svc-nnwv2 [1.598800257s]
Dec 17 21:54:45.427: INFO: Created: latency-svc-6wpmr
Dec 17 21:54:45.439: INFO: Got endpoints: latency-svc-6wpmr [1.407586701s]
Dec 17 21:54:45.498: INFO: Created: latency-svc-v5gkw
Dec 17 21:54:45.511: INFO: Got endpoints: latency-svc-v5gkw [1.444051021s]
Dec 17 21:54:45.632: INFO: Created: latency-svc-5phjb
Dec 17 21:54:45.633: INFO: Got endpoints: latency-svc-5phjb [1.418017751s]
Dec 17 21:54:45.702: INFO: Created: latency-svc-8xgj6
Dec 17 21:54:45.857: INFO: Got endpoints: latency-svc-8xgj6 [1.63042047s]
Dec 17 21:54:45.870: INFO: Created: latency-svc-sb6dw
Dec 17 21:54:45.880: INFO: Got endpoints: latency-svc-sb6dw [1.616429558s]
Dec 17 21:54:45.941: INFO: Created: latency-svc-lmzmg
Dec 17 21:54:45.942: INFO: Got endpoints: latency-svc-lmzmg [1.630366038s]
Dec 17 21:54:46.047: INFO: Created: latency-svc-stzss
Dec 17 21:54:46.047: INFO: Got endpoints: latency-svc-stzss [1.640977973s]
Dec 17 21:54:46.094: INFO: Created: latency-svc-bvltf
Dec 17 21:54:46.113: INFO: Got endpoints: latency-svc-bvltf [1.654345141s]
Dec 17 21:54:46.209: INFO: Created: latency-svc-6qzcr
Dec 17 21:54:46.211: INFO: Got endpoints: latency-svc-6qzcr [1.621112698s]
Dec 17 21:54:46.275: INFO: Created: latency-svc-rbk9r
Dec 17 21:54:46.284: INFO: Got endpoints: latency-svc-rbk9r [1.644790937s]
Dec 17 21:54:46.391: INFO: Created: latency-svc-8xlrg
Dec 17 21:54:46.395: INFO: Got endpoints: latency-svc-8xlrg [1.714354067s]
Dec 17 21:54:46.437: INFO: Created: latency-svc-g4qnd
Dec 17 21:54:46.471: INFO: Got endpoints: latency-svc-g4qnd [1.636635015s]
Dec 17 21:54:46.475: INFO: Created: latency-svc-k29vx
Dec 17 21:54:46.480: INFO: Got endpoints: latency-svc-k29vx [1.516217155s]
Dec 17 21:54:46.571: INFO: Created: latency-svc-z8pvj
Dec 17 21:54:46.577: INFO: Got endpoints: latency-svc-z8pvj [1.392638964s]
Dec 17 21:54:46.618: INFO: Created: latency-svc-rsdl6
Dec 17 21:54:46.652: INFO: Got endpoints: latency-svc-rsdl6 [1.227155837s]
Dec 17 21:54:46.654: INFO: Created: latency-svc-886nm
Dec 17 21:54:46.730: INFO: Got endpoints: latency-svc-886nm [1.290051207s]
Dec 17 21:54:46.758: INFO: Created: latency-svc-5j7l2
Dec 17 21:54:46.764: INFO: Got endpoints: latency-svc-5j7l2 [1.2531412s]
Dec 17 21:54:46.850: INFO: Created: latency-svc-mqfm7
Dec 17 21:54:46.935: INFO: Got endpoints: latency-svc-mqfm7 [1.301108507s]
Dec 17 21:54:46.944: INFO: Created: latency-svc-vxtsh
Dec 17 21:54:46.946: INFO: Got endpoints: latency-svc-vxtsh [1.087698063s]
Dec 17 21:54:47.034: INFO: Created: latency-svc-kmblg
Dec 17 21:54:47.115: INFO: Got endpoints: latency-svc-kmblg [1.234853319s]
Dec 17 21:54:47.155: INFO: Created: latency-svc-9wswr
Dec 17 21:54:47.161: INFO: Got endpoints: latency-svc-9wswr [1.217936054s]
Dec 17 21:54:47.394: INFO: Created: latency-svc-d4m4k
Dec 17 21:54:47.405: INFO: Got endpoints: latency-svc-d4m4k [1.357821284s]
Dec 17 21:54:47.473: INFO: Created: latency-svc-xh6zp
Dec 17 21:54:47.477: INFO: Got endpoints: latency-svc-xh6zp [1.36339221s]
Dec 17 21:54:47.610: INFO: Created: latency-svc-xtlsz
Dec 17 21:54:47.616: INFO: Got endpoints: latency-svc-xtlsz [1.404527475s]
Dec 17 21:54:47.755: INFO: Created: latency-svc-g6w99
Dec 17 21:54:47.756: INFO: Got endpoints: latency-svc-g6w99 [1.471427416s]
Dec 17 21:54:47.837: INFO: Created: latency-svc-h2pfc
Dec 17 21:54:47.937: INFO: Got endpoints: latency-svc-h2pfc [1.5423042s]
Dec 17 21:54:47.963: INFO: Created: latency-svc-pzvmn
Dec 17 21:54:47.980: INFO: Got endpoints: latency-svc-pzvmn [1.508987747s]
Dec 17 21:54:48.015: INFO: Created: latency-svc-4djmk
Dec 17 21:54:48.023: INFO: Got endpoints: latency-svc-4djmk [1.543011861s]
Dec 17 21:54:48.134: INFO: Created: latency-svc-dpqgg
Dec 17 21:54:48.141: INFO: Got endpoints: latency-svc-dpqgg [1.56426686s]
Dec 17 21:54:48.178: INFO: Created: latency-svc-pm5jj
Dec 17 21:54:48.185: INFO: Got endpoints: latency-svc-pm5jj [1.532856543s]
Dec 17 21:54:48.237: INFO: Created: latency-svc-m89wv
Dec 17 21:54:48.318: INFO: Got endpoints: latency-svc-m89wv [1.588107065s]
Dec 17 21:54:48.339: INFO: Created: latency-svc-h2x5z
Dec 17 21:54:48.351: INFO: Got endpoints: latency-svc-h2x5z [1.586738495s]
Dec 17 21:54:48.394: INFO: Created: latency-svc-4tlws
Dec 17 21:54:48.397: INFO: Got endpoints: latency-svc-4tlws [1.46268409s]
Dec 17 21:54:48.564: INFO: Created: latency-svc-6rdmq
Dec 17 21:54:48.570: INFO: Got endpoints: latency-svc-6rdmq [1.624344892s]
Dec 17 21:54:48.618: INFO: Created: latency-svc-mrst7
Dec 17 21:54:48.618: INFO: Got endpoints: latency-svc-mrst7 [1.502603351s]
Dec 17 21:54:48.728: INFO: Created: latency-svc-rmcck
Dec 17 21:54:48.731: INFO: Got endpoints: latency-svc-rmcck [1.569857221s]
Dec 17 21:54:48.774: INFO: Created: latency-svc-5gkw2
Dec 17 21:54:48.779: INFO: Got endpoints: latency-svc-5gkw2 [1.373721393s]
Dec 17 21:54:48.810: INFO: Created: latency-svc-vvxhl
Dec 17 21:54:48.900: INFO: Got endpoints: latency-svc-vvxhl [1.423205445s]
Dec 17 21:54:48.918: INFO: Created: latency-svc-45lz2
Dec 17 21:54:48.926: INFO: Got endpoints: latency-svc-45lz2 [1.309992743s]
Dec 17 21:54:48.987: INFO: Created: latency-svc-cp6jk
Dec 17 21:54:48.991: INFO: Got endpoints: latency-svc-cp6jk [1.234312075s]
Dec 17 21:54:49.088: INFO: Created: latency-svc-hsjv6
Dec 17 21:54:49.100: INFO: Got endpoints: latency-svc-hsjv6 [1.163078932s]
Dec 17 21:54:49.151: INFO: Created: latency-svc-tgzfc
Dec 17 21:54:49.152: INFO: Got endpoints: latency-svc-tgzfc [1.171368575s]
Dec 17 21:54:49.333: INFO: Created: latency-svc-gd5dx
Dec 17 21:54:49.343: INFO: Got endpoints: latency-svc-gd5dx [1.319439578s]
Dec 17 21:54:49.440: INFO: Created: latency-svc-8mpzf
Dec 17 21:54:49.577: INFO: Got endpoints: latency-svc-8mpzf [1.435056934s]
Dec 17 21:54:49.639: INFO: Created: latency-svc-95rw2
Dec 17 21:54:49.667: INFO: Got endpoints: latency-svc-95rw2 [1.482071355s]
Dec 17 21:54:49.825: INFO: Created: latency-svc-78579
Dec 17 21:54:49.829: INFO: Got endpoints: latency-svc-78579 [1.510709309s]
Dec 17 21:54:50.044: INFO: Created: latency-svc-62kz5
Dec 17 21:54:50.050: INFO: Got endpoints: latency-svc-62kz5 [1.698129943s]
Dec 17 21:54:50.230: INFO: Created: latency-svc-5xgnh
Dec 17 21:54:50.231: INFO: Got endpoints: latency-svc-5xgnh [1.83347256s]
Dec 17 21:54:50.317: INFO: Created: latency-svc-7nnn5
Dec 17 21:54:50.534: INFO: Got endpoints: latency-svc-7nnn5 [1.96407176s]
Dec 17 21:54:50.580: INFO: Created: latency-svc-hgp52
Dec 17 21:54:50.582: INFO: Got endpoints: latency-svc-hgp52 [1.963704283s]
Dec 17 21:54:50.814: INFO: Created: latency-svc-n5542
Dec 17 21:54:50.833: INFO: Got endpoints: latency-svc-n5542 [2.102250163s]
Dec 17 21:54:50.917: INFO: Created: latency-svc-vfkm2
Dec 17 21:54:51.048: INFO: Got endpoints: latency-svc-vfkm2 [2.269393518s]
Dec 17 21:54:51.087: INFO: Created: latency-svc-lv247
Dec 17 21:54:51.091: INFO: Got endpoints: latency-svc-lv247 [2.190171603s]
Dec 17 21:54:51.279: INFO: Created: latency-svc-gg8w4
Dec 17 21:54:51.291: INFO: Got endpoints: latency-svc-gg8w4 [2.36471473s]
Dec 17 21:54:51.428: INFO: Created: latency-svc-dj8dd
Dec 17 21:54:51.440: INFO: Got endpoints: latency-svc-dj8dd [2.449298052s]
Dec 17 21:54:51.485: INFO: Created: latency-svc-r97gt
Dec 17 21:54:51.498: INFO: Got endpoints: latency-svc-r97gt [2.397763054s]
Dec 17 21:54:51.620: INFO: Created: latency-svc-j969l
Dec 17 21:54:51.643: INFO: Got endpoints: latency-svc-j969l [2.490878211s]
Dec 17 21:54:51.694: INFO: Created: latency-svc-2wg2v
Dec 17 21:54:51.695: INFO: Got endpoints: latency-svc-2wg2v [2.352408819s]
Dec 17 21:54:51.888: INFO: Created: latency-svc-2s4tr
Dec 17 21:54:52.054: INFO: Got endpoints: latency-svc-2s4tr [2.4776024s]
Dec 17 21:54:52.057: INFO: Created: latency-svc-n8tdq
Dec 17 21:54:52.065: INFO: Got endpoints: latency-svc-n8tdq [2.397268245s]
Dec 17 21:54:52.137: INFO: Created: latency-svc-ss67n
Dec 17 21:54:52.137: INFO: Got endpoints: latency-svc-ss67n [2.30779809s]
Dec 17 21:54:52.221: INFO: Created: latency-svc-dmjb6
Dec 17 21:54:52.221: INFO: Got endpoints: latency-svc-dmjb6 [2.171298329s]
Dec 17 21:54:52.273: INFO: Created: latency-svc-h4gp8
Dec 17 21:54:52.279: INFO: Got endpoints: latency-svc-h4gp8 [2.047355108s]
Dec 17 21:54:52.405: INFO: Created: latency-svc-v277b
Dec 17 21:54:52.408: INFO: Got endpoints: latency-svc-v277b [1.872953141s]
Dec 17 21:54:52.462: INFO: Created: latency-svc-kc8nc
Dec 17 21:54:52.474: INFO: Got endpoints: latency-svc-kc8nc [1.891694475s]
Dec 17 21:54:52.548: INFO: Created: latency-svc-csjjt
Dec 17 21:54:52.553: INFO: Got endpoints: latency-svc-csjjt [1.719471006s]
Dec 17 21:54:52.593: INFO: Created: latency-svc-l8xlc
Dec 17 21:54:52.616: INFO: Got endpoints: latency-svc-l8xlc [1.566201223s]
Dec 17 21:54:52.758: INFO: Created: latency-svc-2tp6p
Dec 17 21:54:52.759: INFO: Got endpoints: latency-svc-2tp6p [1.667511032s]
Dec 17 21:54:52.801: INFO: Created: latency-svc-555j7
Dec 17 21:54:52.807: INFO: Got endpoints: latency-svc-555j7 [1.516121617s]
Dec 17 21:54:53.001: INFO: Created: latency-svc-jvbmg
Dec 17 21:54:53.014: INFO: Got endpoints: latency-svc-jvbmg [1.573911016s]
Dec 17 21:54:53.088: INFO: Created: latency-svc-92lv8
Dec 17 21:54:53.094: INFO: Got endpoints: latency-svc-92lv8 [1.59520284s]
Dec 17 21:54:53.372: INFO: Created: latency-svc-gn7l4
Dec 17 21:54:53.392: INFO: Got endpoints: latency-svc-gn7l4 [1.748873928s]
Dec 17 21:54:53.562: INFO: Created: latency-svc-qqjgp
Dec 17 21:54:53.569: INFO: Got endpoints: latency-svc-qqjgp [1.873495763s]
Dec 17 21:54:53.618: INFO: Created: latency-svc-2sz72
Dec 17 21:54:53.633: INFO: Got endpoints: latency-svc-2sz72 [1.578917458s]
Dec 17 21:54:53.811: INFO: Created: latency-svc-m8kz6
Dec 17 21:54:53.811: INFO: Got endpoints: latency-svc-m8kz6 [1.746651041s]
Dec 17 21:54:54.025: INFO: Created: latency-svc-dc49s
Dec 17 21:54:54.036: INFO: Got endpoints: latency-svc-dc49s [1.898417227s]
Dec 17 21:54:54.103: INFO: Created: latency-svc-sbc9m
Dec 17 21:54:54.105: INFO: Got endpoints: latency-svc-sbc9m [1.884191748s]
Dec 17 21:54:54.184: INFO: Created: latency-svc-k9dk6
Dec 17 21:54:54.197: INFO: Got endpoints: latency-svc-k9dk6 [1.918364871s]
Dec 17 21:54:54.258: INFO: Created: latency-svc-n6m8n
Dec 17 21:54:54.397: INFO: Got endpoints: latency-svc-n6m8n [1.988591857s]
Dec 17 21:54:54.457: INFO: Created: latency-svc-hjbrb
Dec 17 21:54:54.465: INFO: Got endpoints: latency-svc-hjbrb [1.990755636s]
Dec 17 21:54:54.566: INFO: Created: latency-svc-599kt
Dec 17 21:54:54.577: INFO: Got endpoints: latency-svc-599kt [2.0234695s]
Dec 17 21:54:54.608: INFO: Created: latency-svc-cwclf
Dec 17 21:54:54.614: INFO: Got endpoints: latency-svc-cwclf [1.997492973s]
Dec 17 21:54:54.655: INFO: Created: latency-svc-jqbdh
Dec 17 21:54:54.655: INFO: Got endpoints: latency-svc-jqbdh [1.896177471s]
Dec 17 21:54:54.728: INFO: Created: latency-svc-74nlr
Dec 17 21:54:54.734: INFO: Got endpoints: latency-svc-74nlr [1.926690991s]
Dec 17 21:54:54.763: INFO: Created: latency-svc-pjhcz
Dec 17 21:54:54.779: INFO: Got endpoints: latency-svc-pjhcz [1.764465944s]
Dec 17 21:54:54.862: INFO: Created: latency-svc-5vjvx
Dec 17 21:54:54.875: INFO: Got endpoints: latency-svc-5vjvx [1.781156234s]
Dec 17 21:54:54.949: INFO: Created: latency-svc-4llkg
Dec 17 21:54:54.953: INFO: Got endpoints: latency-svc-4llkg [1.561035349s]
Dec 17 21:54:55.069: INFO: Created: latency-svc-r6qjk
Dec 17 21:54:55.073: INFO: Got endpoints: latency-svc-r6qjk [1.50417643s]
Dec 17 21:54:55.128: INFO: Created: latency-svc-46sbz
Dec 17 21:54:55.133: INFO: Got endpoints: latency-svc-46sbz [1.499346391s]
Dec 17 21:54:55.303: INFO: Created: latency-svc-74z7t
Dec 17 21:54:55.303: INFO: Got endpoints: latency-svc-74z7t [1.49163455s]
Dec 17 21:54:55.380: INFO: Created: latency-svc-v22f5
Dec 17 21:54:55.386: INFO: Got endpoints: latency-svc-v22f5 [1.350050617s]
Dec 17 21:54:55.472: INFO: Created: latency-svc-nrjvk
Dec 17 21:54:55.472: INFO: Got endpoints: latency-svc-nrjvk [1.366194354s]
Dec 17 21:54:55.530: INFO: Created: latency-svc-pmlxd
Dec 17 21:54:55.548: INFO: Got endpoints: latency-svc-pmlxd [1.350810119s]
Dec 17 21:54:55.629: INFO: Created: latency-svc-mb6br
Dec 17 21:54:55.633: INFO: Got endpoints: latency-svc-mb6br [1.235979163s]
Dec 17 21:54:55.662: INFO: Created: latency-svc-rzscw
Dec 17 21:54:55.690: INFO: Got endpoints: latency-svc-rzscw [1.224084256s]
Dec 17 21:54:55.694: INFO: Created: latency-svc-5gx78
Dec 17 21:54:55.704: INFO: Got endpoints: latency-svc-5gx78 [1.126702062s]
Dec 17 21:54:55.842: INFO: Created: latency-svc-k2g9m
Dec 17 21:54:55.849: INFO: Got endpoints: latency-svc-k2g9m [1.234605085s]
Dec 17 21:54:55.945: INFO: Created: latency-svc-xbcz7
Dec 17 21:54:56.008: INFO: Got endpoints: latency-svc-xbcz7 [1.353076287s]
Dec 17 21:54:56.060: INFO: Created: latency-svc-8b98k
Dec 17 21:54:56.062: INFO: Got endpoints: latency-svc-8b98k [1.327239132s]
Dec 17 21:54:56.206: INFO: Created: latency-svc-m6zr2
Dec 17 21:54:56.206: INFO: Got endpoints: latency-svc-m6zr2 [1.427206485s]
Dec 17 21:54:56.264: INFO: Created: latency-svc-r9n5j
Dec 17 21:54:56.355: INFO: Got endpoints: latency-svc-r9n5j [1.479486232s]
Dec 17 21:54:56.359: INFO: Created: latency-svc-njrv4
Dec 17 21:54:56.364: INFO: Got endpoints: latency-svc-njrv4 [1.410338169s]
Dec 17 21:54:56.420: INFO: Created: latency-svc-cfb6d
Dec 17 21:54:56.421: INFO: Got endpoints: latency-svc-cfb6d [1.347698763s]
Dec 17 21:54:56.557: INFO: Created: latency-svc-vrnc4
Dec 17 21:54:56.580: INFO: Got endpoints: latency-svc-vrnc4 [1.446613608s]
Dec 17 21:54:56.585: INFO: Created: latency-svc-fb2sd
Dec 17 21:54:56.593: INFO: Got endpoints: latency-svc-fb2sd [1.28880895s]
Dec 17 21:54:56.685: INFO: Created: latency-svc-jxg4s
Dec 17 21:54:56.691: INFO: Got endpoints: latency-svc-jxg4s [1.304240149s]
Dec 17 21:54:56.734: INFO: Created: latency-svc-79xwx
Dec 17 21:54:56.749: INFO: Got endpoints: latency-svc-79xwx [1.277314165s]
Dec 17 21:54:56.780: INFO: Created: latency-svc-b4rmp
Dec 17 21:54:56.780: INFO: Got endpoints: latency-svc-b4rmp [1.231537322s]
Dec 17 21:54:56.877: INFO: Created: latency-svc-2t4f5
Dec 17 21:54:56.889: INFO: Got endpoints: latency-svc-2t4f5 [1.255416161s]
Dec 17 21:54:57.061: INFO: Created: latency-svc-rgxr2
Dec 17 21:54:57.062: INFO: Got endpoints: latency-svc-rgxr2 [1.372462693s]
Dec 17 21:54:57.102: INFO: Created: latency-svc-gh2v6
Dec 17 21:54:57.110: INFO: Got endpoints: latency-svc-gh2v6 [1.405917125s]
Dec 17 21:54:57.146: INFO: Created: latency-svc-pn85l
Dec 17 21:54:57.153: INFO: Got endpoints: latency-svc-pn85l [1.303611238s]
Dec 17 21:54:57.276: INFO: Created: latency-svc-pwt2c
Dec 17 21:54:57.280: INFO: Got endpoints: latency-svc-pwt2c [1.271481213s]
Dec 17 21:54:57.473: INFO: Created: latency-svc-wldg4
Dec 17 21:54:57.474: INFO: Got endpoints: latency-svc-wldg4 [1.411811272s]
Dec 17 21:54:57.539: INFO: Created: latency-svc-vj66n
Dec 17 21:54:57.563: INFO: Got endpoints: latency-svc-vj66n [1.356397867s]
Dec 17 21:54:57.647: INFO: Created: latency-svc-8m2mx
Dec 17 21:54:57.650: INFO: Got endpoints: latency-svc-8m2mx [1.294770249s]
Dec 17 21:54:57.650: INFO: Latencies: [157.704681ms 161.202491ms 258.456436ms 316.509233ms 414.039835ms 480.044476ms 608.5198ms 663.504971ms 800.448686ms 861.892468ms 972.885663ms 1.036298994s 1.087698063s 1.126702062s 1.163078932s 1.171368575s 1.217936054s 1.224084256s 1.227155837s 1.231537322s 1.234312075s 1.234605085s 1.234853319s 1.235979163s 1.2531412s 1.255416161s 1.271481213s 1.277314165s 1.28880895s 1.290051207s 1.294770249s 1.301108507s 1.303611238s 1.304240149s 1.309992743s 1.318218575s 1.319439578s 1.325507643s 1.327239132s 1.328566437s 1.333518972s 1.344907568s 1.345282657s 1.347698763s 1.350050617s 1.350810119s 1.353076287s 1.356397867s 1.356975314s 1.357821284s 1.36339221s 1.366194354s 1.372462693s 1.373721393s 1.37862948s 1.392638964s 1.399698554s 1.404527475s 1.405917125s 1.407586701s 1.410338169s 1.411811272s 1.418017751s 1.423205445s 1.427206485s 1.435056934s 1.441928192s 1.443161543s 1.443481809s 1.444051021s 1.446613608s 1.452464192s 1.46268409s 1.471427416s 1.479486232s 1.482071355s 1.49163455s 1.498235548s 1.499346391s 1.502603351s 1.503732461s 1.50417643s 1.508987747s 1.510709309s 1.512226651s 1.51540501s 1.516121617s 1.516156154s 1.516217155s 1.52170733s 1.524921948s 1.530529927s 1.532856543s 1.539858716s 1.5423042s 1.543011861s 1.544478741s 1.545904463s 1.550282276s 1.561035349s 1.56426686s 1.566201223s 1.567239097s 1.569857221s 1.570641116s 1.571933658s 1.573911016s 1.578917458s 1.579097862s 1.58143935s 1.58319818s 1.586738495s 1.58749869s 1.588107065s 1.588841897s 1.59520284s 1.595949412s 1.598141592s 1.598800257s 1.59914814s 1.60717654s 1.616429558s 1.616892922s 1.621112698s 1.623178392s 1.624344892s 1.630366038s 1.63042047s 1.636635015s 1.640977973s 1.644790937s 1.646541418s 1.654345141s 1.667511032s 1.67076409s 1.671523601s 1.672541527s 1.679813269s 1.690191213s 1.698129943s 1.705144224s 1.705908083s 1.712860057s 1.714354067s 1.718223251s 1.718702034s 1.719471006s 1.735226375s 1.739173384s 1.746651041s 1.748873928s 1.764465944s 1.781156234s 1.790494349s 1.825724163s 1.83347256s 1.872953141s 1.873495763s 1.884191748s 1.891694475s 1.896177471s 1.898417227s 1.905641164s 1.918364871s 1.926690991s 1.949058089s 1.963704283s 1.96407176s 1.988591857s 1.990755636s 1.997492973s 2.0234695s 2.047355108s 2.068925723s 2.096025165s 2.097383094s 2.102250163s 2.171298329s 2.190171603s 2.216937636s 2.258654814s 2.269393518s 2.30779809s 2.351587262s 2.352176451s 2.352408819s 2.36471473s 2.389638435s 2.39238671s 2.397268245s 2.397763054s 2.399245913s 2.401771666s 2.412117891s 2.422937435s 2.435404629s 2.449298052s 2.467243113s 2.4776024s 2.490878211s]
Dec 17 21:54:57.651: INFO: 50 %ile: 1.56426686s
Dec 17 21:54:57.651: INFO: 90 %ile: 2.258654814s
Dec 17 21:54:57.651: INFO: 99 %ile: 2.4776024s
Dec 17 21:54:57.651: INFO: Total sample count: 200
[AfterEach] [sig-network] Service endpoints latency
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:54:57.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svc-latency-2582" for this suite.
Dec 17 21:55:39.704: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:55:39.827: INFO: namespace svc-latency-2582 deletion completed in 42.167697142s

• [SLOW TEST:71.432 seconds]
[sig-network] Service endpoints latency
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should not be very high  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
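The summary above reports 50/90/99 %ile figures over the 200 sorted samples. A minimal Go sketch of reading such percentiles off a sample set with plain nearest-rank indexing (illustrative only; the e2e framework's exact method may differ, and the sample values below are hypothetical):

package main

import (
	"fmt"
	"sort"
	"time"
)

// percentile returns the p-th percentile (0-100) of the samples using
// nearest-rank indexing on a sorted copy of the slice.
func percentile(samples []time.Duration, p int) time.Duration {
	sorted := append([]time.Duration(nil), samples...)
	sort.Slice(sorted, func(i, j int) bool { return sorted[i] < sorted[j] })
	idx := p * len(sorted) / 100 // nearest-rank index
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// Hypothetical samples; the run above collected 200 of them.
	samples := []time.Duration{
		480 * time.Millisecond,
		608 * time.Millisecond,
		1564 * time.Millisecond,
		2258 * time.Millisecond,
		2477 * time.Millisecond,
	}
	for _, p := range []int{50, 90, 99} {
		fmt.Printf("%d %%ile: %v\n", p, percentile(samples, p))
	}
}

For 200 samples this indexing picks the sorted entries at 0-based indices 100, 180, and 198, consistent with the figures the run prints.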
SSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:55:39.828: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating the pod
Dec 17 21:55:48.891: INFO: Successfully updated pod "annotationupdate8bb8d4ac-428e-4b39-af53-25fddae1126b"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:55:50.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-417" for this suite.
Dec 17 21:56:02.985: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:56:03.102: INFO: namespace projected-417 deletion completed in 12.144823852s

• [SLOW TEST:23.274 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
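The pod in this test mounts a projected downwardAPI volume whose file tracks metadata.annotations, so the annotation update logged above is reflected inside the running container without a restart. A minimal sketch with the corev1 types, using illustrative names rather than the suite's exact fixture:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// annotationPod builds a pod whose projected volume exposes the pod's
// annotations as a file; updating an annotation rewrites that file in place.
func annotationPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "annotationupdate-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "client-container", // illustrative
				Image: "busybox",
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "podinfo",
					MountPath: "/etc/podinfo",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "annotations",
									FieldRef: &corev1.ObjectFieldSelector{
										FieldPath: "metadata.annotations",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(annotationPod().Name) }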
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run rc 
  should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:56:03.104: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1439
[It] should create an rc from an image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 17 21:56:03.274: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5406'
Dec 17 21:56:03.495: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 17 21:56:03.496: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created
STEP: confirm that you can get logs from an rc
Dec 17 21:56:03.542: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-vcjsb]
Dec 17 21:56:03.542: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-vcjsb" in namespace "kubectl-5406" to be "running and ready"
Dec 17 21:56:03.616: INFO: Pod "e2e-test-httpd-rc-vcjsb": Phase="Pending", Reason="", readiness=false. Elapsed: 74.010299ms
Dec 17 21:56:05.630: INFO: Pod "e2e-test-httpd-rc-vcjsb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087201555s
Dec 17 21:56:07.642: INFO: Pod "e2e-test-httpd-rc-vcjsb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.099652388s
Dec 17 21:56:09.657: INFO: Pod "e2e-test-httpd-rc-vcjsb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.114675812s
Dec 17 21:56:11.667: INFO: Pod "e2e-test-httpd-rc-vcjsb": Phase="Running", Reason="", readiness=true. Elapsed: 8.124774138s
Dec 17 21:56:11.667: INFO: Pod "e2e-test-httpd-rc-vcjsb" satisfied condition "running and ready"
Dec 17 21:56:11.667: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-vcjsb]
Dec 17 21:56:11.668: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-5406'
Dec 17 21:56:11.965: INFO: stderr: ""
Dec 17 21:56:11.965: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.44.0.1. Set the 'ServerName' directive globally to suppress this message\n[Tue Dec 17 21:56:09.309521 2019] [mpm_event:notice] [pid 1:tid 140204039326568] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Tue Dec 17 21:56:09.309597 2019] [core:notice] [pid 1:tid 140204039326568] AH00094: Command line: 'httpd -D FOREGROUND'\n"
[AfterEach] Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1444
Dec 17 21:56:11.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5406'
Dec 17 21:56:12.156: INFO: stderr: ""
Dec 17 21:56:12.156: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:56:12.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5406" for this suite.
Dec 17 21:56:40.243: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:56:40.400: INFO: namespace kubectl-5406 deletion completed in 28.1894185s

• [SLOW TEST:37.297 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run rc
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1435
    should create an rc from an image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
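The "running and ready" wait above polls the pod every ~2s until its phase is Running and its Ready condition is True. A minimal client-go sketch of that loop, assuming the context-less Get signature of this era's client-go (newer releases take a ctx argument) and reusing the pod and namespace names from the log:

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitRunningReady polls until the pod is Running with Ready=true, the same
// condition each "Phase=... readiness=..." line above is reporting.
func waitRunningReady(cs kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pod.Status.Phase != corev1.PodRunning {
			return false, nil
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitRunningReady(cs, "kubectl-5406", "e2e-test-httpd-rc-vcjsb"); err != nil {
		panic(err)
	}
	fmt.Println("pod is running and ready")
}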
SSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:56:40.401: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 21:56:40.496: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Dec 17 21:56:44.299: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1283 create -f -'
Dec 17 21:56:47.276: INFO: stderr: ""
Dec 17 21:56:47.276: INFO: stdout: "e2e-test-crd-publish-openapi-376-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Dec 17 21:56:47.277: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1283 delete e2e-test-crd-publish-openapi-376-crds test-cr'
Dec 17 21:56:47.423: INFO: stderr: ""
Dec 17 21:56:47.423: INFO: stdout: "e2e-test-crd-publish-openapi-376-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
Dec 17 21:56:47.423: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1283 apply -f -'
Dec 17 21:56:47.961: INFO: stderr: ""
Dec 17 21:56:47.961: INFO: stdout: "e2e-test-crd-publish-openapi-376-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n"
Dec 17 21:56:47.961: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-1283 delete e2e-test-crd-publish-openapi-376-crds test-cr'
Dec 17 21:56:48.123: INFO: stderr: ""
Dec 17 21:56:48.123: INFO: stdout: "e2e-test-crd-publish-openapi-376-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR without validation schema
Dec 17 21:56:48.123: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-376-crds'
Dec 17 21:56:48.517: INFO: stderr: ""
Dec 17 21:56:48.517: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-376-crd\nVERSION:  crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n     \n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:56:52.305: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-1283" for this suite.
Dec 17 21:56:58.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:56:58.460: INFO: namespace crd-publish-openapi-1283 deletion completed in 6.147835016s

• [SLOW TEST:18.060 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD without validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
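The CRD under test carries no validation schema, which is why kubectl explain prints an empty DESCRIPTION above and client-side validation lets requests with arbitrary unknown properties through. A minimal sketch of such a schema-less CRD using the apiextensions v1beta1 types a v1.16 server still accepts (group and names are illustrative, not the generated test names):

package main

import (
	"fmt"

	apiextv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// schemalessCRD builds a CRD whose Validation field is left nil, the case
// this test exercises: the server then publishes an empty OpenAPI description.
func schemalessCRD() *apiextv1beta1.CustomResourceDefinition {
	return &apiextv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextv1beta1.CustomResourceDefinitionSpec{
			Group:   "example.com",
			Version: "v1",
			Scope:   apiextv1beta1.NamespaceScoped,
			Names: apiextv1beta1.CustomResourceDefinitionNames{
				Plural: "foos",
				Kind:   "Foo",
			},
			// Validation deliberately omitted: no structural schema.
		},
	}
}

func main() { fmt.Println(schemalessCRD().Name) }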
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:56:58.461: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 17 21:56:58.660: INFO: Waiting up to 5m0s for pod "pod-d78209d8-c0ae-4fee-bb88-86c6ff516646" in namespace "emptydir-5970" to be "success or failure"
Dec 17 21:56:58.677: INFO: Pod "pod-d78209d8-c0ae-4fee-bb88-86c6ff516646": Phase="Pending", Reason="", readiness=false. Elapsed: 16.811697ms
Dec 17 21:57:00.684: INFO: Pod "pod-d78209d8-c0ae-4fee-bb88-86c6ff516646": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024328903s
Dec 17 21:57:02.701: INFO: Pod "pod-d78209d8-c0ae-4fee-bb88-86c6ff516646": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041076886s
Dec 17 21:57:04.711: INFO: Pod "pod-d78209d8-c0ae-4fee-bb88-86c6ff516646": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051190539s
Dec 17 21:57:06.723: INFO: Pod "pod-d78209d8-c0ae-4fee-bb88-86c6ff516646": Phase="Running", Reason="", readiness=true. Elapsed: 8.063080692s
Dec 17 21:57:08.733: INFO: Pod "pod-d78209d8-c0ae-4fee-bb88-86c6ff516646": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073280233s
STEP: Saw pod success
Dec 17 21:57:08.734: INFO: Pod "pod-d78209d8-c0ae-4fee-bb88-86c6ff516646" satisfied condition "success or failure"
Dec 17 21:57:08.739: INFO: Trying to get logs from node jerma-node pod pod-d78209d8-c0ae-4fee-bb88-86c6ff516646 container test-container: 
STEP: delete the pod
Dec 17 21:57:08.819: INFO: Waiting for pod pod-d78209d8-c0ae-4fee-bb88-86c6ff516646 to disappear
Dec 17 21:57:08.832: INFO: Pod pod-d78209d8-c0ae-4fee-bb88-86c6ff516646 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:57:08.832: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5970" for this suite.
Dec 17 21:57:14.901: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:57:15.057: INFO: namespace emptydir-5970 deletion completed in 6.218235997s

• [SLOW TEST:16.596 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
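The "(non-root,0666,tmpfs)" case runs a short-lived pod as a non-root user against an emptyDir volume backed by memory (tmpfs) and verifies a 0666 file mode on it. A minimal sketch of an equivalent pod, with an illustrative busybox command standing in for the suite's mounttest image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func emptyDirTmpfsPod() *corev1.Pod {
	nonRoot := int64(1001) // illustrative non-root UID
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &nonRoot},
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox",
				// Create a 0666 file on the tmpfs mount, then report its mode.
				Command: []string{"sh", "-c",
					"touch /mnt/f && chmod 0666 /mnt/f && stat -c %a /mnt/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/mnt"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory, // tmpfs backing
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(emptyDirTmpfsPod().Name) }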
SSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:57:15.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a watch on configmaps with label A
STEP: creating a watch on configmaps with label B
STEP: creating a watch on configmaps with label A or B
STEP: creating a configmap with label A and ensuring the correct watchers observe the notification
Dec 17 21:57:15.171: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2466 /api/v1/namespaces/watch-2466/configmaps/e2e-watch-test-configmap-a e3ed8bdc-2aeb-495a-b66e-f29c17adc372 9145476 0 2019-12-17 21:57:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 17 21:57:15.172: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2466 /api/v1/namespaces/watch-2466/configmaps/e2e-watch-test-configmap-a e3ed8bdc-2aeb-495a-b66e-f29c17adc372 9145476 0 2019-12-17 21:57:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: modifying configmap A and ensuring the correct watchers observe the notification
Dec 17 21:57:25.199: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2466 /api/v1/namespaces/watch-2466/configmaps/e2e-watch-test-configmap-a e3ed8bdc-2aeb-495a-b66e-f29c17adc372 9145490 0 2019-12-17 21:57:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 17 21:57:25.200: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2466 /api/v1/namespaces/watch-2466/configmaps/e2e-watch-test-configmap-a e3ed8bdc-2aeb-495a-b66e-f29c17adc372 9145490 0 2019-12-17 21:57:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying configmap A again and ensuring the correct watchers observe the notification
Dec 17 21:57:35.467: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2466 /api/v1/namespaces/watch-2466/configmaps/e2e-watch-test-configmap-a e3ed8bdc-2aeb-495a-b66e-f29c17adc372 9145505 0 2019-12-17 21:57:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 17 21:57:35.468: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2466 /api/v1/namespaces/watch-2466/configmaps/e2e-watch-test-configmap-a e3ed8bdc-2aeb-495a-b66e-f29c17adc372 9145505 0 2019-12-17 21:57:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: deleting configmap A and ensuring the correct watchers observe the notification
Dec 17 21:57:45.484: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2466 /api/v1/namespaces/watch-2466/configmaps/e2e-watch-test-configmap-a e3ed8bdc-2aeb-495a-b66e-f29c17adc372 9145519 0 2019-12-17 21:57:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 17 21:57:45.484: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a  watch-2466 /api/v1/namespaces/watch-2466/configmaps/e2e-watch-test-configmap-a e3ed8bdc-2aeb-495a-b66e-f29c17adc372 9145519 0 2019-12-17 21:57:15 +0000 UTC   map[watch-this-configmap:multiple-watchers-A] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
STEP: creating a configmap with label B and ensuring the correct watchers observe the notification
Dec 17 21:57:55.508: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2466 /api/v1/namespaces/watch-2466/configmaps/e2e-watch-test-configmap-b 891221dd-1203-4178-88be-50b87b44aa13 9145534 0 2019-12-17 21:57:55 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 17 21:57:55.509: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2466 /api/v1/namespaces/watch-2466/configmaps/e2e-watch-test-configmap-b 891221dd-1203-4178-88be-50b87b44aa13 9145534 0 2019-12-17 21:57:55 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
STEP: deleting configmap B and ensuring the correct watchers observe the notification
Dec 17 21:58:05.524: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2466 /api/v1/namespaces/watch-2466/configmaps/e2e-watch-test-configmap-b 891221dd-1203-4178-88be-50b87b44aa13 9145548 0 2019-12-17 21:57:55 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 17 21:58:05.525: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b  watch-2466 /api/v1/namespaces/watch-2466/configmaps/e2e-watch-test-configmap-b 891221dd-1203-4178-88be-50b87b44aa13 9145548 0 2019-12-17 21:57:55 +0000 UTC   map[watch-this-configmap:multiple-watchers-B] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:58:15.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2466" for this suite.
Dec 17 21:58:21.627: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:58:21.804: INFO: namespace watch-2466 deletion completed in 6.257108209s

• [SLOW TEST:66.747 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe add, update, and delete watch notifications on configmaps [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
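Each "Got : ADDED/MODIFIED/DELETED" line above is one event delivered on a label-selected watch; configmap A appears twice per change because both the "label A" watcher and the "label A or B" watcher match it. A minimal client-go sketch of one such watcher, assuming this era's context-less Watch signature:

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Watch only configmaps carrying label A, like the "label A" watcher above.
	w, err := cs.CoreV1().ConfigMaps("watch-2466").Watch(metav1.ListOptions{
		LabelSelector: "watch-this-configmap=multiple-watchers-A",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()

	// Each ADDED/MODIFIED/DELETED notification arrives on this channel.
	for event := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", event.Type, event.Object)
	}
}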
SSSSS
------------------------------
[sig-node] Downward API 
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:58:21.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward api env vars
Dec 17 21:58:22.331: INFO: Waiting up to 5m0s for pod "downward-api-1b04e382-e545-4bb9-a13a-74b1b17b3e61" in namespace "downward-api-20" to be "success or failure"
Dec 17 21:58:22.398: INFO: Pod "downward-api-1b04e382-e545-4bb9-a13a-74b1b17b3e61": Phase="Pending", Reason="", readiness=false. Elapsed: 66.914858ms
Dec 17 21:58:24.408: INFO: Pod "downward-api-1b04e382-e545-4bb9-a13a-74b1b17b3e61": Phase="Pending", Reason="", readiness=false. Elapsed: 2.077242705s
Dec 17 21:58:26.451: INFO: Pod "downward-api-1b04e382-e545-4bb9-a13a-74b1b17b3e61": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119804394s
Dec 17 21:58:28.463: INFO: Pod "downward-api-1b04e382-e545-4bb9-a13a-74b1b17b3e61": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132326357s
Dec 17 21:58:30.475: INFO: Pod "downward-api-1b04e382-e545-4bb9-a13a-74b1b17b3e61": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.143860578s
STEP: Saw pod success
Dec 17 21:58:30.475: INFO: Pod "downward-api-1b04e382-e545-4bb9-a13a-74b1b17b3e61" satisfied condition "success or failure"
Dec 17 21:58:30.481: INFO: Trying to get logs from node jerma-node pod downward-api-1b04e382-e545-4bb9-a13a-74b1b17b3e61 container dapi-container: 
STEP: delete the pod
Dec 17 21:58:30.618: INFO: Waiting for pod downward-api-1b04e382-e545-4bb9-a13a-74b1b17b3e61 to disappear
Dec 17 21:58:30.708: INFO: Pod downward-api-1b04e382-e545-4bb9-a13a-74b1b17b3e61 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:58:30.708: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-20" for this suite.
Dec 17 21:58:36.759: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:58:36.950: INFO: namespace downward-api-20 deletion completed in 6.230889025s

• [SLOW TEST:15.145 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
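The downward API test wires container resource fields into environment variables via resourceFieldRef. A minimal sketch of the four env vars the test name mentions (the variable names are illustrative, not the suite's exact ones):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// resourceEnv wires a container resource field into an env var, which is
// what the test above checks for limits.cpu/memory and requests.cpu/memory.
func resourceEnv(name, resource string) corev1.EnvVar {
	return corev1.EnvVar{
		Name: name,
		ValueFrom: &corev1.EnvVarSource{
			ResourceFieldRef: &corev1.ResourceFieldSelector{
				Resource: resource, // e.g. "limits.cpu"
			},
		},
	}
}

func main() {
	envs := []corev1.EnvVar{
		resourceEnv("CPU_LIMIT", "limits.cpu"),
		resourceEnv("MEMORY_LIMIT", "limits.memory"),
		resourceEnv("CPU_REQUEST", "requests.cpu"),
		resourceEnv("MEMORY_REQUEST", "requests.memory"),
	}
	for _, e := range envs {
		fmt.Println(e.Name, "<-", e.ValueFrom.ResourceFieldRef.Resource)
	}
}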
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:58:36.953: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 17 21:58:51.240: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 21:58:51.255: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 21:58:53.256: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 21:58:53.265: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 21:58:55.256: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 21:58:55.271: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 21:58:57.256: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 21:58:57.272: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 21:58:59.256: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 21:58:59.264: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 21:59:01.256: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 21:59:01.269: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 21:59:03.256: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 21:59:03.267: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 21:59:05.256: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 21:59:05.267: INFO: Pod pod-with-prestop-http-hook still exists
Dec 17 21:59:07.257: INFO: Waiting for pod pod-with-prestop-http-hook to disappear
Dec 17 21:59:07.269: INFO: Pod pod-with-prestop-http-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 21:59:07.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-3389" for this suite.
Dec 17 21:59:33.329: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 21:59:33.432: INFO: namespace container-lifecycle-hook-3389 deletion completed in 26.12966782s

• [SLOW TEST:56.479 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
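The preStop hook here is an HTTP GET that the kubelet fires at the handler pod before killing the container; the "check prestop hook" step then confirms the handler saw the request. A minimal sketch of a pod carrying such a hook, using the corev1.Handler type of this release (later renamed LifecycleHandler); the image, path, and port are illustrative:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

// preStopHTTPPod attaches an HTTP-GET preStop hook; on pod deletion the
// kubelet calls the URL before the container is terminated.
func preStopHTTPPod(handlerIP string) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "k8s.gcr.io/pause", // illustrative image
				Lifecycle: &corev1.Lifecycle{
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop",
							Host: handlerIP,
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
}

func main() { fmt.Println(preStopHTTPPod("10.44.0.1").Name) }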
SSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 21:59:33.432: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77
STEP: Creating service test in namespace statefulset-6396
[It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating stateful set ss in namespace statefulset-6396
STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6396
Dec 17 21:59:33.598: INFO: Found 0 stateful pods, waiting for 1
Dec 17 21:59:43.616: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod
Dec 17 21:59:43.623: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6396 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 17 21:59:44.167: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 17 21:59:44.168: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 17 21:59:44.168: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 17 21:59:44.183: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 17 21:59:54.195: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 17 21:59:54.195: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 21:59:54.271: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 17 21:59:54.271: INFO: ss-0  jerma-node  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:44 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:33 +0000 UTC  }]
Dec 17 21:59:54.271: INFO: 
Dec 17 21:59:54.271: INFO: StatefulSet ss has not reached scale 3, at 1
Dec 17 21:59:55.880: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.968184477s
Dec 17 21:59:56.992: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.358878274s
Dec 17 21:59:58.053: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.24716138s
Dec 17 21:59:59.064: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.186854782s
Dec 17 22:00:01.472: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.175773698s
Dec 17 22:00:02.864: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.766849784s
Dec 17 22:00:03.889: INFO: Verifying statefulset ss doesn't scale past 3 for another 375.312611ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6396
Dec 17 22:00:04.905: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6396 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 17 22:00:05.287: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Dec 17 22:00:05.287: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 17 22:00:05.287: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 17 22:00:05.288: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6396 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 17 22:00:05.712: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 17 22:00:05.713: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 17 22:00:05.713: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 17 22:00:05.713: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6396 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 17 22:00:06.173: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\n"
Dec 17 22:00:06.173: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 17 22:00:06.173: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 17 22:00:06.181: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 22:00:06.181: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 22:00:06.181: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Scale down will not halt with an unhealthy stateful pod
Dec 17 22:00:06.185: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6396 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 17 22:00:06.682: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 17 22:00:06.682: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 17 22:00:06.682: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 17 22:00:06.683: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6396 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 17 22:00:07.029: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 17 22:00:07.030: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 17 22:00:07.030: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 17 22:00:07.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6396 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 17 22:00:07.432: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 17 22:00:07.432: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 17 22:00:07.432: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 17 22:00:07.432: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 22:00:07.488: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 1
Dec 17 22:00:17.502: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 17 22:00:17.502: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 17 22:00:17.502: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 17 22:00:17.526: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 17 22:00:17.526: INFO: ss-0  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:33 +0000 UTC  }]
Dec 17 22:00:17.526: INFO: ss-1  jerma-server-4b75xjbddvit  Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:54 +0000 UTC  }]
Dec 17 22:00:17.526: INFO: ss-2  jerma-node                 Running         [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:54 +0000 UTC  }]
Dec 17 22:00:17.526: INFO: 
Dec 17 22:00:17.526: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 17 22:00:19.508: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 17 22:00:19.508: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:33 +0000 UTC  }]
Dec 17 22:00:19.509: INFO: ss-1  jerma-server-4b75xjbddvit  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:54 +0000 UTC  }]
Dec 17 22:00:19.509: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:54 +0000 UTC  }]
Dec 17 22:00:19.509: INFO: 
Dec 17 22:00:19.509: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 17 22:00:20 - 22:00:23: INFO: (four more identical status polls omitted: ss-0, ss-1, ss-2 all Running with 30s grace and Ready=False; output differs only in timestamps)
Dec 17 22:00:25.044: INFO: POD   NODE                       PHASE    GRACE  CONDITIONS
Dec 17 22:00:25.044: INFO: ss-0  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:33 +0000 UTC  }]
Dec 17 22:00:25.044: INFO: ss-1  jerma-server-4b75xjbddvit  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:54 +0000 UTC  }]
Dec 17 22:00:25.044: INFO: ss-2  jerma-node                 Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:54 +0000 UTC  }]
Dec 17 22:00:25.044: INFO: 
Dec 17 22:00:25.044: INFO: StatefulSet ss has not reached scale 0, at 3
Dec 17 22:00:26.054: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 17 22:00:26.054: INFO: ss-0  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:33 +0000 UTC  }]
Dec 17 22:00:26.054: INFO: ss-2  jerma-node  Running  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:54 +0000 UTC  }]
Dec 17 22:00:26.054: INFO: 
Dec 17 22:00:26.054: INFO: StatefulSet ss has not reached scale 0, at 2
Dec 17 22:00:27.461: INFO: POD   NODE        PHASE    GRACE  CONDITIONS
Dec 17 22:00:27.461: INFO: ss-0  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:33 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:33 +0000 UTC  }]
Dec 17 22:00:27.461: INFO: ss-2  jerma-node  Pending  30s    [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:54 +0000 UTC  } {Ready False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2019-12-17 22:00:07 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2019-12-17 21:59:54 +0000 UTC  }]
Dec 17 22:00:27.461: INFO: 
Dec 17 22:00:27.461: INFO: StatefulSet ss has not reached scale 0, at 2
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-6396
Dec 17 22:00:28.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6396 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 17 22:00:28.721: INFO: rc: 1
Dec 17 22:00:28.722: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6396 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    error: unable to upgrade connection: container not found ("webserver")
 []  0xc001d36d80 exit status 1   true [0xc0011c43b8 0xc0011c43d0 0xc0011c43e8] [0xc0011c43b8 0xc0011c43d0 0xc0011c43e8] [0xc0011c43c8 0xc0011c43e0] [0x10ef580 0x10ef580] 0xc002fc30e0 }:
Command stdout:

stderr:
error: unable to upgrade connection: container not found ("webserver")

error:
exit status 1
Dec 17 22:00:38.723: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6396 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 17 22:00:38.887: INFO: rc: 1
Dec 17 22:00:38.888: INFO: Waiting 10s to retry failed RunHostCmd: error running &{/usr/local/bin/kubectl [kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6396 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true] []    Error from server (NotFound): pods "ss-0" not found
 []  0xc001d36f30 exit status 1   true [0xc0011c4418 0xc0011c4430 0xc0011c4448] [0xc0011c4418 0xc0011c4430 0xc0011c4448] [0xc0011c4428 0xc0011c4440] [0x10ef580 0x10ef580] 0xc002fc3c20 }:
Command stdout:

stderr:
Error from server (NotFound): pods "ss-0" not found

error:
exit status 1
Dec 17 22:00:48 - 22:05:24: INFO: (RunHostCmd retried every 10s; all 28 further attempts failed with 'Error from server (NotFound): pods "ss-0" not found'; repeated output omitted, identical apart from timestamps and pointer addresses)
Dec 17 22:05:34.833: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6396 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 17 22:05:34.980: INFO: rc: 1
Dec 17 22:05:34.981: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: 
Dec 17 22:05:34.981: INFO: Scaling statefulset ss to 0
Dec 17 22:05:34.991: INFO: Waiting for statefulset status.replicas updated to 0
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
Dec 17 22:05:34.994: INFO: Deleting all statefulset in ns statefulset-6396
Dec 17 22:05:34.996: INFO: Scaling statefulset ss to 0
Dec 17 22:05:35.006: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 22:05:35.008: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:05:35.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-6396" for this suite.
Dec 17 22:05:41.108: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:05:41.271: INFO: namespace statefulset-6396 deletion completed in 6.236895506s

• [SLOW TEST:367.839 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
    Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
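For reference, the "burst scaling" this test exercises corresponds to a StatefulSet running with podManagementPolicy: Parallel, which lets scale up and scale down proceed without waiting for lower-ordinal pods to become Ready; the default OrderedReady policy would have blocked behind the deliberately unhealthy pods above. A hedged sketch of the shape, not the framework's generated spec:

# Parallel pod management is what allows scaling to continue while
# some pods report Ready=false.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: ss-burst
spec:
  serviceName: test
  podManagementPolicy: Parallel
  replicas: 3
  selector:
    matchLabels:
      app: ss-burst
  template:
    metadata:
      labels:
        app: ss-burst
    spec:
      containers:
      - name: webserver
        image: docker.io/library/httpd:2.4.38-alpine
EOF
# All three replicas are created at once; scaling down removes them in
# parallel too, even if some are unready:
kubectl scale statefulset ss-burst --replicas=0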
SSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:05:41.272: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-test-volume-cf362d9f-fab3-4d40-827d-f55e1840173b
STEP: Creating a pod to test consume configMaps
Dec 17 22:05:41.484: INFO: Waiting up to 5m0s for pod "pod-configmaps-e6c18b74-88c4-4c76-8b3f-c41429339375" in namespace "configmap-2462" to be "success or failure"
Dec 17 22:05:41.559: INFO: Pod "pod-configmaps-e6c18b74-88c4-4c76-8b3f-c41429339375": Phase="Pending", Reason="", readiness=false. Elapsed: 74.435391ms
Dec 17 22:05:43.572: INFO: Pod "pod-configmaps-e6c18b74-88c4-4c76-8b3f-c41429339375": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087962784s
Dec 17 22:05:45.586: INFO: Pod "pod-configmaps-e6c18b74-88c4-4c76-8b3f-c41429339375": Phase="Pending", Reason="", readiness=false. Elapsed: 4.102171104s
Dec 17 22:05:47.601: INFO: Pod "pod-configmaps-e6c18b74-88c4-4c76-8b3f-c41429339375": Phase="Pending", Reason="", readiness=false. Elapsed: 6.116420602s
Dec 17 22:05:49.607: INFO: Pod "pod-configmaps-e6c18b74-88c4-4c76-8b3f-c41429339375": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.122623989s
STEP: Saw pod success
Dec 17 22:05:49.607: INFO: Pod "pod-configmaps-e6c18b74-88c4-4c76-8b3f-c41429339375" satisfied condition "success or failure"
Dec 17 22:05:49.611: INFO: Trying to get logs from node jerma-node pod pod-configmaps-e6c18b74-88c4-4c76-8b3f-c41429339375 container configmap-volume-test: 
STEP: delete the pod
Dec 17 22:05:49.771: INFO: Waiting for pod pod-configmaps-e6c18b74-88c4-4c76-8b3f-c41429339375 to disappear
Dec 17 22:05:49.784: INFO: Pod pod-configmaps-e6c18b74-88c4-4c76-8b3f-c41429339375 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:05:49.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2462" for this suite.
Dec 17 22:05:55.837: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:05:55.983: INFO: namespace configmap-2462 deletion completed in 6.189673938s

• [SLOW TEST:14.710 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
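The pattern verified above is that two volumes in one pod can reference the same ConfigMap, and each mount exposes the same keys as files. A minimal sketch; the names are illustrative, not the test's generated ones:

kubectl create configmap cm-demo --from-literal=data-1=value-1

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cm-two-volumes
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox
    command: ['sh', '-c', 'cat /etc/cm-a/data-1 /etc/cm-b/data-1']
    volumeMounts:
    - name: cm-a
      mountPath: /etc/cm-a
    - name: cm-b
      mountPath: /etc/cm-b
  volumes:
  - name: cm-a
    configMap:
      name: cm-demo
  - name: cm-b
    configMap:
      name: cm-demo
EOF
# The pod should exit Succeeded after printing value-1 twice:
kubectl logs cm-two-volumes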
SSSSS
------------------------------
[sig-node] Downward API 
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:05:55.983: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward api env vars
Dec 17 22:05:56.147: INFO: Waiting up to 5m0s for pod "downward-api-c79399d0-58ae-43b0-ab5d-1245cc4a4984" in namespace "downward-api-5391" to be "success or failure"
Dec 17 22:05:56.165: INFO: Pod "downward-api-c79399d0-58ae-43b0-ab5d-1245cc4a4984": Phase="Pending", Reason="", readiness=false. Elapsed: 18.248935ms
Dec 17 22:05:58.220: INFO: Pod "downward-api-c79399d0-58ae-43b0-ab5d-1245cc4a4984": Phase="Pending", Reason="", readiness=false. Elapsed: 2.073444419s
Dec 17 22:06:00.266: INFO: Pod "downward-api-c79399d0-58ae-43b0-ab5d-1245cc4a4984": Phase="Pending", Reason="", readiness=false. Elapsed: 4.119550526s
Dec 17 22:06:02.284: INFO: Pod "downward-api-c79399d0-58ae-43b0-ab5d-1245cc4a4984": Phase="Pending", Reason="", readiness=false. Elapsed: 6.137717788s
Dec 17 22:06:04.291: INFO: Pod "downward-api-c79399d0-58ae-43b0-ab5d-1245cc4a4984": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.144232232s
STEP: Saw pod success
Dec 17 22:06:04.291: INFO: Pod "downward-api-c79399d0-58ae-43b0-ab5d-1245cc4a4984" satisfied condition "success or failure"
Dec 17 22:06:04.295: INFO: Trying to get logs from node jerma-node pod downward-api-c79399d0-58ae-43b0-ab5d-1245cc4a4984 container dapi-container: 
STEP: delete the pod
Dec 17 22:06:04.418: INFO: Waiting for pod downward-api-c79399d0-58ae-43b0-ab5d-1245cc4a4984 to disappear
Dec 17 22:06:04.426: INFO: Pod downward-api-c79399d0-58ae-43b0-ab5d-1245cc4a4984 no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:06:04.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-5391" for this suite.
Dec 17 22:06:10.464: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:06:10.612: INFO: namespace downward-api-5391 deletion completed in 6.175357458s

• [SLOW TEST:14.630 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod UID as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
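What the Downward API test above checks is that a pod's metadata.uid can be projected into a container as an environment variable. A minimal sketch; pod and variable names are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: dapi-uid-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox
    command: ['sh', '-c', 'echo "POD_UID=$POD_UID"']
    env:
    - name: POD_UID
      valueFrom:
        fieldRef:
          fieldPath: metadata.uid
EOF
# After the pod completes, the UID assigned by the API server is in the log:
kubectl logs dapi-uid-demo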
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:06:10.614: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 17 22:06:10.708: INFO: Waiting up to 5m0s for pod "pod-0e278e8c-f068-42a5-93f7-d56b3cfbde05" in namespace "emptydir-5646" to be "success or failure"
Dec 17 22:06:10.791: INFO: Pod "pod-0e278e8c-f068-42a5-93f7-d56b3cfbde05": Phase="Pending", Reason="", readiness=false. Elapsed: 82.991991ms
Dec 17 22:06:12.800: INFO: Pod "pod-0e278e8c-f068-42a5-93f7-d56b3cfbde05": Phase="Pending", Reason="", readiness=false. Elapsed: 2.091410223s
Dec 17 22:06:14.807: INFO: Pod "pod-0e278e8c-f068-42a5-93f7-d56b3cfbde05": Phase="Pending", Reason="", readiness=false. Elapsed: 4.0987589s
Dec 17 22:06:16.817: INFO: Pod "pod-0e278e8c-f068-42a5-93f7-d56b3cfbde05": Phase="Pending", Reason="", readiness=false. Elapsed: 6.108941855s
Dec 17 22:06:18.830: INFO: Pod "pod-0e278e8c-f068-42a5-93f7-d56b3cfbde05": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.122135274s
STEP: Saw pod success
Dec 17 22:06:18.831: INFO: Pod "pod-0e278e8c-f068-42a5-93f7-d56b3cfbde05" satisfied condition "success or failure"
Dec 17 22:06:18.835: INFO: Trying to get logs from node jerma-node pod pod-0e278e8c-f068-42a5-93f7-d56b3cfbde05 container test-container: 
STEP: delete the pod
Dec 17 22:06:18.908: INFO: Waiting for pod pod-0e278e8c-f068-42a5-93f7-d56b3cfbde05 to disappear
Dec 17 22:06:18.915: INFO: Pod pod-0e278e8c-f068-42a5-93f7-d56b3cfbde05 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:06:18.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5646" for this suite.
Dec 17 22:06:24.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:06:25.074: INFO: namespace emptydir-5646 deletion completed in 6.152975122s

• [SLOW TEST:14.460 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
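The emptyDir variant above writes a 0644 file as a non-root user on the default medium (node disk, as opposed to medium: Memory). The kubelet creates emptyDir directories world-writable, which is what lets a non-root container write into them. A sketch of the same check; the image and UID are illustrative:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-0644-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001        # non-root, as in the (non-root,...) test variant
  containers:
  - name: test-container
    image: busybox
    command: ['sh', '-c', 'echo hello > /ed/f && chmod 0644 /ed/f && ls -l /ed/f']
    volumeMounts:
    - name: ed
      mountPath: /ed
  volumes:
  - name: ed
    emptyDir: {}           # default medium; use {medium: Memory} for tmpfs
EOF
kubectl logs emptydir-0644-demo   # expect -rw-r--r-- ... /ed/f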
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:06:25.075: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
Dec 17 22:06:25.156: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 17 22:06:25.183: INFO: Waiting for terminating namespaces to be deleted...
Dec 17 22:06:25.186: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Dec 17 22:06:25.192: INFO: weave-net-srfjj from kube-system started at 2019-12-17 21:23:16 +0000 UTC (2 container statuses recorded)
Dec 17 22:06:25.192: INFO: 	Container weave ready: true, restart count 0
Dec 17 22:06:25.192: INFO: 	Container weave-npc ready: true, restart count 0
Dec 17 22:06:25.192: INFO: kube-proxy-jcjl4 from kube-system started at 2019-10-12 13:47:49 +0000 UTC (1 container statuses recorded)
Dec 17 22:06:25.192: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 17 22:06:25.192: INFO: 
Logging pods the kubelet thinks are on node jerma-server-4b75xjbddvit before test
Dec 17 22:06:25.215: INFO: kube-proxy-bdcvr from kube-system started at 2019-12-13 09:08:20 +0000 UTC (1 container statuses recorded)
Dec 17 22:06:25.215: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 17 22:06:25.215: INFO: kube-scheduler-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:42 +0000 UTC (1 container statuses recorded)
Dec 17 22:06:25.215: INFO: 	Container kube-scheduler ready: true, restart count 11
Dec 17 22:06:25.215: INFO: coredns-5644d7b6d9-xvlxj from kube-system started at 2019-12-14 16:49:52 +0000 UTC (1 container statuses recorded)
Dec 17 22:06:25.215: INFO: 	Container coredns ready: true, restart count 0
Dec 17 22:06:25.215: INFO: etcd-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:37 +0000 UTC (1 container statuses recorded)
Dec 17 22:06:25.215: INFO: 	Container etcd ready: true, restart count 1
Dec 17 22:06:25.215: INFO: kube-controller-manager-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:40 +0000 UTC (1 container statuses recorded)
Dec 17 22:06:25.215: INFO: 	Container kube-controller-manager ready: true, restart count 8
Dec 17 22:06:25.215: INFO: coredns-5644d7b6d9-n9kkw from kube-system started at 2019-11-10 16:39:08 +0000 UTC (0 container statuses recorded)
Dec 17 22:06:25.216: INFO: kube-apiserver-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:38 +0000 UTC (1 container statuses recorded)
Dec 17 22:06:25.216: INFO: 	Container kube-apiserver ready: true, restart count 1
Dec 17 22:06:25.216: INFO: weave-net-gsjjk from kube-system started at 2019-12-13 09:16:56 +0000 UTC (2 container statuses recorded)
Dec 17 22:06:25.216: INFO: 	Container weave ready: true, restart count 0
Dec 17 22:06:25.216: INFO: 	Container weave-npc ready: true, restart count 0
Dec 17 22:06:25.216: INFO: coredns-5644d7b6d9-rqwzj from kube-system started at 2019-11-10 18:03:38 +0000 UTC (0 container statuses recorded)
Dec 17 22:06:25.216: INFO: coredns-5644d7b6d9-9sj58 from kube-system started at 2019-12-14 15:12:12 +0000 UTC (1 container statuses recorded)
Dec 17 22:06:25.216: INFO: 	Container coredns ready: true, restart count 0
[It] validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: verifying the node has the label node jerma-node
STEP: verifying the node has the label node jerma-server-4b75xjbddvit
Dec 17 22:06:25.330: INFO: Pod coredns-5644d7b6d9-9sj58 requesting resource cpu=100m on Node jerma-server-4b75xjbddvit
Dec 17 22:06:25.330: INFO: Pod coredns-5644d7b6d9-xvlxj requesting resource cpu=100m on Node jerma-server-4b75xjbddvit
Dec 17 22:06:25.330: INFO: Pod etcd-jerma-server-4b75xjbddvit requesting resource cpu=0m on Node jerma-server-4b75xjbddvit
Dec 17 22:06:25.330: INFO: Pod kube-apiserver-jerma-server-4b75xjbddvit requesting resource cpu=250m on Node jerma-server-4b75xjbddvit
Dec 17 22:06:25.330: INFO: Pod kube-controller-manager-jerma-server-4b75xjbddvit requesting resource cpu=200m on Node jerma-server-4b75xjbddvit
Dec 17 22:06:25.330: INFO: Pod kube-proxy-bdcvr requesting resource cpu=0m on Node jerma-server-4b75xjbddvit
Dec 17 22:06:25.330: INFO: Pod kube-proxy-jcjl4 requesting resource cpu=0m on Node jerma-node
Dec 17 22:06:25.330: INFO: Pod kube-scheduler-jerma-server-4b75xjbddvit requesting resource cpu=100m on Node jerma-server-4b75xjbddvit
Dec 17 22:06:25.330: INFO: Pod weave-net-gsjjk requesting resource cpu=20m on Node jerma-server-4b75xjbddvit
Dec 17 22:06:25.330: INFO: Pod weave-net-srfjj requesting resource cpu=20m on Node jerma-node
STEP: Starting Pods to consume most of the cluster CPU.
Dec 17 22:06:25.330: INFO: Creating a pod which consumes cpu=2786m on Node jerma-node
Dec 17 22:06:25.342: INFO: Creating a pod which consumes cpu=2261m on Node jerma-server-4b75xjbddvit
STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-32b0dcf3-66b2-477e-9b1e-4358510c42e0.15e147e323bd9d46], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5800/filler-pod-32b0dcf3-66b2-477e-9b1e-4358510c42e0 to jerma-server-4b75xjbddvit]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-32b0dcf3-66b2-477e-9b1e-4358510c42e0.15e147e44aa5ba57], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-32b0dcf3-66b2-477e-9b1e-4358510c42e0.15e147e540272e9c], Reason = [Created], Message = [Created container filler-pod-32b0dcf3-66b2-477e-9b1e-4358510c42e0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-32b0dcf3-66b2-477e-9b1e-4358510c42e0.15e147e55f960dea], Reason = [Started], Message = [Started container filler-pod-32b0dcf3-66b2-477e-9b1e-4358510c42e0]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4dd8be0c-01c1-4b62-b43f-284df56a7cb6.15e147e32136b2a4], Reason = [Scheduled], Message = [Successfully assigned sched-pred-5800/filler-pod-4dd8be0c-01c1-4b62-b43f-284df56a7cb6 to jerma-node]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4dd8be0c-01c1-4b62-b43f-284df56a7cb6.15e147e41d5d4ef8], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4dd8be0c-01c1-4b62-b43f-284df56a7cb6.15e147e50e3d7e16], Reason = [Created], Message = [Created container filler-pod-4dd8be0c-01c1-4b62-b43f-284df56a7cb6]
STEP: Considering event: 
Type = [Normal], Name = [filler-pod-4dd8be0c-01c1-4b62-b43f-284df56a7cb6.15e147e52d223b12], Reason = [Started], Message = [Started container filler-pod-4dd8be0c-01c1-4b62-b43f-284df56a7cb6]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e147e57852ba11], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: Considering event: 
Type = [Warning], Name = [additional-pod.15e147e57c154c32], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 Insufficient cpu.]
STEP: removing the label node off the node jerma-node
STEP: verifying the node doesn't have the label node
STEP: removing the label node off the node jerma-server-4b75xjbddvit
STEP: verifying the node doesn't have the label node
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:06:36.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-5800" for this suite.
Dec 17 22:06:44.802: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:06:45.051: INFO: namespace sched-pred-5800 deletion completed in 8.384135997s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78

• [SLOW TEST:19.978 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates resource limits of pods that are allowed to run  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
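
The mechanics behind the filler pods above: the test labels each node (the "verifying the node has the label node ..." steps), sums the CPU already requested by the pods it logged, and creates one pause pod per node requesting the remaining allocatable CPU (2786m and 2261m here). A sketch of such a filler pod, assuming the temporary label key "node" used in those steps:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// fillerPod requests the given CPU and is steered to a single node via the
// temporary "node" label the test attached to it.
func fillerPod(name, node string, milliCPU int64) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: v1.PodSpec{
			NodeSelector: map[string]string{"node": node},
			Containers: []v1.Container{{
				Name:  name,
				Image: "k8s.gcr.io/pause:3.1", // image from the events above
				Resources: v1.ResourceRequirements{
					Requests: v1.ResourceList{
						v1.ResourceCPU: *resource.NewMilliQuantity(milliCPU, resource.DecimalSI),
					},
				},
			}},
		},
	}
}

func main() {
	// With both nodes saturated, any additional pod that requests CPU fails with
	// "0/2 nodes are available: 2 Insufficient cpu.", as in the events above.
	fmt.Println(fillerPod("filler-pod", "jerma-node", 2786))
}
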
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:06:45.057: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 17 22:06:47.501: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 17 22:06:49.524: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217207, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217207, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217207, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217207, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:06:51.532: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217207, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217207, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217207, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217207, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:06:53.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217207, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217207, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217207, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217207, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 22:06:56.607: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Registering the crd webhook via the AdmissionRegistration API
STEP: Creating a custom resource definition that should be denied by the webhook
Dec 17 22:06:56.685: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:06:56.741: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5750" for this suite.
Dec 17 22:07:02.767: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:07:02.842: INFO: namespace webhook-5750 deletion completed in 6.092353512s
STEP: Destroying namespace "webhook-5750-markers" for this suite.
Dec 17 22:07:08.871: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:07:08.985: INFO: namespace webhook-5750-markers deletion completed in 6.143622913s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:23.951 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should deny crd creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
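
The "Registering the crd webhook via the AdmissionRegistration API" step amounts to creating a ValidatingWebhookConfiguration that intercepts CRD creates and routes them to the e2e-test-webhook service deployed above. A sketch under those assumptions; the configuration name and the path are hypothetical, only the service name and namespace come from the log:

package main

import (
	"fmt"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fail := admissionv1.Fail
	none := admissionv1.SideEffectClassNone
	path := "/crd" // hypothetical; the e2e webhook server multiplexes several paths
	cfg := &admissionv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-crd-creation.example.com"},
		Webhooks: []admissionv1.ValidatingWebhook{{
			Name: "deny-crd-creation.example.com",
			ClientConfig: admissionv1.WebhookClientConfig{
				// CABundle would carry the cert from "Setting up server cert" above.
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-5750", // namespace from the log
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups:   []string{"apiextensions.k8s.io"},
					APIVersions: []string{"*"},
					Resources:   []string{"customresourcedefinitions"},
				},
			}},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	fmt.Printf("%+v\n", cfg)
}
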
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:07:09.010: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-131.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-131.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-131.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-131.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-131.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-131.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 17 22:07:21.378: INFO: Unable to read wheezy_udp@PodARecord from pod dns-131/dns-test-7a8ee3c3-2d8e-4d1f-b22c-20b66564d59d: the server could not find the requested resource (get pods dns-test-7a8ee3c3-2d8e-4d1f-b22c-20b66564d59d)
Dec 17 22:07:21.384: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-131/dns-test-7a8ee3c3-2d8e-4d1f-b22c-20b66564d59d: the server could not find the requested resource (get pods dns-test-7a8ee3c3-2d8e-4d1f-b22c-20b66564d59d)
Dec 17 22:07:21.393: INFO: Unable to read jessie_hosts@dns-querier-2.dns-test-service-2.dns-131.svc.cluster.local from pod dns-131/dns-test-7a8ee3c3-2d8e-4d1f-b22c-20b66564d59d: the server could not find the requested resource (get pods dns-test-7a8ee3c3-2d8e-4d1f-b22c-20b66564d59d)
Dec 17 22:07:21.398: INFO: Unable to read jessie_hosts@dns-querier-2 from pod dns-131/dns-test-7a8ee3c3-2d8e-4d1f-b22c-20b66564d59d: the server could not find the requested resource (get pods dns-test-7a8ee3c3-2d8e-4d1f-b22c-20b66564d59d)
Dec 17 22:07:21.403: INFO: Unable to read jessie_udp@PodARecord from pod dns-131/dns-test-7a8ee3c3-2d8e-4d1f-b22c-20b66564d59d: the server could not find the requested resource (get pods dns-test-7a8ee3c3-2d8e-4d1f-b22c-20b66564d59d)
Dec 17 22:07:21.408: INFO: Unable to read jessie_tcp@PodARecord from pod dns-131/dns-test-7a8ee3c3-2d8e-4d1f-b22c-20b66564d59d: the server could not find the requested resource (get pods dns-test-7a8ee3c3-2d8e-4d1f-b22c-20b66564d59d)
Dec 17 22:07:21.408: INFO: Lookups using dns-131/dns-test-7a8ee3c3-2d8e-4d1f-b22c-20b66564d59d failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-2.dns-test-service-2.dns-131.svc.cluster.local jessie_hosts@dns-querier-2 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 17 22:07:26.548: INFO: DNS probes using dns-131/dns-test-7a8ee3c3-2d8e-4d1f-b22c-20b66564d59d succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:07:26.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-131" for this suite.
Dec 17 22:07:33.085: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:07:33.241: INFO: namespace dns-131 deletion completed in 6.352019217s

• [SLOW TEST:24.232 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Hostname [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
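
What makes the probed name dns-querier-2.dns-test-service-2.dns-131.svc.cluster.local resolvable is the combination of a headless service and a pod whose hostname and subdomain match it. A minimal sketch; the selector label, image, and command are assumptions:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Headless service (no cluster IP): DNS answers with the backing pods'
	// A records instead of a virtual service IP.
	svc := &v1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2"},
		Spec: v1.ServiceSpec{
			ClusterIP: v1.ClusterIPNone,
			Selector:  map[string]string{"dns-test": "true"}, // assumed label
		},
	}
	// Hostname + Subdomain give the pod the stable name
	// dns-querier-2.dns-test-service-2.<namespace>.svc.cluster.local,
	// which is exactly what the getent probes above look up.
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "dns-querier-2",
			Labels: map[string]string{"dns-test": "true"},
		},
		Spec: v1.PodSpec{
			Hostname:  "dns-querier-2",
			Subdomain: "dns-test-service-2",
			Containers: []v1.Container{{
				Name:    "querier",
				Image:   "busybox", // stand-in for the suite's probe images
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	fmt.Println(svc.Name, pod.Name)
}
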
SSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:07:33.249: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name projected-secret-test-7733a545-4a1b-4311-98ca-c1d50819ba07
STEP: Creating a pod to test consume secrets
Dec 17 22:07:33.400: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-49402ac7-ae77-40df-b6ea-607f06379e29" in namespace "projected-796" to be "success or failure"
Dec 17 22:07:33.412: INFO: Pod "pod-projected-secrets-49402ac7-ae77-40df-b6ea-607f06379e29": Phase="Pending", Reason="", readiness=false. Elapsed: 12.079227ms
Dec 17 22:07:35.460: INFO: Pod "pod-projected-secrets-49402ac7-ae77-40df-b6ea-607f06379e29": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060103481s
Dec 17 22:07:37.475: INFO: Pod "pod-projected-secrets-49402ac7-ae77-40df-b6ea-607f06379e29": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07504564s
Dec 17 22:07:39.489: INFO: Pod "pod-projected-secrets-49402ac7-ae77-40df-b6ea-607f06379e29": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089472512s
Dec 17 22:07:41.561: INFO: Pod "pod-projected-secrets-49402ac7-ae77-40df-b6ea-607f06379e29": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.161194522s
STEP: Saw pod success
Dec 17 22:07:41.561: INFO: Pod "pod-projected-secrets-49402ac7-ae77-40df-b6ea-607f06379e29" satisfied condition "success or failure"
Dec 17 22:07:41.567: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-49402ac7-ae77-40df-b6ea-607f06379e29 container secret-volume-test: 
STEP: delete the pod
Dec 17 22:07:41.649: INFO: Waiting for pod pod-projected-secrets-49402ac7-ae77-40df-b6ea-607f06379e29 to disappear
Dec 17 22:07:41.757: INFO: Pod pod-projected-secrets-49402ac7-ae77-40df-b6ea-607f06379e29 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:07:41.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-796" for this suite.
Dec 17 22:07:47.803: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:07:47.995: INFO: namespace projected-796 deletion completed in 6.22883335s

• [SLOW TEST:14.746 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
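
"Consumable in multiple volumes" means the same secret is projected into two separate volumes of one pod. A sketch using the secret and container names from the log; the volume names, mount paths, image, and command are assumptions:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretVolume wraps one secret as a projected volume source.
func projectedSecretVolume(name, secretName string) v1.Volume {
	return v1.Volume{
		Name: name,
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					Secret: &v1.SecretProjection{
						LocalObjectReference: v1.LocalObjectReference{Name: secretName},
					},
				}},
			},
		},
	}
}

func main() {
	secret := "projected-secret-test-7733a545-4a1b-4311-98ca-c1d50819ba07" // from the log
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-projected-secrets-"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{
				projectedSecretVolume("vol-1", secret),
				projectedSecretVolume("vol-2", secret), // same secret, second mount
			},
			Containers: []v1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // stand-in for the suite's mounttest image
				Command: []string{"sh", "-c", "cat /etc/projected-1/* /etc/projected-2/*"},
				VolumeMounts: []v1.VolumeMount{
					{Name: "vol-1", MountPath: "/etc/projected-1", ReadOnly: true},
					{Name: "vol-2", MountPath: "/etc/projected-2", ReadOnly: true},
				},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
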
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:07:47.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
[It] RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 22:07:48.152: INFO: Creating deployment "test-recreate-deployment"
Dec 17 22:07:48.167: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1
Dec 17 22:07:48.280: INFO: deployment "test-recreate-deployment" doesn't have the required revision set
Dec 17 22:07:50.397: INFO: Waiting for deployment "test-recreate-deployment" to complete
Dec 17 22:07:50.404: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217268, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217268, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217268, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217268, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-68fc85c7bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:07:52.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217268, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217268, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217268, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217268, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-68fc85c7bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:07:54.418: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217268, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217268, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217268, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217268, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-68fc85c7bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:07:56.412: INFO: Triggering a new rollout for deployment "test-recreate-deployment"
Dec 17 22:07:56.433: INFO: Updating deployment test-recreate-deployment
Dec 17 22:07:56.433: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62
Dec 17 22:07:58.240: INFO: Deployment "test-recreate-deployment":
&Deployment{ObjectMeta:{test-recreate-deployment  deployment-1441 /apis/apps/v1/namespaces/deployment-1441/deployments/test-recreate-deployment 05be445b-69bb-44db-83cf-2a32b912f0e9 9146895 2 2019-12-17 22:07:48 +0000 UTC   map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc004a85228  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2019-12-17 22:07:58 +0000 UTC,LastTransitionTime:2019-12-17 22:07:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2019-12-17 22:07:58 +0000 UTC,LastTransitionTime:2019-12-17 22:07:48 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},}

Dec 17 22:07:58.248: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment":
&ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff  deployment-1441 /apis/apps/v1/namespaces/deployment-1441/replicasets/test-recreate-deployment-5f94c574ff a568498e-9631-4d37-89ac-36f00293c2bc 9146892 1 2019-12-17 22:07:56 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment 05be445b-69bb-44db-83cf-2a32b912f0e9 0xc002b0fd37 0xc002b0fd38}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b0fd98  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Dec 17 22:07:58.248: INFO: All old ReplicaSets of Deployment "test-recreate-deployment":
Dec 17 22:07:58.248: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-68fc85c7bb  deployment-1441 /apis/apps/v1/namespaces/deployment-1441/replicasets/test-recreate-deployment-68fc85c7bb f3623c73-098b-41ab-9104-d77c6c086379 9146883 2 2019-12-17 22:07:48 +0000 UTC   map[name:sample-pod-3 pod-template-hash:68fc85c7bb] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment 05be445b-69bb-44db-83cf-2a32b912f0e9 0xc002b0fe07 0xc002b0fe08}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 68fc85c7bb,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:sample-pod-3 pod-template-hash:68fc85c7bb] map[] [] []  []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002b0fe68  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Dec 17 22:07:58.255: INFO: Pod "test-recreate-deployment-5f94c574ff-9bx48" is not available:
&Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-9bx48 test-recreate-deployment-5f94c574ff- deployment-1441 /api/v1/namespaces/deployment-1441/pods/test-recreate-deployment-5f94c574ff-9bx48 65445649-70d4-4e43-b7b0-eeac918d547f 9146890 0 2019-12-17 22:07:57 +0000 UTC   map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff a568498e-9631-4d37-89ac-36f00293c2bc 0xc0030562e7 0xc0030562e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-vg6np,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-vg6np,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-vg6np,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 22:07:57 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:07:58.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1441" for this suite.
Dec 17 22:08:06.307: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:08:06.442: INFO: namespace deployment-1441 deletion completed in 8.178637858s

• [SLOW TEST:18.446 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  RecreateDeployment should delete old pods and create new ones [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
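
The dumps above show the essential pieces: Strategy Type Recreate, an old ReplicaSet (redis:5.0.5-alpine, revision 1) scaled to zero, and a new one (httpd:2.4.38-alpine, revision 2) progressing. A sketch of the deployment roughly as first created, before the triggered rollout swapped the image:

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "sample-pod-3"} // labels from the dumps above
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas: &replicas,
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			// Recreate scales the old ReplicaSet to zero before the new one
			// starts, so old and new pods never run side by side; that is the
			// property the test watches for.
			Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
			Template: v1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: v1.PodSpec{Containers: []v1.Container{{
					// revision 1 per the old ReplicaSet dump; the rollout later
					// swaps this to docker.io/library/httpd:2.4.38-alpine
					Name:  "redis",
					Image: "docker.io/library/redis:5.0.5-alpine",
				}}},
			},
		},
	}
	fmt.Printf("%+v\n", d)
}
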
S
------------------------------
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:08:06.444: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the container
STEP: wait for the container to reach Failed
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 17 22:08:15.981: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:08:16.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5319" for this suite.
Dec 17 22:08:22.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:08:22.156: INFO: namespace container-runtime-5319 deletion completed in 6.132824674s

• [SLOW TEST:15.714 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:132
      should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
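
The assertion "Expected: &{DONE} to match Container's Termination Message: DONE" hinges on TerminationMessagePolicy. With FallbackToLogsOnError, a container that exits non-zero without writing /dev/termination-log gets the tail of its log as the termination message. A sketch of such a container; the image and command are assumptions standing in for the suite's test image:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "termination-message-"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "term",
				Image: "busybox",
				// Print to stdout and fail without touching /dev/termination-log;
				// the kubelet then falls back to the log tail, so
				// Terminated.Message becomes "DONE", as asserted above.
				Command:                  []string{"sh", "-c", "echo -n DONE; exit 1"},
				TerminationMessagePolicy: v1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
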
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:08:22.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 17 22:08:22.759: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 17 22:08:24.780: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217302, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217302, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217302, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217302, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:08:26.811: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217302, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217302, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217302, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217302, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:08:28.792: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217302, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217302, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217302, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217302, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 22:08:31.829: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Registering a webhook that the server cannot talk to, with fail closed policy, via the AdmissionRegistration API
STEP: create a namespace for the webhook
STEP: create a configmap that should be unconditionally rejected by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:08:32.024: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-8878" for this suite.
Dec 17 22:08:38.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:08:38.174: INFO: namespace webhook-8878 deletion completed in 6.144612681s
STEP: Destroying namespace "webhook-8878-markers" for this suite.
Dec 17 22:08:44.417: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:08:44.544: INFO: namespace webhook-8878-markers deletion completed in 6.369983101s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:22.403 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should unconditionally reject operations on fail closed webhook [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
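
"Fail closed" is the FailurePolicy: Fail setting: if the API server cannot reach the webhook backend at all, the request is denied rather than let through. A sketch of such a configuration; only the service name and namespace come from the log, the configuration name and the dead path are hypothetical:

package main

import (
	"fmt"

	admissionv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fail := admissionv1.Fail
	none := admissionv1.SideEffectClassNone
	path := "/unreachable" // hypothetical path nothing is serving on
	cfg := &admissionv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-closed.example.com"},
		Webhooks: []admissionv1.ValidatingWebhook{{
			Name: "fail-closed.example.com",
			ClientConfig: admissionv1.WebhookClientConfig{
				Service: &admissionv1.ServiceReference{
					Namespace: "webhook-8878", // namespace from the log
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
			},
			Rules: []admissionv1.RuleWithOperations{{
				Operations: []admissionv1.OperationType{admissionv1.Create},
				Rule: admissionv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			// Fail (as opposed to Ignore): an unreachable backend means the
			// request is denied, which is why the configmap create above is
			// rejected unconditionally. The real test also scopes this with a
			// namespaceSelector so only its scratch namespace is affected.
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	fmt.Printf("%+v\n", cfg)
}
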
SSSSSS
------------------------------
[k8s.io] Pods 
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:08:44.564: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
[It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 22:08:44.617: INFO: >>> kubeConfig: /root/.kube/config
STEP: creating the pod
STEP: submitting the pod to kubernetes
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:08:50.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3733" for this suite.
Dec 17 22:09:38.764: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:09:38.886: INFO: namespace pods-3733 deletion completed in 48.145285998s

• [SLOW TEST:54.323 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should support retrieving logs from the container over websockets [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
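
The test reads the pod's log subresource over a websocket. The sketch below drives the same endpoint with the ordinary streaming GET of a recent client-go (the context-taking signatures postdate the v1.16-era client used for this run); the namespace is from the log, the pod name is hypothetical since the log does not show it:

package main

import (
	"context"
	"io"
	"os"

	v1 "k8s.io/api/core/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Plain streaming GET on the log subresource; the e2e test exercises the
	// same endpoint over a websocket connection instead.
	req := cs.CoreV1().Pods("pods-3733").GetLogs("pod-logs-websocket", &v1.PodLogOptions{})
	stream, err := req.Stream(context.TODO())
	if err != nil {
		panic(err)
	}
	defer stream.Close()
	io.Copy(os.Stdout, stream)
}
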
SSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:09:38.887: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should support proxy with --port 0  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: starting the proxy server
Dec 17 22:09:38.949: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter'
STEP: curling proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:09:39.082: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3997" for this suite.
Dec 17 22:09:45.117: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:09:45.234: INFO: namespace kubectl-3997 deletion completed in 6.143217801s

• [SLOW TEST:6.347 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1782
    should support proxy with --port 0  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
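
With -p 0 the proxy binds an ephemeral port, so the only way to find it is to parse the first line the proxy prints, then curl /api/ through it, which is what the test does. A small sketch under those assumptions (the exact startup line format is assumed from typical kubectl proxy output):

package main

import (
	"bufio"
	"fmt"
	"io"
	"net/http"
	"os/exec"
	"regexp"
)

func main() {
	// The chosen ephemeral port is only discoverable from the first line the
	// proxy prints, e.g. "Starting to serve on 127.0.0.1:41717".
	cmd := exec.Command("kubectl", "proxy", "-p", "0")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	defer cmd.Process.Kill()
	line, err := bufio.NewReader(out).ReadString('\n')
	if err != nil {
		panic(err)
	}
	m := regexp.MustCompile(`:(\d+)`).FindStringSubmatch(line)
	if m == nil {
		panic("could not find port in: " + line)
	}
	// The test then curls /api/ through the proxy and checks the body, which
	// should be a JSON listing of API versions.
	resp, err := http.Get(fmt.Sprintf("http://127.0.0.1:%s/api/", m[1]))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(string(body))
}
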
SSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:09:45.235: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-test-volume-8b29c1e9-3592-448e-bf48-607d3c169261
STEP: Creating a pod to test consume configMaps
Dec 17 22:09:45.323: INFO: Waiting up to 5m0s for pod "pod-configmaps-8f4cf14f-2325-4046-b4ed-d48c67615172" in namespace "configmap-788" to be "success or failure"
Dec 17 22:09:45.410: INFO: Pod "pod-configmaps-8f4cf14f-2325-4046-b4ed-d48c67615172": Phase="Pending", Reason="", readiness=false. Elapsed: 87.504007ms
Dec 17 22:09:47.423: INFO: Pod "pod-configmaps-8f4cf14f-2325-4046-b4ed-d48c67615172": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100320121s
Dec 17 22:09:49.437: INFO: Pod "pod-configmaps-8f4cf14f-2325-4046-b4ed-d48c67615172": Phase="Pending", Reason="", readiness=false. Elapsed: 4.113853607s
Dec 17 22:09:51.443: INFO: Pod "pod-configmaps-8f4cf14f-2325-4046-b4ed-d48c67615172": Phase="Pending", Reason="", readiness=false. Elapsed: 6.119679902s
Dec 17 22:09:53.453: INFO: Pod "pod-configmaps-8f4cf14f-2325-4046-b4ed-d48c67615172": Phase="Pending", Reason="", readiness=false. Elapsed: 8.130381675s
Dec 17 22:09:55.464: INFO: Pod "pod-configmaps-8f4cf14f-2325-4046-b4ed-d48c67615172": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.141080773s
STEP: Saw pod success
Dec 17 22:09:55.464: INFO: Pod "pod-configmaps-8f4cf14f-2325-4046-b4ed-d48c67615172" satisfied condition "success or failure"
Dec 17 22:09:55.469: INFO: Trying to get logs from node jerma-node pod pod-configmaps-8f4cf14f-2325-4046-b4ed-d48c67615172 container configmap-volume-test: 
STEP: delete the pod
Dec 17 22:09:55.722: INFO: Waiting for pod pod-configmaps-8f4cf14f-2325-4046-b4ed-d48c67615172 to disappear
Dec 17 22:09:55.745: INFO: Pod pod-configmaps-8f4cf14f-2325-4046-b4ed-d48c67615172 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:09:55.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-788" for this suite.
Dec 17 22:10:01.787: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:10:01.981: INFO: namespace configmap-788 deletion completed in 6.22669967s

• [SLOW TEST:16.746 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
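
Same pattern as the emptyDir and projected-secret tests above, now with a configMap volume read by a non-root user. The configMap and container names come from the log; the UID, mount path, image, and command are assumptions:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1000) // any non-zero UID; the framework picks its own value
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-configmaps-"},
		Spec: v1.PodSpec{
			RestartPolicy:   v1.RestartPolicyNever,
			SecurityContext: &v1.PodSecurityContext{RunAsUser: &uid},
			Volumes: []v1.Volume{{
				Name: "configmap-volume",
				VolumeSource: v1.VolumeSource{ConfigMap: &v1.ConfigMapVolumeSource{
					LocalObjectReference: v1.LocalObjectReference{
						Name: "configmap-test-volume-8b29c1e9-3592-448e-bf48-607d3c169261", // from the log
					},
				}},
			}},
			Containers: []v1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox", // stand-in for the suite's mounttest image
				Command: []string{"sh", "-c", "cat /etc/configmap-volume/*"},
				VolumeMounts: []v1.VolumeMount{{
					Name: "configmap-volume", MountPath: "/etc/configmap-volume",
				}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
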
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:10:01.981: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 17 22:10:03.281: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 17 22:10:05.297: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217403, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217403, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217403, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217403, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:10:07.367: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217403, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217403, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217403, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217403, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:10:09.372: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217403, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217403, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217403, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217403, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 22:10:12.374: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 22:10:12.383: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-9424-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource while v1 is storage version
STEP: Patching Custom Resource Definition to set v2 as storage
STEP: Patching the custom resource while v2 is storage version
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:10:14.193: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-565" for this suite.
Dec 17 22:10:20.261: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:10:20.412: INFO: namespace webhook-565 deletion completed in 6.208752805s
STEP: Destroying namespace "webhook-565-markers" for this suite.
Dec 17 22:10:26.441: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:10:26.665: INFO: namespace webhook-565-markers deletion completed in 6.252562477s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:24.701 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with different stored version [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
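A minimal sketch of the multi-version CRD shape this spec exercises, assuming illustrative names (widgets.webhook.example.com) rather than the generated e2e-test-webhook-9424-crds definition: v1 starts as the storage version, and the patch step above flips storage to v2 while both versions stay served.

# Hypothetical stand-in for the test's CRD; after the patch, "storage: true"
# moves from v1 to v2 (exactly one version may be the storage version).
kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.webhook.example.com
spec:
  group: webhook.example.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
  - name: v1
    served: true
    storage: false   # storage version before the patch
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: true    # storage version after the patch
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
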
S
------------------------------
[sig-cli] Kubectl client Update Demo 
  should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:10:26.683: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should do a rolling update of a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the initial replication controller
Dec 17 22:10:26.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-163'
Dec 17 22:10:29.667: INFO: stderr: ""
Dec 17 22:10:29.668: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 17 22:10:29.669: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-163'
Dec 17 22:10:29.890: INFO: stderr: ""
Dec 17 22:10:29.890: INFO: stdout: "update-demo-nautilus-chg6b update-demo-nautilus-nv57v "
Dec 17 22:10:29.891: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-chg6b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-163'
Dec 17 22:10:30.070: INFO: stderr: ""
Dec 17 22:10:30.071: INFO: stdout: ""
Dec 17 22:10:30.071: INFO: update-demo-nautilus-chg6b is created but not running
Dec 17 22:10:35.072: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-163'
Dec 17 22:10:36.394: INFO: stderr: ""
Dec 17 22:10:36.394: INFO: stdout: "update-demo-nautilus-chg6b update-demo-nautilus-nv57v "
Dec 17 22:10:36.395: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-chg6b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-163'
Dec 17 22:10:38.433: INFO: stderr: ""
Dec 17 22:10:38.433: INFO: stdout: ""
Dec 17 22:10:38.433: INFO: update-demo-nautilus-chg6b is created but not running
Dec 17 22:10:43.434: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-163'
Dec 17 22:10:43.645: INFO: stderr: ""
Dec 17 22:10:43.645: INFO: stdout: "update-demo-nautilus-chg6b update-demo-nautilus-nv57v "
Dec 17 22:10:43.646: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-chg6b -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-163'
Dec 17 22:10:43.762: INFO: stderr: ""
Dec 17 22:10:43.762: INFO: stdout: "true"
Dec 17 22:10:43.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-chg6b -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-163'
Dec 17 22:10:43.995: INFO: stderr: ""
Dec 17 22:10:43.995: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 17 22:10:43.995: INFO: validating pod update-demo-nautilus-chg6b
Dec 17 22:10:44.078: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 17 22:10:44.078: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 17 22:10:44.078: INFO: update-demo-nautilus-chg6b is verified up and running
Dec 17 22:10:44.079: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nv57v -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-163'
Dec 17 22:10:44.170: INFO: stderr: ""
Dec 17 22:10:44.170: INFO: stdout: "true"
Dec 17 22:10:44.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-nv57v -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-163'
Dec 17 22:10:44.252: INFO: stderr: ""
Dec 17 22:10:44.252: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 17 22:10:44.252: INFO: validating pod update-demo-nautilus-nv57v
Dec 17 22:10:44.261: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 17 22:10:44.262: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 17 22:10:44.262: INFO: update-demo-nautilus-nv57v is verified up and running
STEP: rolling-update to new replication controller
Dec 17 22:10:44.268: INFO: scanned /root for discovery docs: 
Dec 17 22:10:44.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-163'
Dec 17 22:11:12.885: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 17 22:11:12.885: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 17 22:11:12.886: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-163'
Dec 17 22:11:13.006: INFO: stderr: ""
Dec 17 22:11:13.006: INFO: stdout: "update-demo-kitten-htdwf update-demo-kitten-s6b4q "
Dec 17 22:11:13.006: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-htdwf -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-163'
Dec 17 22:11:13.083: INFO: stderr: ""
Dec 17 22:11:13.083: INFO: stdout: "true"
Dec 17 22:11:13.084: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-htdwf -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-163'
Dec 17 22:11:13.217: INFO: stderr: ""
Dec 17 22:11:13.217: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 17 22:11:13.217: INFO: validating pod update-demo-kitten-htdwf
Dec 17 22:11:13.244: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 17 22:11:13.244: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 17 22:11:13.244: INFO: update-demo-kitten-htdwf is verified up and running
Dec 17 22:11:13.244: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-s6b4q -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-163'
Dec 17 22:11:13.391: INFO: stderr: ""
Dec 17 22:11:13.391: INFO: stdout: "true"
Dec 17 22:11:13.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-s6b4q -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-163'
Dec 17 22:11:13.525: INFO: stderr: ""
Dec 17 22:11:13.525: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0"
Dec 17 22:11:13.525: INFO: validating pod update-demo-kitten-s6b4q
Dec 17 22:11:13.551: INFO: got data: {
  "image": "kitten.jpg"
}

Dec 17 22:11:13.551: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg .
Dec 17 22:11:13.551: INFO: update-demo-kitten-s6b4q is verified up and running
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:11:13.551: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-163" for this suite.
Dec 17 22:11:43.621: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:11:43.771: INFO: namespace kubectl-163 deletion completed in 30.214317522s

• [SLOW TEST:77.089 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:275
    should do a rolling update of a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
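The stderr above notes that "rolling-update" is deprecated in favor of "rollout". A rough modern equivalent of the nautilus-to-kitten switch, assuming the workload were a Deployment named update-demo with a container of the same name (names illustrative, not part of this run):

# Swap the image and wait for the rollout to finish.
kubectl set image deployment/update-demo \
  update-demo=gcr.io/kubernetes-e2e-test-images/kitten:1.0
kubectl rollout status deployment/update-demo --timeout=5m
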
SSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:11:43.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Performing setup for networking test in namespace pod-network-test-1702
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 17 22:11:43.909: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 17 22:12:20.411: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'] Namespace:pod-network-test-1702 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 22:12:20.411: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 22:12:20.735: INFO: Waiting for endpoints: map[]
Dec 17 22:12:20.743: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.32.0.4&port=8080&tries=1'] Namespace:pod-network-test-1702 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 22:12:20.743: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 22:12:21.010: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:12:21.012: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-1702" for this suite.
Dec 17 22:12:35.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:12:35.275: INFO: namespace pod-network-test-1702 deletion completed in 14.253801292s

• [SLOW TEST:51.503 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
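The connectivity check above is a curl against the agnhost "/dial" endpoint, issued from the host-network test pod toward each target pod. Reproduced by hand with the namespace, pod name, and pod IPs logged in this run (adjust all three for another cluster):

kubectl exec -n pod-network-test-1702 host-test-container-pod -c agnhost -- \
  /bin/sh -c "curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=http&host=10.44.0.1&port=8080&tries=1'"
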
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:12:35.276: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the rc1
STEP: create the rc2
STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well
STEP: delete the rc simpletest-rc-to-be-deleted
STEP: wait for the rc to be deleted
STEP: Gathering metrics
W1217 22:12:47.395935       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 17 22:12:47.396: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:12:47.396: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-3731" for this suite.
Dec 17 22:13:02.298: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:13:02.736: INFO: namespace gc-3731 deletion completed in 15.336977814s

• [SLOW TEST:27.461 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
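The key step above gives half of the pods a second owner, so deleting simpletest-rc-to-be-deleted must not cascade to them. A hypothetical reconstruction of that ownership change (the test does this via the API directly; the pod name suffix and UIDs below are placeholders):

# Merge-patching metadata.ownerReferences replaces the whole list; the pod
# survives garbage collection because simpletest-rc-to-stay remains a valid owner.
kubectl patch pod simpletest-rc-to-be-deleted-xxxxx -n gc-3731 --type=merge -p '
{"metadata":{"ownerReferences":[
  {"apiVersion":"v1","kind":"ReplicationController","name":"simpletest-rc-to-be-deleted","uid":"<uid-of-rc-to-be-deleted>"},
  {"apiVersion":"v1","kind":"ReplicationController","name":"simpletest-rc-to-stay","uid":"<uid-of-rc-to-stay>"}
]}}'
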
SSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl describe 
  should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:13:02.738: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should check if kubectl describe prints relevant information for rc and pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 22:13:04.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8031'
Dec 17 22:13:06.669: INFO: stderr: ""
Dec 17 22:13:06.669: INFO: stdout: "replicationcontroller/redis-master created\n"
Dec 17 22:13:06.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8031'
Dec 17 22:13:08.671: INFO: stderr: ""
Dec 17 22:13:08.671: INFO: stdout: "service/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 17 22:13:09.844: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:13:09.845: INFO: Found 0 / 1
Dec 17 22:13:10.923: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:13:10.923: INFO: Found 0 / 1
Dec 17 22:13:11.687: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:13:11.687: INFO: Found 0 / 1
Dec 17 22:13:12.725: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:13:12.726: INFO: Found 0 / 1
Dec 17 22:13:14.063: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:13:14.063: INFO: Found 0 / 1
Dec 17 22:13:14.707: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:13:14.707: INFO: Found 0 / 1
Dec 17 22:13:15.709: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:13:15.709: INFO: Found 0 / 1
Dec 17 22:13:16.688: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:13:16.689: INFO: Found 0 / 1
Dec 17 22:13:17.706: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:13:17.707: INFO: Found 0 / 1
Dec 17 22:13:18.681: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:13:18.682: INFO: Found 0 / 1
Dec 17 22:13:19.684: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:13:19.684: INFO: Found 0 / 1
Dec 17 22:13:20.691: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:13:20.692: INFO: Found 1 / 1
Dec 17 22:13:20.692: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
Dec 17 22:13:20.697: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:13:20.697: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 17 22:13:20.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod redis-master-67bs9 --namespace=kubectl-8031'
Dec 17 22:13:20.891: INFO: stderr: ""
Dec 17 22:13:20.892: INFO: stdout: "Name:         redis-master-67bs9\nNamespace:    kubectl-8031\nPriority:     0\nNode:         jerma-node/10.96.2.170\nStart Time:   Tue, 17 Dec 2019 22:13:08 +0000\nLabels:       app=redis\n              role=master\nAnnotations:  \nStatus:       Running\nIP:           10.44.0.1\nIPs:\n  IP:           10.44.0.1\nControlled By:  ReplicationController/redis-master\nContainers:\n  redis-master:\n    Container ID:   docker://1718c8044614870e3e8a86a31ce997b3f1476acab266b9ad95f1f08c02a2e7f5\n    Image:          docker.io/library/redis:5.0.5-alpine\n    Image ID:       docker-pullable://redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858\n    Port:           6379/TCP\n    Host Port:      0/TCP\n    State:          Running\n      Started:      Tue, 17 Dec 2019 22:13:18 +0000\n    Ready:          True\n    Restart Count:  0\n    Environment:    \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-68ppm (ro)\nConditions:\n  Type              Status\n  Initialized       True \n  Ready             True \n  ContainersReady   True \n  PodScheduled      True \nVolumes:\n  default-token-68ppm:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-68ppm\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type    Reason     Age        From                 Message\n  ----    ------     ----       ----                 -------\n  Normal  Scheduled    default-scheduler    Successfully assigned kubectl-8031/redis-master-67bs9 to jerma-node\n  Normal  Pulled     7s         kubelet, jerma-node  Container image \"docker.io/library/redis:5.0.5-alpine\" already present on machine\n  Normal  Created    2s         kubelet, jerma-node  Created container redis-master\n  Normal  Started    2s         kubelet, jerma-node  Started container redis-master\n"
Dec 17 22:13:20.892: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc redis-master --namespace=kubectl-8031'
Dec 17 22:13:21.081: INFO: stderr: ""
Dec 17 22:13:21.081: INFO: stdout: "Name:         redis-master\nNamespace:    kubectl-8031\nSelector:     app=redis,role=master\nLabels:       app=redis\n              role=master\nAnnotations:  \nReplicas:     1 current / 1 desired\nPods Status:  1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n  Labels:  app=redis\n           role=master\n  Containers:\n   redis-master:\n    Image:        docker.io/library/redis:5.0.5-alpine\n    Port:         6379/TCP\n    Host Port:    0/TCP\n    Environment:  \n    Mounts:       \n  Volumes:        \nEvents:\n  Type    Reason            Age   From                    Message\n  ----    ------            ----  ----                    -------\n  Normal  SuccessfulCreate  14s   replication-controller  Created pod: redis-master-67bs9\n"
Dec 17 22:13:21.082: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service redis-master --namespace=kubectl-8031'
Dec 17 22:13:21.185: INFO: stderr: ""
Dec 17 22:13:21.185: INFO: stdout: "Name:              redis-master\nNamespace:         kubectl-8031\nLabels:            app=redis\n                   role=master\nAnnotations:       \nSelector:          app=redis,role=master\nType:              ClusterIP\nIP:                10.101.123.239\nPort:                6379/TCP\nTargetPort:        redis-server/TCP\nEndpoints:         10.44.0.1:6379\nSession Affinity:  None\nEvents:            \n"
Dec 17 22:13:21.190: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-node'
Dec 17 22:13:21.312: INFO: stderr: ""
Dec 17 22:13:21.312: INFO: stdout: "Name:               jerma-node\nRoles:              \nLabels:             beta.kubernetes.io/arch=amd64\n                    beta.kubernetes.io/os=linux\n                    kubernetes.io/arch=amd64\n                    kubernetes.io/hostname=jerma-node\n                    kubernetes.io/os=linux\nAnnotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock\n                    node.alpha.kubernetes.io/ttl: 0\n                    volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp:  Sat, 12 Oct 2019 13:47:49 +0000\nTaints:             \nUnschedulable:      false\nConditions:\n  Type                 Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message\n  ----                 ------  -----------------                 ------------------                ------                       -------\n  NetworkUnavailable   False   Tue, 17 Dec 2019 21:23:22 +0000   Tue, 17 Dec 2019 21:23:22 +0000   WeaveIsUp                    Weave pod has set this\n  MemoryPressure       False   Tue, 17 Dec 2019 22:12:38 +0000   Sat, 12 Oct 2019 13:47:49 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available\n  DiskPressure         False   Tue, 17 Dec 2019 22:12:38 +0000   Sat, 12 Oct 2019 13:47:49 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure\n  PIDPressure          False   Tue, 17 Dec 2019 22:12:38 +0000   Sat, 12 Oct 2019 13:47:49 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available\n  Ready                True    Tue, 17 Dec 2019 22:12:38 +0000   Sat, 12 Oct 2019 13:48:29 +0000   KubeletReady                 kubelet is posting ready status. AppArmor enabled\nAddresses:\n  InternalIP:  10.96.2.170\n  Hostname:    jerma-node\nCapacity:\n cpu:                4\n ephemeral-storage:  20145724Ki\n hugepages-2Mi:      0\n memory:             4039076Ki\n pods:               110\nAllocatable:\n cpu:                4\n ephemeral-storage:  18566299208\n hugepages-2Mi:      0\n memory:             3936676Ki\n pods:               110\nSystem Info:\n Machine ID:                 4eaf1504b38c4046a625a134490a5292\n System UUID:                4EAF1504-B38C-4046-A625-A134490A5292\n Boot ID:                    be260572-5100-4207-9fbc-2294735ff8aa\n Kernel Version:             4.15.0-52-generic\n OS Image:                   Ubuntu 18.04.2 LTS\n Operating System:           linux\n Architecture:               amd64\n Container Runtime Version:  docker://18.9.7\n Kubelet Version:            v1.16.1\n Kube-Proxy Version:         v1.16.1\nNon-terminated Pods:         (3 in total)\n  Namespace                  Name                  CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE\n  ---------                  ----                  ------------  ----------  ---------------  -------------  ---\n  kube-system                kube-proxy-jcjl4      0 (0%)        0 (0%)      0 (0%)           0 (0%)         66d\n  kube-system                weave-net-srfjj       20m (0%)      0 (0%)      0 (0%)           0 (0%)         50m\n  kubectl-8031               redis-master-67bs9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14s\nAllocated resources:\n  (Total limits may be over 100 percent, i.e., overcommitted.)\n  Resource           Requests  Limits\n  --------           --------  ------\n  cpu                20m (0%)  0 (0%)\n  memory             0 (0%)    0 (0%)\n  ephemeral-storage  0 (0%)    0 (0%)\nEvents:            
  \n"
Dec 17 22:13:21.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-8031'
Dec 17 22:13:21.413: INFO: stderr: ""
Dec 17 22:13:21.414: INFO: stdout: "Name:         kubectl-8031\nLabels:       e2e-framework=kubectl\n              e2e-run=71e6d18d-bb54-4e41-b520-5b2a34a6d31b\nAnnotations:  \nStatus:       Active\n\nNo resource quota.\n\nNo resource limits.\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:13:21.414: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8031" for this suite.
Dec 17 22:13:49.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:13:49.614: INFO: namespace kubectl-8031 deletion completed in 28.193621747s

• [SLOW TEST:46.876 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl describe
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1000
    should check if kubectl describe prints relevant information for rc and pods  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context when creating containers with AllowPrivilegeEscalation 
  should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:13:49.616: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
[It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 22:13:49.729: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-8bda82eb-a762-4b99-a81f-719c7b453baf" in namespace "security-context-test-5083" to be "success or failure"
Dec 17 22:13:49.751: INFO: Pod "alpine-nnp-false-8bda82eb-a762-4b99-a81f-719c7b453baf": Phase="Pending", Reason="", readiness=false. Elapsed: 22.12955ms
Dec 17 22:13:51.761: INFO: Pod "alpine-nnp-false-8bda82eb-a762-4b99-a81f-719c7b453baf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032878725s
Dec 17 22:13:53.792: INFO: Pod "alpine-nnp-false-8bda82eb-a762-4b99-a81f-719c7b453baf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.063179083s
Dec 17 22:13:55.856: INFO: Pod "alpine-nnp-false-8bda82eb-a762-4b99-a81f-719c7b453baf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.127602527s
Dec 17 22:13:57.890: INFO: Pod "alpine-nnp-false-8bda82eb-a762-4b99-a81f-719c7b453baf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.1615887s
Dec 17 22:13:57.891: INFO: Pod "alpine-nnp-false-8bda82eb-a762-4b99-a81f-719c7b453baf" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:13:57.926: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-5083" for this suite.
Dec 17 22:14:04.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:14:04.121: INFO: namespace security-context-test-5083 deletion completed in 6.185175229s

• [SLOW TEST:14.505 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when creating containers with AllowPrivilegeEscalation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:277
    should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
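A minimal sketch of the pod shape behind this spec, assuming an illustrative alpine image and command (the suite generates its own pod): with allowPrivilegeEscalation set to false, the container runs with the no_new_privs flag, so setuid binaries cannot raise privileges.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: alpine-nnp-false-demo
spec:
  restartPolicy: Never
  containers:
  - name: alpine-nnp-false
    image: alpine:3.10          # illustrative image
    command: ["sh", "-c", "id"]
    securityContext:
      allowPrivilegeEscalation: false
EOF
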
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl api-versions 
  should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:14:04.122: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should check if v1 is in available api versions  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: validating api versions
Dec 17 22:14:04.216: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions'
Dec 17 22:14:04.498: INFO: stderr: ""
Dec 17 22:14:04.498: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:14:04.499: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-990" for this suite.
Dec 17 22:14:10.549: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:14:10.739: INFO: namespace kubectl-990 deletion completed in 6.227470885s

• [SLOW TEST:6.618 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl api-versions
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:738
    should check if v1 is in available api versions  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
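The validation above boils down to a single check, shown here by hand: "v1" must appear as its own line in the api-versions output.

kubectl --kubeconfig=/root/.kube/config api-versions | grep -x v1
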
SSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:14:10.741: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 22:14:10.896: INFO: Waiting up to 5m0s for pod "downwardapi-volume-11699761-c944-405f-8fc6-f26e621d465b" in namespace "downward-api-7690" to be "success or failure"
Dec 17 22:14:10.912: INFO: Pod "downwardapi-volume-11699761-c944-405f-8fc6-f26e621d465b": Phase="Pending", Reason="", readiness=false. Elapsed: 15.817363ms
Dec 17 22:14:12.921: INFO: Pod "downwardapi-volume-11699761-c944-405f-8fc6-f26e621d465b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024737536s
Dec 17 22:14:14.933: INFO: Pod "downwardapi-volume-11699761-c944-405f-8fc6-f26e621d465b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03675809s
Dec 17 22:14:16.946: INFO: Pod "downwardapi-volume-11699761-c944-405f-8fc6-f26e621d465b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.04934508s
Dec 17 22:14:18.964: INFO: Pod "downwardapi-volume-11699761-c944-405f-8fc6-f26e621d465b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.067856193s
Dec 17 22:14:20.973: INFO: Pod "downwardapi-volume-11699761-c944-405f-8fc6-f26e621d465b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.07667383s
STEP: Saw pod success
Dec 17 22:14:20.973: INFO: Pod "downwardapi-volume-11699761-c944-405f-8fc6-f26e621d465b" satisfied condition "success or failure"
Dec 17 22:14:20.976: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-11699761-c944-405f-8fc6-f26e621d465b container client-container: <nil>
STEP: delete the pod
Dec 17 22:14:21.050: INFO: Waiting for pod downwardapi-volume-11699761-c944-405f-8fc6-f26e621d465b to disappear
Dec 17 22:14:21.058: INFO: Pod downwardapi-volume-11699761-c944-405f-8fc6-f26e621d465b no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:14:21.059: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-7690" for this suite.
Dec 17 22:14:27.087: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:14:27.163: INFO: namespace downward-api-7690 deletion completed in 6.099337718s

• [SLOW TEST:16.423 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
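A minimal sketch of the pod under test, assuming an illustrative busybox image: the container sets no cpu limit, so the downward API volume reports the node's allocatable cpu as the default limit.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.31         # illustrative image
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu   # no limit set, so node allocatable is exposed
EOF
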
SSSS
------------------------------
[sig-node] Downward API 
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:14:27.164: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward api env vars
Dec 17 22:14:27.291: INFO: Waiting up to 5m0s for pod "downward-api-9a721492-c0fd-497e-a085-4d40bd6a473d" in namespace "downward-api-3200" to be "success or failure"
Dec 17 22:14:27.306: INFO: Pod "downward-api-9a721492-c0fd-497e-a085-4d40bd6a473d": Phase="Pending", Reason="", readiness=false. Elapsed: 14.76917ms
Dec 17 22:14:29.380: INFO: Pod "downward-api-9a721492-c0fd-497e-a085-4d40bd6a473d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088095024s
Dec 17 22:14:31.424: INFO: Pod "downward-api-9a721492-c0fd-497e-a085-4d40bd6a473d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.132565178s
Dec 17 22:14:33.437: INFO: Pod "downward-api-9a721492-c0fd-497e-a085-4d40bd6a473d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.145168356s
Dec 17 22:14:35.450: INFO: Pod "downward-api-9a721492-c0fd-497e-a085-4d40bd6a473d": Phase="Pending", Reason="", readiness=false. Elapsed: 8.158370232s
Dec 17 22:14:37.462: INFO: Pod "downward-api-9a721492-c0fd-497e-a085-4d40bd6a473d": Phase="Pending", Reason="", readiness=false. Elapsed: 10.170843651s
Dec 17 22:14:39.507: INFO: Pod "downward-api-9a721492-c0fd-497e-a085-4d40bd6a473d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.215726162s
STEP: Saw pod success
Dec 17 22:14:39.507: INFO: Pod "downward-api-9a721492-c0fd-497e-a085-4d40bd6a473d" satisfied condition "success or failure"
Dec 17 22:14:39.514: INFO: Trying to get logs from node jerma-node pod downward-api-9a721492-c0fd-497e-a085-4d40bd6a473d container dapi-container: <nil>
STEP: delete the pod
Dec 17 22:14:39.597: INFO: Waiting for pod downward-api-9a721492-c0fd-497e-a085-4d40bd6a473d to disappear
Dec 17 22:14:39.699: INFO: Pod downward-api-9a721492-c0fd-497e-a085-4d40bd6a473d no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:14:39.700: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-3200" for this suite.
Dec 17 22:14:45.753: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:14:45.890: INFO: namespace downward-api-3200 deletion completed in 6.18151124s

• [SLOW TEST:18.727 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
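A minimal sketch of the downward API env-var pod this spec creates, assuming an illustrative busybox image: status.hostIP is injected into the container's environment at start-up.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-api-hostip-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.31         # illustrative image
    command: ["sh", "-c", "echo HOST_IP=$HOST_IP"]
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
EOF
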
SSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:14:45.892: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 17 22:14:46.641: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 17 22:14:48.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217686, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217686, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217686, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217686, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:14:50.681: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217686, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217686, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217686, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217686, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:14:52.675: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217686, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217686, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217686, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217686, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 22:14:55.737: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod
STEP: 'kubectl attach' the pod, should be denied by the webhook
Dec 17 22:15:04.011: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-1316 to-be-attached-pod -i -c=container1'
Dec 17 22:15:04.179: INFO: rc: 1
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:15:04.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-1316" for this suite.
Dec 17 22:15:32.292: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:15:32.384: INFO: namespace webhook-1316 deletion completed in 28.145316992s
STEP: Destroying namespace "webhook-1316-markers" for this suite.
Dec 17 22:15:38.408: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:15:38.574: INFO: namespace webhook-1316-markers deletion completed in 6.189800119s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:52.722 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny attaching pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
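The registration step above routes pods/attach CONNECT requests through the sample webhook, which is why the kubectl attach exits with rc: 1. A hedged sketch of what such a registration can look like (service name and namespace are taken from this run; the webhook name, path, and caBundle are placeholders):

kubectl create -f - <<'EOF'
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: deny-attaching-pod.example.com
webhooks:
- name: deny-attaching-pod.example.com
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    operations: ["CONNECT"]
    resources: ["pods/attach"]
  clientConfig:
    service:
      namespace: webhook-1316
      name: e2e-test-webhook
      path: /pods/attach            # placeholder path
    caBundle: "<base64-encoded-CA>" # placeholder
  sideEffects: None
  admissionReviewVersions: ["v1", "v1beta1"]
  failurePolicy: Fail
EOF
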
SSSSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:15:38.615: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test substitution in container's args
Dec 17 22:15:38.742: INFO: Waiting up to 5m0s for pod "var-expansion-c5f412b5-0e11-4799-a396-8b9bdabd8db9" in namespace "var-expansion-4225" to be "success or failure"
Dec 17 22:15:38.782: INFO: Pod "var-expansion-c5f412b5-0e11-4799-a396-8b9bdabd8db9": Phase="Pending", Reason="", readiness=false. Elapsed: 39.536068ms
Dec 17 22:15:40.792: INFO: Pod "var-expansion-c5f412b5-0e11-4799-a396-8b9bdabd8db9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049798794s
Dec 17 22:15:42.801: INFO: Pod "var-expansion-c5f412b5-0e11-4799-a396-8b9bdabd8db9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058130105s
Dec 17 22:15:44.811: INFO: Pod "var-expansion-c5f412b5-0e11-4799-a396-8b9bdabd8db9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06892948s
Dec 17 22:15:46.817: INFO: Pod "var-expansion-c5f412b5-0e11-4799-a396-8b9bdabd8db9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074602284s
STEP: Saw pod success
Dec 17 22:15:46.817: INFO: Pod "var-expansion-c5f412b5-0e11-4799-a396-8b9bdabd8db9" satisfied condition "success or failure"
Dec 17 22:15:46.822: INFO: Trying to get logs from node jerma-node pod var-expansion-c5f412b5-0e11-4799-a396-8b9bdabd8db9 container dapi-container: <nil>
STEP: delete the pod
Dec 17 22:15:46.860: INFO: Waiting for pod var-expansion-c5f412b5-0e11-4799-a396-8b9bdabd8db9 to disappear
Dec 17 22:15:46.867: INFO: Pod var-expansion-c5f412b5-0e11-4799-a396-8b9bdabd8db9 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:15:46.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-4225" for this suite.
Dec 17 22:15:52.897: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:15:53.040: INFO: namespace var-expansion-4225 deletion completed in 6.166467788s

• [SLOW TEST:14.424 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should allow substituting values in a container's args [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
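A minimal sketch of the substitution being tested, assuming an illustrative busybox image: the kubelet expands $(VAR) references in args from the container's env before the command runs.

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: var-expansion-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.31          # illustrative image
    command: ["sh", "-c"]
    args: ["echo $(GREETING)"]   # expanded by the kubelet, not the shell
    env:
    - name: GREETING
      value: "hello from var-expansion"
EOF
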
SSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class 
  should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:15:53.040: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:179
[It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying QOS class is set on the pod
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:15:53.384: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9293" for this suite.
Dec 17 22:16:05.426: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:16:05.568: INFO: namespace pods-9293 deletion completed in 12.171808446s

• [SLOW TEST:12.528 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  [k8s.io] Pods Set QOS Class
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
    should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
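The QOS rule being verified: when every container's requests equal its limits for both cpu and memory, the pod is classed Guaranteed. A sketch with an illustrative image and sizes:

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: qos-guaranteed-demo
spec:
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # illustrative
    args: ["pause"]              # keep the container running
    resources:
      requests: {cpu: 100m, memory: 100Mi}
      limits:   {cpu: 100m, memory: 100Mi}
EOF
kubectl get pod qos-guaranteed-demo -o jsonpath='{.status.qosClass}'   # Guaranteed
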
SSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:16:05.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod pod-subpath-test-configmap-xqvq
STEP: Creating a pod to test atomic-volume-subpath
Dec 17 22:16:05.692: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-xqvq" in namespace "subpath-914" to be "success or failure"
Dec 17 22:16:05.699: INFO: Pod "pod-subpath-test-configmap-xqvq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.832977ms
Dec 17 22:16:07.721: INFO: Pod "pod-subpath-test-configmap-xqvq": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029217984s
Dec 17 22:16:09.729: INFO: Pod "pod-subpath-test-configmap-xqvq": Phase="Pending", Reason="", readiness=false. Elapsed: 4.036989648s
Dec 17 22:16:11.740: INFO: Pod "pod-subpath-test-configmap-xqvq": Phase="Pending", Reason="", readiness=false. Elapsed: 6.047288793s
Dec 17 22:16:13.932: INFO: Pod "pod-subpath-test-configmap-xqvq": Phase="Running", Reason="", readiness=true. Elapsed: 8.239626411s
Dec 17 22:16:15.940: INFO: Pod "pod-subpath-test-configmap-xqvq": Phase="Running", Reason="", readiness=true. Elapsed: 10.247975061s
Dec 17 22:16:17.948: INFO: Pod "pod-subpath-test-configmap-xqvq": Phase="Running", Reason="", readiness=true. Elapsed: 12.255747468s
Dec 17 22:16:19.959: INFO: Pod "pod-subpath-test-configmap-xqvq": Phase="Running", Reason="", readiness=true. Elapsed: 14.266946416s
Dec 17 22:16:21.970: INFO: Pod "pod-subpath-test-configmap-xqvq": Phase="Running", Reason="", readiness=true. Elapsed: 16.277399959s
Dec 17 22:16:23.989: INFO: Pod "pod-subpath-test-configmap-xqvq": Phase="Running", Reason="", readiness=true. Elapsed: 18.296333775s
Dec 17 22:16:26.002: INFO: Pod "pod-subpath-test-configmap-xqvq": Phase="Running", Reason="", readiness=true. Elapsed: 20.30952694s
Dec 17 22:16:28.017: INFO: Pod "pod-subpath-test-configmap-xqvq": Phase="Running", Reason="", readiness=true. Elapsed: 22.324886439s
Dec 17 22:16:30.027: INFO: Pod "pod-subpath-test-configmap-xqvq": Phase="Running", Reason="", readiness=true. Elapsed: 24.334381538s
Dec 17 22:16:32.039: INFO: Pod "pod-subpath-test-configmap-xqvq": Phase="Running", Reason="", readiness=true. Elapsed: 26.347077895s
Dec 17 22:16:34.051: INFO: Pod "pod-subpath-test-configmap-xqvq": Phase="Running", Reason="", readiness=true. Elapsed: 28.358879583s
Dec 17 22:16:36.059: INFO: Pod "pod-subpath-test-configmap-xqvq": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.366344823s
STEP: Saw pod success
Dec 17 22:16:36.059: INFO: Pod "pod-subpath-test-configmap-xqvq" satisfied condition "success or failure"
Dec 17 22:16:36.064: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-xqvq container test-container-subpath-configmap-xqvq: 
STEP: delete the pod
Dec 17 22:16:36.324: INFO: Waiting for pod pod-subpath-test-configmap-xqvq to disappear
Dec 17 22:16:36.330: INFO: Pod pod-subpath-test-configmap-xqvq no longer exists
STEP: Deleting pod pod-subpath-test-configmap-xqvq
Dec 17 22:16:36.330: INFO: Deleting pod "pod-subpath-test-configmap-xqvq" in namespace "subpath-914"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:16:36.334: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-914" for this suite.
Dec 17 22:16:42.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:16:42.555: INFO: namespace subpath-914 deletion completed in 6.214913224s

• [SLOW TEST:36.987 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
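
A minimal sketch of the subpath arrangement verified above: a configMap volume mounted at a single path via SubPath rather than as a whole directory. The configMap name, key, image, and mount path are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "demo-config"},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /etc/demo/key"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "config",
					MountPath: "/etc/demo/key",
					// SubPath mounts a single entry of the volume instead of the whole directory.
					SubPath: "key",
				}},
			}},
		},
	}
	fmt.Println(pod.Name)
}
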
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:16:42.559: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation
Dec 17 22:16:42.660: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 22:16:46.610: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:16:59.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2161" for this suite.
Dec 17 22:17:05.833: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:17:06.008: INFO: namespace crd-publish-openapi-2161 deletion completed in 6.215968657s

• [SLOW TEST:23.449 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group and version but different kinds [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
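
A sketch of the setup this case drives, assuming the k8s.io/apiextensions-apiserver module: two CRDs sharing one group and version but carrying different kinds, both of which should then surface in the published OpenAPI document. All names are illustrative.

package main

import (
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// crd builds a namespaced CRD in group demo.example.com/v1 for the given kind.
func crd(kind, plural string) *apiextv1.CustomResourceDefinition {
	return &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: plural + ".demo.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "demo.example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{Kind: kind, Plural: plural},
			Versions: []apiextv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
				},
			}},
		},
	}
}

func main() {
	// Same group and version, two kinds; both definitions should appear in /openapi/v2.
	fmt.Println(crd("Foo", "foos").Name, crd("Bar", "bars").Name)
}
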
------------------------------
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:17:06.008: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
Dec 17 22:17:06.076: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 17 22:17:06.088: INFO: Waiting for terminating namespaces to be deleted...
Dec 17 22:17:06.090: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Dec 17 22:17:06.118: INFO: weave-net-srfjj from kube-system started at 2019-12-17 21:23:16 +0000 UTC (2 container statuses recorded)
Dec 17 22:17:06.118: INFO: 	Container weave ready: true, restart count 0
Dec 17 22:17:06.118: INFO: 	Container weave-npc ready: true, restart count 0
Dec 17 22:17:06.118: INFO: kube-proxy-jcjl4 from kube-system started at 2019-10-12 13:47:49 +0000 UTC (1 container status recorded)
Dec 17 22:17:06.118: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 17 22:17:06.118: INFO: 
Logging pods the kubelet thinks are on node jerma-server-4b75xjbddvit before test
Dec 17 22:17:06.149: INFO: coredns-5644d7b6d9-rqwzj from kube-system started at 2019-11-10 18:03:38 +0000 UTC (0 container statuses recorded)
Dec 17 22:17:06.149: INFO: weave-net-gsjjk from kube-system started at 2019-12-13 09:16:56 +0000 UTC (2 container statuses recorded)
Dec 17 22:17:06.149: INFO: 	Container weave ready: true, restart count 0
Dec 17 22:17:06.149: INFO: 	Container weave-npc ready: true, restart count 0
Dec 17 22:17:06.149: INFO: coredns-5644d7b6d9-9sj58 from kube-system started at 2019-12-14 15:12:12 +0000 UTC (1 container status recorded)
Dec 17 22:17:06.149: INFO: 	Container coredns ready: true, restart count 0
Dec 17 22:17:06.149: INFO: kube-scheduler-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:42 +0000 UTC (1 container status recorded)
Dec 17 22:17:06.149: INFO: 	Container kube-scheduler ready: true, restart count 11
Dec 17 22:17:06.149: INFO: kube-proxy-bdcvr from kube-system started at 2019-12-13 09:08:20 +0000 UTC (1 container status recorded)
Dec 17 22:17:06.149: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 17 22:17:06.149: INFO: coredns-5644d7b6d9-xvlxj from kube-system started at 2019-12-14 16:49:52 +0000 UTC (1 container status recorded)
Dec 17 22:17:06.149: INFO: 	Container coredns ready: true, restart count 0
Dec 17 22:17:06.149: INFO: etcd-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:37 +0000 UTC (1 container status recorded)
Dec 17 22:17:06.149: INFO: 	Container etcd ready: true, restart count 1
Dec 17 22:17:06.149: INFO: kube-controller-manager-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:40 +0000 UTC (1 container status recorded)
Dec 17 22:17:06.149: INFO: 	Container kube-controller-manager ready: true, restart count 8
Dec 17 22:17:06.149: INFO: kube-apiserver-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:38 +0000 UTC (1 container status recorded)
Dec 17 22:17:06.149: INFO: 	Container kube-apiserver ready: true, restart count 1
Dec 17 22:17:06.149: INFO: coredns-5644d7b6d9-n9kkw from kube-system started at 2019-11-10 16:39:08 +0000 UTC (0 container statuses recorded)
[It] validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-6daa3789-0d52-4807-b22d-7c2b0646a34a 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-6daa3789-0d52-4807-b22d-7c2b0646a34a off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-6daa3789-0d52-4807-b22d-7c2b0646a34a
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:17:22.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-6586" for this suite.
Dec 17 22:17:42.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:17:42.618: INFO: namespace sched-pred-6586 deletion completed in 20.173146337s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78

• [SLOW TEST:36.610 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
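
The relaunch step above comes down to a pod whose NodeSelector names the freshly applied node label; a minimal sketch (the label key and value mirror the ones logged for this run, the image is illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "with-labels"},
		Spec: corev1.PodSpec{
			// The scheduler only places the pod on a node carrying this exact label,
			// i.e. the one applied to the chosen node earlier in the test.
			NodeSelector: map[string]string{
				"kubernetes.io/e2e-6daa3789-0d52-4807-b22d-7c2b0646a34a": "42",
			},
			Containers: []corev1.Container{{
				Name:  "with-labels",
				Image: "docker.io/library/nginx:1.17-alpine", // illustrative image
			}},
		},
	}
	fmt.Println(pod.Name)
}
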
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:17:42.619: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
[It] deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 22:17:42.707: INFO: Pod name cleanup-pod: Found 0 pods out of 1
Dec 17 22:17:47.716: INFO: Pod name cleanup-pod: Found 1 pod out of 1
STEP: ensuring each pod is running
Dec 17 22:17:51.734: INFO: Creating deployment test-cleanup-deployment
STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62
Dec 17 22:17:59.802: INFO: Deployment "test-cleanup-deployment":
&Deployment{ObjectMeta:{test-cleanup-deployment  deployment-1359 /apis/apps/v1/namespaces/deployment-1359/deployments/test-cleanup-deployment 5560af6e-ab8c-4de7-9ec5-a59df7ea730a 9148657 1 2019-12-17 22:17:51 +0000 UTC   map[name:cleanup-pod] map[deployment.kubernetes.io/revision:1] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod] map[] [] []  []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00315a778  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2019-12-17 22:17:51 +0000 UTC,LastTransitionTime:2019-12-17 22:17:51 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-cleanup-deployment-65db99849b" has successfully progressed.,LastUpdateTime:2019-12-17 22:17:58 +0000 UTC,LastTransitionTime:2019-12-17 22:17:51 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Dec 17 22:17:59.807: INFO: New ReplicaSet "test-cleanup-deployment-65db99849b" of Deployment "test-cleanup-deployment":
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-65db99849b  deployment-1359 /apis/apps/v1/namespaces/deployment-1359/replicasets/test-cleanup-deployment-65db99849b abef504f-dbbe-4d8c-91cb-eea1ca007a9a 9148645 1 2019-12-17 22:17:51 +0000 UTC   map[name:cleanup-pod pod-template-hash:65db99849b] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 5560af6e-ab8c-4de7-9ec5-a59df7ea730a 0xc006bffa07 0xc006bffa08}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 65db99849b,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:cleanup-pod pod-template-hash:65db99849b] map[] [] []  []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc006bffa78  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Dec 17 22:17:59.813: INFO: Pod "test-cleanup-deployment-65db99849b-845rn" is available:
&Pod{ObjectMeta:{test-cleanup-deployment-65db99849b-845rn test-cleanup-deployment-65db99849b- deployment-1359 /api/v1/namespaces/deployment-1359/pods/test-cleanup-deployment-65db99849b-845rn 4146ce9a-213f-4dd4-ba38-8c62edcbcfe0 9148644 0 2019-12-17 22:17:51 +0000 UTC   map[name:cleanup-pod pod-template-hash:65db99849b] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-65db99849b abef504f-dbbe-4d8c-91cb-eea1ca007a9a 0xc002f16047 0xc002f16048}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ng2nf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ng2nf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:redis,Image:docker.io/library/redis:5.0.5-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ng2nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 22:17:51 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 22:17:58 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 22:17:58 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 22:17:51 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.2,StartTime:2019-12-17 22:17:51 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:redis,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-17 22:17:57 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:redis:5.0.5-alpine,ImageID:docker-pullable://redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858,ContainerID:docker://9b6cf7f53914ce01176f025256ea99efaa268259ca76db81e68ade68258dd65c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.2,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:17:59.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-1359" for this suite.
Dec 17 22:18:05.893: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:18:06.079: INFO: namespace deployment-1359 deletion completed in 6.26038263s

• [SLOW TEST:23.460 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should delete old replica sets [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
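
The cleanup behavior hinges on RevisionHistoryLimit, which the Deployment dump above shows as *0: with the limit at zero, the controller deletes old ReplicaSets as soon as they are scaled down. A minimal sketch of such a Deployment (assuming k8s.io/api/apps/v1; the names mirror this run):

package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas, history := int32(1), int32(0)
	labels := map[string]string{"name": "cleanup-pod"}
	d := &appsv1.Deployment{
		ObjectMeta: metav1.ObjectMeta{Name: "test-cleanup-deployment"},
		Spec: appsv1.DeploymentSpec{
			Replicas:             &replicas,
			RevisionHistoryLimit: &history, // keep zero old ReplicaSets around
			Selector:             &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name: "redis", Image: "docker.io/library/redis:5.0.5-alpine",
				}}},
			},
		},
	}
	fmt.Println(d.Name)
}
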
------------------------------
S
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:18:06.080: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: set up a multi version CRD
Dec 17 22:18:06.169: INFO: >>> kubeConfig: /root/.kube/config
STEP: mark a version not served
STEP: check the unserved version gets removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:18:28.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-5880" for this suite.
Dec 17 22:18:34.306: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:18:34.452: INFO: namespace crd-publish-openapi-5880 deletion completed in 6.304293687s

• [SLOW TEST:28.373 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  removes definition from spec when one version gets changed to not be served [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:18:34.454: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a ResourceQuota with best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not best effort scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a best-effort pod
STEP: Ensuring resource quota with best effort scope captures the pod usage
STEP: Ensuring resource quota with not best effort ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a not best-effort pod
STEP: Ensuring resource quota with not best effort scope captures the pod usage
STEP: Ensuring resource quota with best effort scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:18:50.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3584" for this suite.
Dec 17 22:18:57.003: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:18:57.110: INFO: namespace resourcequota-3584 deletion completed in 6.132324032s

• [SLOW TEST:22.656 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with best effort scope. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
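
A sketch of the scoped quota being verified: with the BestEffort scope, only pods that set no requests or limits count against the quota, while a NotBestEffort-scoped twin ignores them. The quota name and pods limit are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Only BestEffort pods (no requests or limits anywhere) consume this quota.
	q := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "best-effort-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard:   corev1.ResourceList{corev1.ResourcePods: resource.MustParse("5")},
			Scopes: []corev1.ResourceQuotaScope{corev1.ResourceQuotaScopeBestEffort},
		},
	}
	fmt.Println(q.Name) // a ResourceQuotaScopeNotBestEffort twin would ignore the same pod
}
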
------------------------------
SS
------------------------------
[sig-storage] EmptyDir volumes 
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:18:57.111: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating Pod
STEP: Waiting for the pod to be running
STEP: Getting the pod
STEP: Reading file content from the busybox-main-container
Dec 17 22:19:09.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec pod-sharedvolume-29c9346c-3e15-4a68-947f-d6159ef00e56 -c busybox-main-container --namespace=emptydir-9404 -- cat /usr/share/volumeshare/shareddata.txt'
Dec 17 22:19:09.763: INFO: stderr: ""
Dec 17 22:19:09.763: INFO: stdout: "Hello from the busy-box sub-container\n"
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:19:09.763: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-9404" for this suite.
Dec 17 22:19:15.862: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:19:15.965: INFO: namespace emptydir-9404 deletion completed in 6.189260913s

• [SLOW TEST:18.855 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  pod should support shared volumes between containers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
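
A minimal sketch of the shared-volume pod: two containers mounting the same emptyDir, one writing the file the other reads (the kubectl exec at 22:19:09 above performs the read). Container names and image are illustrative; the file path mirrors the one in this log.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	mount := corev1.VolumeMount{Name: "shared-data", MountPath: "/usr/share/volumeshare"}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-sharedvolume-demo"},
		Spec: corev1.PodSpec{
			// One emptyDir, visible to both containers for the pod's lifetime.
			Volumes: []corev1.Volume{{
				Name:         "shared-data",
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
			Containers: []corev1.Container{
				{
					Name:         "writer",
					Image:        "docker.io/library/busybox:1.29",
					Command:      []string{"sh", "-c", "echo hello > /usr/share/volumeshare/shareddata.txt && sleep 3600"},
					VolumeMounts: []corev1.VolumeMount{mount},
				},
				{
					Name:         "reader",
					Image:        "docker.io/library/busybox:1.29",
					Command:      []string{"sleep", "3600"},
					VolumeMounts: []corev1.VolumeMount{mount},
				},
			},
		},
	}
	fmt.Println(pod.Name)
}
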
------------------------------
SSSS
------------------------------
[sig-apps] ReplicationController 
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:19:15.967: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Given a Pod with a 'name' label pod-adoption is created
STEP: When a replication controller with a matching selector is created
STEP: Then the orphan pod is adopted
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:19:27.138: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-2352" for this suite.
Dec 17 22:19:39.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:19:39.503: INFO: namespace replication-controller-2352 deletion completed in 12.357476575s

• [SLOW TEST:23.537 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching pods on creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
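
A sketch of the adopting ReplicationController: its selector matches the orphan pod's 'name' label, so rather than creating a replacement the controller takes ownership of the existing pod. Names and image are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	replicas := int32(1)
	labels := map[string]string{"name": "pod-adoption"}
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels, // matches the pre-existing orphan pod's labels
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name: "pod-adoption", Image: "docker.io/library/nginx:1.17-alpine",
				}}},
			},
		},
	}
	fmt.Println(rc.Name)
}
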
------------------------------
SSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:19:39.504: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 17 22:19:40.138: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 17 22:19:42.167: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217980, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217980, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217980, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217980, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:19:44.176: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217980, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217980, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217980, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217980, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:19:46.175: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217980, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217980, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217980, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712217980, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 22:19:49.267: INFO: Waiting for the number of service:e2e-test-webhook endpoints to be 1
[It] should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: fetching the /apis discovery document
STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/admissionregistration.k8s.io discovery document
STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document
STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document
STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:19:49.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3844" for this suite.
Dec 17 22:19:57.334: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:19:57.548: INFO: namespace webhook-3844 deletion completed in 8.259692081s
STEP: Destroying namespace "webhook-3844-markers" for this suite.
Dec 17 22:20:03.599: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:20:03.742: INFO: namespace webhook-3844-markers deletion completed in 6.193316907s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:24.254 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include webhook resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
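
A sketch of the same discovery walk using client-go's discovery client (assumes a recent client-go; the kubeconfig path matches the one in this log):

package main

import (
	"fmt"

	"k8s.io/client-go/discovery"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	dc, err := discovery.NewDiscoveryClientForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// The same discovery document the test walks: resources under admissionregistration.k8s.io/v1.
	list, err := dc.ServerResourcesForGroupVersion("admissionregistration.k8s.io/v1")
	if err != nil {
		panic(err)
	}
	for _, r := range list.APIResources {
		// Expect mutatingwebhookconfigurations and validatingwebhookconfigurations here.
		fmt.Println(r.Name)
	}
}
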
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:20:03.760: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 17 22:20:04.049: INFO: Number of nodes with available pods: 0
Dec 17 22:20:04.049: INFO: Node jerma-node is running more than one daemon pod
Dec 17 22:20:06.047: INFO: Number of nodes with available pods: 0
Dec 17 22:20:06.047: INFO: Node jerma-node is running more than one daemon pod
Dec 17 22:20:06.799: INFO: Number of nodes with available pods: 0
Dec 17 22:20:06.799: INFO: Node jerma-node is running more than one daemon pod
Dec 17 22:20:07.064: INFO: Number of nodes with available pods: 0
Dec 17 22:20:07.064: INFO: Node jerma-node is running more than one daemon pod
Dec 17 22:20:08.222: INFO: Number of nodes with available pods: 0
Dec 17 22:20:08.222: INFO: Node jerma-node is running more than one daemon pod
Dec 17 22:20:09.311: INFO: Number of nodes with available pods: 0
Dec 17 22:20:09.311: INFO: Node jerma-node is running more than one daemon pod
Dec 17 22:20:11.267: INFO: Number of nodes with available pods: 0
Dec 17 22:20:11.267: INFO: Node jerma-node is running more than one daemon pod
Dec 17 22:20:12.064: INFO: Number of nodes with available pods: 0
Dec 17 22:20:12.064: INFO: Node jerma-node is running more than one daemon pod
Dec 17 22:20:13.062: INFO: Number of nodes with available pods: 0
Dec 17 22:20:13.062: INFO: Node jerma-node is running more than one daemon pod
Dec 17 22:20:14.063: INFO: Number of nodes with available pods: 1
Dec 17 22:20:14.064: INFO: Node jerma-node is running more than one daemon pod
Dec 17 22:20:15.065: INFO: Number of nodes with available pods: 2
Dec 17 22:20:15.066: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived.
Dec 17 22:20:15.125: INFO: Number of nodes with available pods: 1
Dec 17 22:20:15.126: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 22:20:16.790: INFO: Number of nodes with available pods: 1
Dec 17 22:20:16.790: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 22:20:17.689: INFO: Number of nodes with available pods: 1
Dec 17 22:20:17.690: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 22:20:18.146: INFO: Number of nodes with available pods: 1
Dec 17 22:20:18.146: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 22:20:19.600: INFO: Number of nodes with available pods: 1
Dec 17 22:20:19.600: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 22:20:20.146: INFO: Number of nodes with available pods: 1
Dec 17 22:20:20.148: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 22:20:21.722: INFO: Number of nodes with available pods: 1
Dec 17 22:20:21.722: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 22:20:22.150: INFO: Number of nodes with available pods: 1
Dec 17 22:20:22.151: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 22:20:23.492: INFO: Number of nodes with available pods: 1
Dec 17 22:20:23.492: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 22:20:24.151: INFO: Number of nodes with available pods: 1
Dec 17 22:20:24.151: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 22:20:25.157: INFO: Number of nodes with available pods: 2
Dec 17 22:20:25.157: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Wait for the failed daemon pod to be completely deleted.
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2854, will wait for the garbage collector to delete the pods
Dec 17 22:20:25.255: INFO: Deleting DaemonSet.extensions daemon-set took: 38.566749ms
Dec 17 22:20:25.557: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.405803ms
Dec 17 22:20:36.671: INFO: Number of nodes with available pods: 0
Dec 17 22:20:36.671: INFO: Number of running nodes: 0, number of available pods: 0
Dec 17 22:20:36.678: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2854/daemonsets","resourceVersion":"9149141"},"items":null}

Dec 17 22:20:36.681: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2854/pods","resourceVersion":"9149141"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:20:36.717: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2854" for this suite.
Dec 17 22:20:42.750: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:20:42.860: INFO: namespace daemonsets-2854 deletion completed in 6.138826515s

• [SLOW TEST:39.100 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should retry creating failed daemon pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:20:42.861: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Given a ReplicationController is created
STEP: When the matched label of one of its pods changes
Dec 17 22:20:42.953: INFO: Pod name pod-release: Found 0 pods out of 1
Dec 17 22:20:47.965: INFO: Pod name pod-release: Found 1 pod out of 1
STEP: Then the pod is released
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:20:48.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-956" for this suite.
Dec 17 22:20:54.301: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:20:54.418: INFO: namespace replication-controller-956 deletion completed in 6.34044355s

• [SLOW TEST:11.557 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should release no longer matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
[k8s.io] Variable Expansion 
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:20:54.419: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test env composition
Dec 17 22:20:54.655: INFO: Waiting up to 5m0s for pod "var-expansion-16785e0b-6b85-4046-a101-b27521060224" in namespace "var-expansion-7932" to be "success or failure"
Dec 17 22:20:54.789: INFO: Pod "var-expansion-16785e0b-6b85-4046-a101-b27521060224": Phase="Pending", Reason="", readiness=false. Elapsed: 133.753855ms
Dec 17 22:20:56.813: INFO: Pod "var-expansion-16785e0b-6b85-4046-a101-b27521060224": Phase="Pending", Reason="", readiness=false. Elapsed: 2.158204251s
Dec 17 22:20:58.823: INFO: Pod "var-expansion-16785e0b-6b85-4046-a101-b27521060224": Phase="Pending", Reason="", readiness=false. Elapsed: 4.16806812s
Dec 17 22:21:00.838: INFO: Pod "var-expansion-16785e0b-6b85-4046-a101-b27521060224": Phase="Pending", Reason="", readiness=false. Elapsed: 6.183098358s
Dec 17 22:21:02.957: INFO: Pod "var-expansion-16785e0b-6b85-4046-a101-b27521060224": Phase="Pending", Reason="", readiness=false. Elapsed: 8.302335679s
Dec 17 22:21:05.004: INFO: Pod "var-expansion-16785e0b-6b85-4046-a101-b27521060224": Phase="Pending", Reason="", readiness=false. Elapsed: 10.349113127s
Dec 17 22:21:07.706: INFO: Pod "var-expansion-16785e0b-6b85-4046-a101-b27521060224": Phase="Pending", Reason="", readiness=false. Elapsed: 13.05079424s
Dec 17 22:21:09.717: INFO: Pod "var-expansion-16785e0b-6b85-4046-a101-b27521060224": Phase="Succeeded", Reason="", readiness=false. Elapsed: 15.062175958s
STEP: Saw pod success
Dec 17 22:21:09.717: INFO: Pod "var-expansion-16785e0b-6b85-4046-a101-b27521060224" satisfied condition "success or failure"
Dec 17 22:21:09.724: INFO: Trying to get logs from node jerma-node pod var-expansion-16785e0b-6b85-4046-a101-b27521060224 container dapi-container: 
STEP: delete the pod
Dec 17 22:21:09.871: INFO: Waiting for pod var-expansion-16785e0b-6b85-4046-a101-b27521060224 to disappear
Dec 17 22:21:09.877: INFO: Pod var-expansion-16785e0b-6b85-4046-a101-b27521060224 no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:21:09.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-7932" for this suite.
Dec 17 22:21:15.912: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:21:16.045: INFO: namespace var-expansion-7932 deletion completed in 6.159191158s

• [SLOW TEST:21.626 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should allow composing env vars into new env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
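
A minimal sketch of the composition being tested: an env var whose value references an earlier var with $(NAME), which the kubelet expands before starting the container. Variable names and image are illustrative.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "FOO", Value: "foo-value"},
					// $(FOO) is expanded before the container starts, because FOO
					// is defined earlier in the same env list.
					{Name: "COMPOSED", Value: "prefix-$(FOO)-suffix"},
				},
			}},
		},
	}
	fmt.Println(pod.Name)
}
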
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:21:16.047: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
[It] should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating pod
Dec 17 22:21:24.194: INFO: Pod pod-hostip-24ea564c-f7f5-4c70-943c-45e059ee5610 has hostIP: 10.96.2.170
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:21:24.194: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3336" for this suite.
Dec 17 22:21:52.235: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:21:52.358: INFO: namespace pods-3336 deletion completed in 28.15640085s

• [SLOW TEST:36.311 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should get a host IP [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
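
A sketch of reading the host IP the way the log line above reports it, via pod.Status.HostIP (assumes a client-go recent enough to take a context; pod name and namespace are illustrative):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("default").Get(context.TODO(), "pod-hostip-demo", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// status.hostIP is populated once the pod has been bound to a node.
	fmt.Println("hostIP:", pod.Status.HostIP)
}
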
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:21:52.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test override all
Dec 17 22:21:52.426: INFO: Waiting up to 5m0s for pod "client-containers-f1e27e6b-dd25-4ece-8586-4a6b6a9c6232" in namespace "containers-7691" to be "success or failure"
Dec 17 22:21:52.437: INFO: Pod "client-containers-f1e27e6b-dd25-4ece-8586-4a6b6a9c6232": Phase="Pending", Reason="", readiness=false. Elapsed: 10.253934ms
Dec 17 22:21:54.451: INFO: Pod "client-containers-f1e27e6b-dd25-4ece-8586-4a6b6a9c6232": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024296776s
Dec 17 22:21:56.462: INFO: Pod "client-containers-f1e27e6b-dd25-4ece-8586-4a6b6a9c6232": Phase="Pending", Reason="", readiness=false. Elapsed: 4.035995249s
Dec 17 22:21:58.481: INFO: Pod "client-containers-f1e27e6b-dd25-4ece-8586-4a6b6a9c6232": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054393085s
Dec 17 22:22:00.495: INFO: Pod "client-containers-f1e27e6b-dd25-4ece-8586-4a6b6a9c6232": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068499636s
STEP: Saw pod success
Dec 17 22:22:00.495: INFO: Pod "client-containers-f1e27e6b-dd25-4ece-8586-4a6b6a9c6232" satisfied condition "success or failure"
Dec 17 22:22:00.500: INFO: Trying to get logs from node jerma-node pod client-containers-f1e27e6b-dd25-4ece-8586-4a6b6a9c6232 container test-container: 
STEP: delete the pod
Dec 17 22:22:00.573: INFO: Waiting for pod client-containers-f1e27e6b-dd25-4ece-8586-4a6b6a9c6232 to disappear
Dec 17 22:22:00.635: INFO: Pod client-containers-f1e27e6b-dd25-4ece-8586-4a6b6a9c6232 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:22:00.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-7691" for this suite.
Dec 17 22:22:06.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:22:06.843: INFO: namespace containers-7691 deletion completed in 6.18676393s

• [SLOW TEST:14.484 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should be able to override the image's default command and arguments [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
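
The override works because Command replaces the image's ENTRYPOINT and Args replaces its CMD; a minimal sketch (image and strings illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "client-containers-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "docker.io/library/busybox:1.29",
				// Command overrides the image ENTRYPOINT; Args overrides its CMD.
				Command: []string{"/bin/echo"},
				Args:    []string{"override", "arguments"},
			}},
		},
	}
	fmt.Println(pod.Name)
}
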
------------------------------
SS
------------------------------
[sig-api-machinery] Watchers 
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:22:06.844: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a watch on configmaps
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: closing the watch once it receives two notifications
Dec 17 22:22:07.014: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7158 /api/v1/namespaces/watch-7158/configmaps/e2e-watch-test-watch-closed 215ac18a-67e5-4f18-a318-9407b221bc92 9149400 0 2019-12-17 22:22:07 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 17 22:22:07.014: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7158 /api/v1/namespaces/watch-7158/configmaps/e2e-watch-test-watch-closed 215ac18a-67e5-4f18-a318-9407b221bc92 9149401 0 2019-12-17 22:22:07 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time, while the watch is closed
STEP: creating a new watch on configmaps from the last resource version observed by the first watch
STEP: deleting the configmap
STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed
Dec 17 22:22:07.038: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7158 /api/v1/namespaces/watch-7158/configmaps/e2e-watch-test-watch-closed 215ac18a-67e5-4f18-a318-9407b221bc92 9149402 0 2019-12-17 22:22:07 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 17 22:22:07.038: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed  watch-7158 /api/v1/namespaces/watch-7158/configmaps/e2e-watch-test-watch-closed 215ac18a-67e5-4f18-a318-9407b221bc92 9149403 0 2019-12-17 22:22:07 +0000 UTC   map[watch-this-configmap:watch-closed-and-restarted] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:22:07.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7158" for this suite.
Dec 17 22:22:13.062: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:22:13.181: INFO: namespace watch-7158 deletion completed in 6.13597611s

• [SLOW TEST:6.337 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to restart watching from the last resource version observed by the previous watch [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
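The spec above demonstrates the watch resume contract: a client that remembers the last resourceVersion it observed can open a fresh watch from that version and be replayed every event that occurred while it was disconnected (here the second MODIFIED and the DELETED). A minimal client-go sketch of that flow, assuming the 1.16-era client-go matching this suite (methods take no context); the namespace and label selector mirror the log but are otherwise illustrative:

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the same kubeconfig the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	ns := "watch-7158"
	sel := "watch-this-configmap=watch-closed-and-restarted"

	// First watch: record the last resourceVersion seen, then close it.
	w, err := client.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{LabelSelector: sel})
	if err != nil {
		panic(err)
	}
	var lastRV string
	for i := 0; i < 2; i++ { // the test closes after two notifications
		ev := <-w.ResultChan()
		lastRV = ev.Object.(*v1.ConfigMap).ResourceVersion // sketch: assumes ConfigMap events only
		fmt.Println("Got :", ev.Type)
	}
	w.Stop()

	// Second watch: resume from the recorded version; events missed while
	// disconnected are delivered before any new ones.
	w2, err := client.CoreV1().ConfigMaps(ns).Watch(metav1.ListOptions{
		LabelSelector:   sel,
		ResourceVersion: lastRV,
	})
	if err != nil {
		panic(err)
	}
	for ev := range w2.ResultChan() {
		fmt.Println("Got :", ev.Type)
	}
}

The sketches after the later specs reuse this clientset construction and these imports.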
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:22:13.182: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name projected-configmap-test-volume-map-93ca14e5-01ac-4d11-bb59-53d4806d01d0
STEP: Creating a pod to test consume configMaps
Dec 17 22:22:13.326: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-0aeddf53-9f29-4153-827f-83191e4b1a97" in namespace "projected-5133" to be "success or failure"
Dec 17 22:22:13.334: INFO: Pod "pod-projected-configmaps-0aeddf53-9f29-4153-827f-83191e4b1a97": Phase="Pending", Reason="", readiness=false. Elapsed: 7.764196ms
Dec 17 22:22:15.343: INFO: Pod "pod-projected-configmaps-0aeddf53-9f29-4153-827f-83191e4b1a97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017353142s
Dec 17 22:22:17.356: INFO: Pod "pod-projected-configmaps-0aeddf53-9f29-4153-827f-83191e4b1a97": Phase="Pending", Reason="", readiness=false. Elapsed: 4.030094167s
Dec 17 22:22:19.365: INFO: Pod "pod-projected-configmaps-0aeddf53-9f29-4153-827f-83191e4b1a97": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039451413s
Dec 17 22:22:21.387: INFO: Pod "pod-projected-configmaps-0aeddf53-9f29-4153-827f-83191e4b1a97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061066722s
STEP: Saw pod success
Dec 17 22:22:21.387: INFO: Pod "pod-projected-configmaps-0aeddf53-9f29-4153-827f-83191e4b1a97" satisfied condition "success or failure"
Dec 17 22:22:21.393: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-0aeddf53-9f29-4153-827f-83191e4b1a97 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 17 22:22:21.454: INFO: Waiting for pod pod-projected-configmaps-0aeddf53-9f29-4153-827f-83191e4b1a97 to disappear
Dec 17 22:22:21.500: INFO: Pod pod-projected-configmaps-0aeddf53-9f29-4153-827f-83191e4b1a97 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:22:21.501: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5133" for this suite.
Dec 17 22:22:27.539: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:22:27.674: INFO: namespace projected-5133 deletion completed in 6.162882938s

• [SLOW TEST:14.492 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
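The pod built by this spec combines two features: a projected configMap volume whose key is remapped to a nested path (Items/KeyToPath), and a non-root RunAsUser proving that a non-root process can read the projected file. A sketch of that pod shape (image, UID and paths illustrative; client and imports as in the watch sketch above):

func newProjectedConfigMapPod(ns, cmName string) *v1.Pod {
	uid := int64(1000) // any non-root UID
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-configmaps", Namespace: ns},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:            "projected-configmap-volume-test",
				Image:           "docker.io/library/busybox:1.29",
				Command:         []string{"sh", "-c", "cat /etc/projected/path/to/data-2"},
				SecurityContext: &v1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []v1.VolumeMount{{Name: "cm", MountPath: "/etc/projected"}},
			}},
			Volumes: []v1.Volume{{
				Name: "cm",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						Sources: []v1.VolumeProjection{{
							ConfigMap: &v1.ConfigMapProjection{
								LocalObjectReference: v1.LocalObjectReference{Name: cmName},
								// Remap key "data-2" to a nested path inside the mount.
								Items: []v1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
							},
						}},
					},
				},
			}},
		},
	}
}

The "success or failure" polling in the log then just waits for this pod to reach Succeeded and inspects the container's output.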
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:22:27.676: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 22:22:27.839: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5c2c30b-a675-46cf-b1a9-7826a006fd87" in namespace "projected-5457" to be "success or failure"
Dec 17 22:22:27.894: INFO: Pod "downwardapi-volume-b5c2c30b-a675-46cf-b1a9-7826a006fd87": Phase="Pending", Reason="", readiness=false. Elapsed: 54.56366ms
Dec 17 22:22:29.912: INFO: Pod "downwardapi-volume-b5c2c30b-a675-46cf-b1a9-7826a006fd87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072569459s
Dec 17 22:22:31.924: INFO: Pod "downwardapi-volume-b5c2c30b-a675-46cf-b1a9-7826a006fd87": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084550165s
Dec 17 22:22:33.938: INFO: Pod "downwardapi-volume-b5c2c30b-a675-46cf-b1a9-7826a006fd87": Phase="Pending", Reason="", readiness=false. Elapsed: 6.098150001s
Dec 17 22:22:35.965: INFO: Pod "downwardapi-volume-b5c2c30b-a675-46cf-b1a9-7826a006fd87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.125833233s
STEP: Saw pod success
Dec 17 22:22:35.966: INFO: Pod "downwardapi-volume-b5c2c30b-a675-46cf-b1a9-7826a006fd87" satisfied condition "success or failure"
Dec 17 22:22:35.989: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-b5c2c30b-a675-46cf-b1a9-7826a006fd87 container client-container: 
STEP: delete the pod
Dec 17 22:22:36.291: INFO: Waiting for pod downwardapi-volume-b5c2c30b-a675-46cf-b1a9-7826a006fd87 to disappear
Dec 17 22:22:36.325: INFO: Pod downwardapi-volume-b5c2c30b-a675-46cf-b1a9-7826a006fd87 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:22:36.325: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5457" for this suite.
Dec 17 22:22:42.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:22:42.582: INFO: namespace projected-5457 deletion completed in 6.246426963s

• [SLOW TEST:14.906 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
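Here the downward API volume plugin writes the container's own memory limit into a file; because limits are set per container, the resourceFieldRef must name the container it reads from. A sketch of the relevant pod (values illustrative; needs k8s.io/apimachinery/pkg/api/resource in addition to the earlier imports):

func newDownwardAPIMemLimitPod(ns string) *v1.Pod {
	memLimit := resource.MustParse("64Mi")
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume", Namespace: ns},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Containers: []v1.Container{{
				Name:  "client-container",
				Image: "docker.io/library/busybox:1.29",
				// The projected file holds the limit in bytes ("67108864" for 64Mi).
				Command: []string{"sh", "-c", "cat /etc/podinfo/memory_limit"},
				Resources: v1.ResourceRequirements{
					Limits: v1.ResourceList{v1.ResourceMemory: memLimit},
				},
				VolumeMounts: []v1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []v1.Volume{{
				Name: "podinfo",
				VolumeSource: v1.VolumeSource{
					Projected: &v1.ProjectedVolumeSource{
						Sources: []v1.VolumeProjection{{
							DownwardAPI: &v1.DownwardAPIProjection{
								Items: []v1.DownwardAPIVolumeFile{{
									Path: "memory_limit",
									// Limits are per container, so the selector names one.
									ResourceFieldRef: &v1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "limits.memory",
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
}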
SS
------------------------------
[sig-api-machinery] Garbage collector 
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:22:42.583: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the rc
STEP: delete the rc
STEP: wait for the rc to be deleted
STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods
STEP: Gathering metrics
W1217 22:23:23.518616       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 17 22:23:23.518: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:23:23.519: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-9808" for this suite.
Dec 17 22:23:43.562: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:23:43.685: INFO: namespace gc-9808 deletion completed in 20.160459938s

• [SLOW TEST:61.103 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should orphan pods created by rc if delete options say so [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
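The orphaning behaviour is carried entirely by the delete options: PropagationPolicy=Orphan tells the garbage collector to strip the pods' ownerReferences instead of cascading the delete, which is why the test then watches for 30 seconds to confirm the pods survive. A sketch of that delete (1.16-era signature; names illustrative):

func deleteRCOrphaningPods(client kubernetes.Interface, ns, name string) error {
	// Orphan: remove owner references from dependents rather than deleting them.
	orphan := metav1.DeletePropagationOrphan
	return client.CoreV1().ReplicationControllers(ns).Delete(name, &metav1.DeleteOptions{
		PropagationPolicy: &orphan,
	})
}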
SSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook 
  should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:23:43.687: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute poststart http hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the pod with lifecycle hook
STEP: check poststart hook
STEP: delete the pod with lifecycle hook
Dec 17 22:23:59.971: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 22:23:59.976: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 22:24:01.977: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 22:24:01.989: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 22:24:03.977: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 22:24:03.989: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 22:24:05.977: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 22:24:05.986: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 22:24:07.977: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 22:24:07.988: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 22:24:09.977: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 22:24:09.992: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 22:24:11.977: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 22:24:11.984: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 22:24:13.977: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 22:24:13.988: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 22:24:15.977: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 22:24:15.988: INFO: Pod pod-with-poststart-http-hook still exists
Dec 17 22:24:17.977: INFO: Waiting for pod pod-with-poststart-http-hook to disappear
Dec 17 22:24:17.995: INFO: Pod pod-with-poststart-http-hook no longer exists
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:24:17.996: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-7720" for this suite.
Dec 17 22:24:46.102: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:24:46.204: INFO: namespace container-lifecycle-hook-7720 deletion completed in 28.199161111s

• [SLOW TEST:62.517 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute poststart http hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
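The poststart hook exercised above is an HTTPGet handler: the kubelet fires the request at the companion handler pod right after the main container starts, and a failing hook gets the container killed and restarted per its restart policy. A sketch of the hooked pod (handler address and image illustrative; v1.Handler is the 1.16-era type name, and intstr comes from k8s.io/apimachinery/pkg/util/intstr):

func newPostStartHTTPPod(ns, handlerIP string) *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-poststart-http-hook", Namespace: ns},
		Spec: v1.PodSpec{
			Containers: []v1.Container{{
				Name:  "pod-with-poststart-http-hook",
				Image: "k8s.gcr.io/pause:3.1",
				Lifecycle: &v1.Lifecycle{
					// Fired by the kubelet as soon as the container starts;
					// failure kills the container.
					PostStart: &v1.Handler{
						HTTPGet: &v1.HTTPGetAction{
							Path: "/echo?msg=poststart",
							Host: handlerIP, // the handler pod created in BeforeEach
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
}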
SSSSS
------------------------------
[sig-storage] ConfigMap 
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:24:46.205: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-test-upd-cb38e623-1c8f-4095-a4cf-7e5008f85007
STEP: Creating the pod
STEP: Waiting for pod with text data
STEP: Waiting for pod with binary data
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:24:56.652: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-2849" for this suite.
Dec 17 22:25:24.722: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:25:24.817: INFO: namespace configmap-2849 deletion completed in 28.153392412s

• [SLOW TEST:38.613 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  binary data should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
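ConfigMaps carry two payload maps, Data (UTF-8 strings) and BinaryData (arbitrary bytes, base64 on the wire); keys must be unique across the two, and both surface as files in a mounted volume, which is what the two waits above verify. A sketch (names and bytes illustrative; client as built earlier):

func createBinaryConfigMap(client kubernetes.Interface, ns string) error {
	cm := &v1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test-upd", Namespace: ns},
		// Both maps appear as files under the volume's mount path.
		Data:       map[string]string{"data": "value"},
		BinaryData: map[string][]byte{"dump": {0xde, 0xad, 0xbe, 0xef}},
	}
	_, err := client.CoreV1().ConfigMaps(ns).Create(cm)
	return err
}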
SSS
------------------------------
[sig-cli] Kubectl client Kubectl patch 
  should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:25:24.818: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should add annotations for pods in rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating Redis RC
Dec 17 22:25:24.884: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1006'
Dec 17 22:25:27.500: INFO: stderr: ""
Dec 17 22:25:27.501: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 17 22:25:28.526: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:25:28.527: INFO: Found 0 / 1
Dec 17 22:25:29.510: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:25:29.510: INFO: Found 0 / 1
Dec 17 22:25:30.522: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:25:30.523: INFO: Found 0 / 1
Dec 17 22:25:31.511: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:25:31.511: INFO: Found 0 / 1
Dec 17 22:25:32.515: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:25:32.515: INFO: Found 0 / 1
Dec 17 22:25:33.510: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:25:33.510: INFO: Found 0 / 1
Dec 17 22:25:34.520: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:25:34.521: INFO: Found 0 / 1
Dec 17 22:25:35.512: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:25:35.512: INFO: Found 1 / 1
Dec 17 22:25:35.512: INFO: WaitFor completed with timeout 5m0s.  Pods found = 1 out of 1
STEP: patching all pods
Dec 17 22:25:35.517: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:25:35.517: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
Dec 17 22:25:35.517: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod redis-master-9f5ww --namespace=kubectl-1006 -p {"metadata":{"annotations":{"x":"y"}}}'
Dec 17 22:25:35.708: INFO: stderr: ""
Dec 17 22:25:35.709: INFO: stdout: "pod/redis-master-9f5ww patched\n"
STEP: checking annotations
Dec 17 22:25:35.739: INFO: Selector matched 1 pods for map[app:redis]
Dec 17 22:25:35.739: INFO: ForEach: Found 1 pods from the filter.  Now looping through them.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:25:35.740: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1006" for this suite.
Dec 17 22:26:03.776: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:26:03.943: INFO: namespace kubectl-1006 deletion completed in 28.196309876s

• [SLOW TEST:39.126 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl patch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1346
    should add annotations for pods in rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
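The kubectl patch above is a strategic-merge patch; the client-go equivalent is a one-liner (1.16-era signature; pod name taken from the log, k8s.io/apimachinery/pkg/types assumed imported):

func annotatePod(client kubernetes.Interface, ns, name string) error {
	patch := []byte(`{"metadata":{"annotations":{"x":"y"}}}`)
	_, err := client.CoreV1().Pods(ns).Patch(name, types.StrategicMergePatchType, patch)
	return err
}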
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with privileged 
  should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:26:03.949: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
[It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 22:26:04.199: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-e390520d-8c8a-43b9-b69b-1faaf1eae4cf" in namespace "security-context-test-3102" to be "success or failure"
Dec 17 22:26:04.259: INFO: Pod "busybox-privileged-false-e390520d-8c8a-43b9-b69b-1faaf1eae4cf": Phase="Pending", Reason="", readiness=false. Elapsed: 59.134042ms
Dec 17 22:26:06.268: INFO: Pod "busybox-privileged-false-e390520d-8c8a-43b9-b69b-1faaf1eae4cf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.068726117s
Dec 17 22:26:08.279: INFO: Pod "busybox-privileged-false-e390520d-8c8a-43b9-b69b-1faaf1eae4cf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079717994s
Dec 17 22:26:10.288: INFO: Pod "busybox-privileged-false-e390520d-8c8a-43b9-b69b-1faaf1eae4cf": Phase="Pending", Reason="", readiness=false. Elapsed: 6.088045992s
Dec 17 22:26:12.303: INFO: Pod "busybox-privileged-false-e390520d-8c8a-43b9-b69b-1faaf1eae4cf": Phase="Running", Reason="", readiness=true. Elapsed: 8.103398941s
Dec 17 22:26:14.317: INFO: Pod "busybox-privileged-false-e390520d-8c8a-43b9-b69b-1faaf1eae4cf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.117586256s
Dec 17 22:26:14.318: INFO: Pod "busybox-privileged-false-e390520d-8c8a-43b9-b69b-1faaf1eae4cf" satisfied condition "success or failure"
Dec 17 22:26:14.359: INFO: Got logs for pod "busybox-privileged-false-e390520d-8c8a-43b9-b69b-1faaf1eae4cf": "ip: RTNETLINK answers: Operation not permitted\n"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:26:14.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-3102" for this suite.
Dec 17 22:26:20.499: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:26:20.620: INFO: namespace security-context-test-3102 deletion completed in 6.251525945s

• [SLOW TEST:16.672 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  When creating a pod with privileged
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:226
    should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
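The "Operation not permitted" captured from the pod log is the expected result: with Privileged=false the container keeps the default capability set, so busybox's ip(8) cannot modify the network stack. A sketch of the relevant container (command and image illustrative):

func unprivilegedNetAdminContainer() v1.Container {
	privileged := false
	return v1.Container{
		Name:  "busybox-privileged-false",
		Image: "docker.io/library/busybox:1.29",
		// Needs CAP_NET_ADMIN, which an unprivileged container lacks,
		// so ip prints "RTNETLINK answers: Operation not permitted".
		Command:         []string{"sh", "-c", "ip link add dummy0 type dummy || true"},
		SecurityContext: &v1.SecurityContext{Privileged: &privileged},
	}
}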
SSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:26:20.623: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name secret-test-686b7f94-2c11-42ae-bcc2-f05b63cbcefe
STEP: Creating a pod to test consume secrets
Dec 17 22:26:20.806: INFO: Waiting up to 5m0s for pod "pod-secrets-a586307a-88fc-4871-b628-1dac247f790a" in namespace "secrets-2415" to be "success or failure"
Dec 17 22:26:20.841: INFO: Pod "pod-secrets-a586307a-88fc-4871-b628-1dac247f790a": Phase="Pending", Reason="", readiness=false. Elapsed: 35.313609ms
Dec 17 22:26:22.857: INFO: Pod "pod-secrets-a586307a-88fc-4871-b628-1dac247f790a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05131857s
Dec 17 22:26:24.866: INFO: Pod "pod-secrets-a586307a-88fc-4871-b628-1dac247f790a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.060105287s
Dec 17 22:26:27.432: INFO: Pod "pod-secrets-a586307a-88fc-4871-b628-1dac247f790a": Phase="Pending", Reason="", readiness=false. Elapsed: 6.626629863s
Dec 17 22:26:29.448: INFO: Pod "pod-secrets-a586307a-88fc-4871-b628-1dac247f790a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.641944313s
STEP: Saw pod success
Dec 17 22:26:29.448: INFO: Pod "pod-secrets-a586307a-88fc-4871-b628-1dac247f790a" satisfied condition "success or failure"
Dec 17 22:26:29.453: INFO: Trying to get logs from node jerma-node pod pod-secrets-a586307a-88fc-4871-b628-1dac247f790a container secret-volume-test: 
STEP: delete the pod
Dec 17 22:26:29.486: INFO: Waiting for pod pod-secrets-a586307a-88fc-4871-b628-1dac247f790a to disappear
Dec 17 22:26:29.491: INFO: Pod pod-secrets-a586307a-88fc-4871-b628-1dac247f790a no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:26:29.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-2415" for this suite.
Dec 17 22:26:35.519: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:26:35.667: INFO: namespace secrets-2415 deletion completed in 6.168704545s

• [SLOW TEST:15.045 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
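defaultMode sets the permission bits the kubelet applies to every file projected from the secret; the spec mounts with a restrictive mode and checks both content and mode from inside the pod. A sketch of the volume (mode and names illustrative):

func restrictedSecretVolume(secretName string) v1.Volume {
	mode := int32(0400) // owner read-only; applied to every projected file
	return v1.Volume{
		Name: "secret-volume",
		VolumeSource: v1.VolumeSource{
			Secret: &v1.SecretVolumeSource{
				SecretName:  secretName,
				DefaultMode: &mode,
			},
		},
	}
}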
SSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:26:35.668: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 22:26:36.053: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"f84e38a5-e16f-44c2-b5e2-9f1a297bb721", Controller:(*bool)(0xc004a4b592), BlockOwnerDeletion:(*bool)(0xc004a4b593)}}
Dec 17 22:26:36.079: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"9a6ef65f-215d-43c0-aa42-4e3420ed390c", Controller:(*bool)(0xc004a4b782), BlockOwnerDeletion:(*bool)(0xc004a4b783)}}
Dec 17 22:26:36.159: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"2896d9d1-94b0-4735-8a0f-91f49e86e06d", Controller:(*bool)(0xc004a4b92a), BlockOwnerDeletion:(*bool)(0xc004a4b92b)}}
[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:26:41.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-5332" for this suite.
Dec 17 22:26:47.240: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:26:47.369: INFO: namespace gc-5332 deletion completed in 6.161217442s

• [SLOW TEST:11.701 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should not be blocked by dependency circle [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
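The three pods above form an ownership cycle (per the dumps: pod1 is owned by pod3, pod2 by pod1, pod3 by pod2), and the spec asserts that the garbage collector still makes progress instead of deadlocking on the circle. A sketch of how one link in the circle is expressed:

func linkOwner(child, owner *v1.Pod) {
	ctrl, block := true, true
	child.OwnerReferences = []metav1.OwnerReference{{
		APIVersion:         "v1",
		Kind:               "Pod",
		Name:               owner.Name,
		UID:                owner.UID, // owner references bind by UID, not just name
		Controller:         &ctrl,
		BlockOwnerDeletion: &block,
	}}
}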
SSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:26:47.371: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should be possible to delete [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:26:47.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-966" for this suite.
Dec 17 22:26:59.703: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:26:59.886: INFO: namespace kubelet-test-966 deletion completed in 12.310167564s

• [SLOW TEST:12.516 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should be possible to delete [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:26:59.888: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Performing setup for networking test in namespace pod-network-test-6750
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 17 22:27:00.007: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 17 22:27:40.401: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.44.0.2:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6750 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 22:27:40.402: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 22:27:40.732: INFO: Found all expected endpoints: [netserver-0]
Dec 17 22:27:40.740: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.32.0.4:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-6750 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 22:27:40.741: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 22:27:41.015: INFO: Found all expected endpoints: [netserver-1]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:27:41.017: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-6750" for this suite.
Dec 17 22:27:53.054: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:27:53.184: INFO: namespace pod-network-test-6750 deletion completed in 12.158472991s

• [SLOW TEST:53.297 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected secret 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:27:53.185: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name s-test-opt-del-4ead6c38-b6b0-4912-9ae2-812073be19fd
STEP: Creating secret with name s-test-opt-upd-69b54b31-99a8-4abc-8f66-f54ee48eed67
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-4ead6c38-b6b0-4912-9ae2-812073be19fd
STEP: Updating secret s-test-opt-upd-69b54b31-99a8-4abc-8f66-f54ee48eed67
STEP: Creating secret with name s-test-opt-create-dfc77ae2-5aae-42ef-8cdb-82f318046e06
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:28:05.633: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1485" for this suite.
Dec 17 22:28:33.860: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:28:34.385: INFO: namespace projected-1485 deletion completed in 28.746292178s

• [SLOW TEST:41.201 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
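The delete/update/create sequence above works without restarting the pod because the projected secret sources are marked Optional: a missing optional secret leaves the volume (and pod) healthy, and the kubelet's periodic sync later materialises whatever currently exists, which is what "waiting to observe update in volume" polls for. A sketch of one optional source (name from the log, truncated for brevity):

func optionalSecretSource(name string) v1.VolumeProjection {
	optional := true
	return v1.VolumeProjection{
		Secret: &v1.SecretProjection{
			LocalObjectReference: v1.LocalObjectReference{Name: name},
			Optional:             &optional, // an absent secret is not a pod failure
		},
	}
}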
SSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:28:34.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name secret-test-58e229df-6d44-4f34-ba6a-5717a034a656
STEP: Creating a pod to test consume secrets
Dec 17 22:28:34.490: INFO: Waiting up to 5m0s for pod "pod-secrets-e974a5f8-339b-4d1d-94ea-47c0247dd5a3" in namespace "secrets-4432" to be "success or failure"
Dec 17 22:28:34.497: INFO: Pod "pod-secrets-e974a5f8-339b-4d1d-94ea-47c0247dd5a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.397418ms
Dec 17 22:28:36.531: INFO: Pod "pod-secrets-e974a5f8-339b-4d1d-94ea-47c0247dd5a3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040403258s
Dec 17 22:28:38.549: INFO: Pod "pod-secrets-e974a5f8-339b-4d1d-94ea-47c0247dd5a3": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05838658s
Dec 17 22:28:40.565: INFO: Pod "pod-secrets-e974a5f8-339b-4d1d-94ea-47c0247dd5a3": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075041103s
Dec 17 22:28:42.585: INFO: Pod "pod-secrets-e974a5f8-339b-4d1d-94ea-47c0247dd5a3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.094714915s
STEP: Saw pod success
Dec 17 22:28:42.585: INFO: Pod "pod-secrets-e974a5f8-339b-4d1d-94ea-47c0247dd5a3" satisfied condition "success or failure"
Dec 17 22:28:42.598: INFO: Trying to get logs from node jerma-node pod pod-secrets-e974a5f8-339b-4d1d-94ea-47c0247dd5a3 container secret-volume-test: 
STEP: delete the pod
Dec 17 22:28:42.894: INFO: Waiting for pod pod-secrets-e974a5f8-339b-4d1d-94ea-47c0247dd5a3 to disappear
Dec 17 22:28:42.900: INFO: Pod pod-secrets-e974a5f8-339b-4d1d-94ea-47c0247dd5a3 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:28:42.901: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4432" for this suite.
Dec 17 22:28:48.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:28:49.043: INFO: namespace secrets-4432 deletion completed in 6.137012816s

• [SLOW TEST:14.656 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:28:49.044: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 17 22:28:50.557: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 17 22:28:52.593: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712218530, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712218530, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712218530, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712218530, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:28:54.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712218530, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712218530, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712218530, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712218530, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:28:56.611: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712218530, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712218530, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712218530, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712218530, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:28:58.615: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712218530, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712218530, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712218530, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712218530, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 22:29:01.698: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Registering the mutating configmap webhook via the AdmissionRegistration API
STEP: create a configmap that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:29:01.816: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-5137" for this suite.
Dec 17 22:29:07.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:29:08.067: INFO: namespace webhook-5137 deletion completed in 6.243356249s
STEP: Destroying namespace "webhook-5137-markers" for this suite.
Dec 17 22:29:14.254: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:29:14.354: INFO: namespace webhook-5137-markers deletion completed in 6.286722047s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:25.325 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate configmap [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
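Registration is the only webhook-specific API call in the flow above: once a MutatingWebhookConfiguration points the API server at the deployed service, every matching configmap CREATE is routed through it before persistence. A sketch of such a registration, assuming the admissionregistration.k8s.io/v1beta1 types current in this 1.16 suite (imported as admissionv1beta1 "k8s.io/api/admissionregistration/v1beta1"; names, path and rules illustrative):

func registerMutatingWebhook(client kubernetes.Interface, caBundle []byte) error {
	path := "/mutating-configmaps"
	failurePolicy := admissionv1beta1.Ignore
	cfg := &admissionv1beta1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-mutating-configmap"},
		Webhooks: []admissionv1beta1.MutatingWebhook{{
			Name: "mutate-configmaps.example.com",
			ClientConfig: admissionv1beta1.WebhookClientConfig{
				// Points at the sample-webhook service deployed above.
				Service: &admissionv1beta1.ServiceReference{
					Namespace: "webhook-5137", Name: "e2e-test-webhook", Path: &path,
				},
				CABundle: caBundle, // the server cert set up in BeforeEach
			},
			Rules: []admissionv1beta1.RuleWithOperations{{
				Operations: []admissionv1beta1.OperationType{admissionv1beta1.Create},
				Rule: admissionv1beta1.Rule{
					APIGroups: []string{""}, APIVersions: []string{"v1"}, Resources: []string{"configmaps"},
				},
			}},
			FailurePolicy: &failurePolicy,
		}},
	}
	_, err := client.AdmissionregistrationV1beta1().MutatingWebhookConfigurations().Create(cfg)
	return err
}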
SSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a pod with readOnlyRootFilesystem 
  should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:29:14.370: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
[It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 22:29:14.443: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-9006dd37-ea26-4431-a590-2adff95cad57" in namespace "security-context-test-9323" to be "success or failure"
Dec 17 22:29:14.450: INFO: Pod "busybox-readonly-false-9006dd37-ea26-4431-a590-2adff95cad57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.915639ms
Dec 17 22:29:16.463: INFO: Pod "busybox-readonly-false-9006dd37-ea26-4431-a590-2adff95cad57": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019717624s
Dec 17 22:29:18.484: INFO: Pod "busybox-readonly-false-9006dd37-ea26-4431-a590-2adff95cad57": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041097056s
Dec 17 22:29:20.526: INFO: Pod "busybox-readonly-false-9006dd37-ea26-4431-a590-2adff95cad57": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083107129s
Dec 17 22:29:22.543: INFO: Pod "busybox-readonly-false-9006dd37-ea26-4431-a590-2adff95cad57": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.099668046s
Dec 17 22:29:22.543: INFO: Pod "busybox-readonly-false-9006dd37-ea26-4431-a590-2adff95cad57" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:29:22.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-9323" for this suite.
Dec 17 22:29:28.611: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:29:28.748: INFO: namespace security-context-test-9323 deletion completed in 6.186258346s

• [SLOW TEST:14.378 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  When creating a pod with readOnlyRootFilesystem
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:165
    should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
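ReadOnlyRootFilesystem=false is the default, writable case: the spec writes to the container's root filesystem and expects the pod to succeed (the sibling test with true expects the write to be refused). The field is a *bool on the container's security context; a sketch:

func writableRootfsContext() *v1.SecurityContext {
	readOnly := false // the default, spelled out to mirror the spec
	return &v1.SecurityContext{ReadOnlyRootFilesystem: &readOnly}
}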
[k8s.io] Container Runtime blackbox test on terminated container 
  should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:29:28.749: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the container
STEP: wait for the container to reach Succeeded
STEP: get the container status
STEP: the container should be terminated
STEP: the termination message should be set
Dec 17 22:29:36.816: INFO: Expected: &{DONE} to match Container's Termination Message: DONE --
STEP: delete the container
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:29:36.858: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-5441" for this suite.
Dec 17 22:29:42.949: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:29:43.134: INFO: namespace container-runtime-5441 deletion completed in 6.25865345s

• [SLOW TEST:14.386 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
    on terminated container
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:132
      should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
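The termination message flow: the process writes to TerminationMessagePath before exiting, and the kubelet copies that file into the terminated container's status, which is what the "Expected: &{DONE}" line compares against. Running as a non-root UID at a non-default path additionally checks that the runtime makes the file writable for the container user. A sketch of the container shape (UID, image and path illustrative):

func terminationMessageContainer() v1.Container {
	uid := int64(1000)
	return v1.Container{
		Name:  "termination-message-container",
		Image: "docker.io/library/busybox:1.29",
		// Whatever the process writes here before exiting is copied by the
		// kubelet into the container's terminated state.
		Command:                []string{"sh", "-c", "echo -n DONE > /dev/termination-custom-log"},
		TerminationMessagePath: "/dev/termination-custom-log",
		SecurityContext:        &v1.SecurityContext{RunAsUser: &uid},
	}
}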
[sig-api-machinery] Servers with support for Table transformation 
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:29:43.135: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename tables
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:47
[It] should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [sig-api-machinery] Servers with support for Table transformation
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:29:43.229: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "tables-4518" for this suite.
Dec 17 22:29:49.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:29:49.409: INFO: namespace tables-4518 deletion completed in 6.172323968s

• [SLOW TEST:6.275 seconds]
[sig-api-machinery] Servers with support for Table transformation
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should return a 406 for a backend which does not implement metadata [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
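Table rendering is driven by content negotiation: the client asks for application/json;as=Table;v=v1beta1;g=meta.k8s.io in the Accept header, and a backend that cannot produce Tables must answer 406 Not Acceptable rather than silently fall back. A sketch of issuing such a request through the typed client's REST interface (1.16-era, context-free Do; resource choice illustrative):

func requestTable(client kubernetes.Interface, ns string) (int, error) {
	res := client.CoreV1().RESTClient().Get().
		Namespace(ns).Resource("pods").
		SetHeader("Accept", "application/json;as=Table;v=v1beta1;g=meta.k8s.io").
		Do()
	var code int
	res.StatusCode(&code) // 406 when the backend cannot render Tables
	return code, res.Error()
}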
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:29:49.411: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating projection with configMap that has name projected-configmap-test-upd-3ba4b1e2-18ce-4777-9190-723caf2f7754
STEP: Creating the pod
STEP: Updating configmap projected-configmap-test-upd-3ba4b1e2-18ce-4777-9190-723caf2f7754
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:29:59.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1274" for this suite.
Dec 17 22:30:27.940: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:30:28.057: INFO: namespace projected-1274 deletion completed in 28.165256884s

• [SLOW TEST:38.646 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Guestbook application 
  should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:30:28.058: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should create and stop a working application  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating all guestbook components
Dec 17 22:30:28.229: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-slave
  labels:
    app: redis
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: redis
    role: slave
    tier: backend

Dec 17 22:30:28.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7561'
Dec 17 22:30:28.876: INFO: stderr: ""
Dec 17 22:30:28.877: INFO: stdout: "service/redis-slave created\n"
Dec 17 22:30:28.879: INFO: apiVersion: v1
kind: Service
metadata:
  name: redis-master
  labels:
    app: redis
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: redis
    role: master
    tier: backend

Dec 17 22:30:28.879: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7561'
Dec 17 22:30:29.437: INFO: stderr: ""
Dec 17 22:30:29.437: INFO: stdout: "service/redis-master created\n"
Dec 17 22:30:29.438: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

Dec 17 22:30:29.438: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7561'
Dec 17 22:30:29.809: INFO: stderr: ""
Dec 17 22:30:29.810: INFO: stdout: "service/frontend created\n"
Dec 17 22:30:29.811: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: gcr.io/google-samples/gb-frontend:v6
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access environment variables to find service host
          # info, comment out the 'value: dns' line above, and uncomment the
          # line below:
          # value: env
        ports:
        - containerPort: 80

Dec 17 22:30:29.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7561'
Dec 17 22:30:30.308: INFO: stderr: ""
Dec 17 22:30:30.308: INFO: stdout: "deployment.apps/frontend created\n"
Dec 17 22:30:30.310: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: docker.io/library/redis:5.0.5-alpine
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

Dec 17 22:30:30.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7561'
Dec 17 22:30:31.041: INFO: stderr: ""
Dec 17 22:30:31.041: INFO: stdout: "deployment.apps/redis-master created\n"
Dec 17 22:30:31.043: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: redis
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: docker.io/library/redis:5.0.5-alpine
        # We are only implementing the dns option of:
        # https://github.com/kubernetes/examples/blob/97c7ed0eb6555a4b667d2877f965d392e00abc45/guestbook/redis-slave/run.sh
        command: [ "redis-server", "--slaveof", "redis-master", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        env:
        - name: GET_HOSTS_FROM
          value: dns
          # If your cluster config does not include a dns service, then to
          # instead access an environment variable to find the master
          # service's host, comment out the 'value: dns' line above, and
          # uncomment the line below:
          # value: env
        ports:
        - containerPort: 6379

Dec 17 22:30:31.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7561'
Dec 17 22:30:31.569: INFO: stderr: ""
Dec 17 22:30:31.569: INFO: stdout: "deployment.apps/redis-slave created\n"
STEP: validating guestbook app
Dec 17 22:30:31.569: INFO: Waiting for all frontend pods to be Running.
Dec 17 22:30:56.624: INFO: Waiting for frontend to serve content.
Dec 17 22:30:56.686: INFO: Trying to add a new entry to the guestbook.
Dec 17 22:30:56.724: INFO: Verifying that added entry can be retrieved.
STEP: using delete to clean up resources
Dec 17 22:30:56.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7561'
Dec 17 22:30:57.034: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 22:30:57.035: INFO: stdout: "service \"redis-slave\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 22:30:57.037: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7561'
Dec 17 22:30:57.443: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 22:30:57.443: INFO: stdout: "service \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 22:30:57.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7561'
Dec 17 22:30:57.589: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 22:30:57.589: INFO: stdout: "service \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 22:30:57.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7561'
Dec 17 22:30:57.753: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 22:30:57.753: INFO: stdout: "deployment.apps \"frontend\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 22:30:57.754: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7561'
Dec 17 22:30:57.905: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 22:30:57.905: INFO: stdout: "deployment.apps \"redis-master\" force deleted\n"
STEP: using delete to clean up resources
Dec 17 22:30:57.906: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-7561'
Dec 17 22:30:58.126: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 22:30:58.126: INFO: stdout: "deployment.apps \"redis-slave\" force deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:30:58.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7561" for this suite.
Dec 17 22:31:29.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:31:29.672: INFO: namespace kubectl-7561 deletion completed in 31.506625867s

• [SLOW TEST:61.614 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Guestbook application
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:333
    should create and stop a working application  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:31:29.673: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating replication controller my-hostname-basic-9cddeb09-d9d7-4ed9-9485-7a50cffb8d5d
Dec 17 22:31:29.832: INFO: Pod name my-hostname-basic-9cddeb09-d9d7-4ed9-9485-7a50cffb8d5d: Found 0 pods out of 1
Dec 17 22:31:34.841: INFO: Pod name my-hostname-basic-9cddeb09-d9d7-4ed9-9485-7a50cffb8d5d: Found 1 pods out of 1
Dec 17 22:31:34.841: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-9cddeb09-d9d7-4ed9-9485-7a50cffb8d5d" are running
Dec 17 22:31:38.941: INFO: Pod "my-hostname-basic-9cddeb09-d9d7-4ed9-9485-7a50cffb8d5d-rb6kc" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-17 22:31:29 +0000 UTC Reason: Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-17 22:31:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9cddeb09-d9d7-4ed9-9485-7a50cffb8d5d]} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-17 22:31:29 +0000 UTC Reason:ContainersNotReady Message:containers with unready status: [my-hostname-basic-9cddeb09-d9d7-4ed9-9485-7a50cffb8d5d]} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2019-12-17 22:31:29 +0000 UTC Reason: Message:}])
Dec 17 22:31:38.942: INFO: Trying to dial the pod
Dec 17 22:31:44.046: INFO: Controller my-hostname-basic-9cddeb09-d9d7-4ed9-9485-7a50cffb8d5d: Got expected result from replica 1 [my-hostname-basic-9cddeb09-d9d7-4ed9-9485-7a50cffb8d5d-rb6kc]: "my-hostname-basic-9cddeb09-d9d7-4ed9-9485-7a50cffb8d5d-rb6kc", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:31:44.047: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9607" for this suite.
Dec 17 22:31:50.125: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:31:50.227: INFO: namespace replication-controller-9607 deletion completed in 6.173678892s

• [SLOW TEST:20.554 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
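The controller created above is a plain ReplicationController whose single replica serves its own hostname back to the dialer ("Got expected result from replica 1"); a sketch of an equivalent manifest, where the image and port are assumptions based on that behavior and the real run appends a generated UUID to the name:

apiVersion: v1
kind: ReplicationController
metadata:
  name: my-hostname-basic          # the run uses a generated UUID suffix
spec:
  replicas: 1
  selector:
    name: my-hostname-basic
  template:
    metadata:
      labels:
        name: my-hostname-basic
    spec:
      containers:
      - name: my-hostname-basic
        image: gcr.io/kubernetes-e2e-test-images/serve-hostname:1.1   # assumed image
        ports:
        - containerPort: 9376      # assumed port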
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:31:50.228: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicationController
STEP: Ensuring resource quota status captures replication controller creation
STEP: Deleting a ReplicationController
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:32:01.404: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-7722" for this suite.
Dec 17 22:32:07.435: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:32:07.563: INFO: namespace resourcequota-7722 deletion completed in 6.15236766s

• [SLOW TEST:17.336 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replication controller. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
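The quota lifecycle above (status calculated, usage captured on creation, released on deletion) can be reproduced with an object-count quota along these lines (the name is illustrative):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: test-quota                 # illustrative name
spec:
  hard:
    replicationcontrollers: "1"    # object-count limit tracked in status.used

Creating a ReplicationController in the namespace moves status.used.replicationcontrollers to 1; deleting it releases the usage back to 0.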
SS
------------------------------
[sig-storage] Downward API volume 
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:32:07.565: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating the pod
Dec 17 22:32:16.550: INFO: Successfully updated pod "annotationupdatea4bcd81d-cea7-45da-8e06-aac92d0c64b0"
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:32:18.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-8068" for this suite.
Dec 17 22:32:46.651: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:32:46.806: INFO: namespace downward-api-8068 deletion completed in 28.187007203s

• [SLOW TEST:39.242 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should update annotations on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
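The pod under test exposes its own annotations through a downwardAPI volume, so mutating metadata.annotations ("Successfully updated pod" above) shows up in the mounted file; a minimal sketch with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: annotationupdate-demo      # illustrative name
  annotations:
    build: one                     # assumed initial annotation
spec:
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # assumed image
    command: [ "sh", "-c", "while true; do cat /etc/podinfo/annotations; sleep 5; done" ]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations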
SSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:32:46.807: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 22:32:46.944: INFO: Waiting up to 5m0s for pod "downwardapi-volume-97dbbe75-e3cc-4e4f-a83b-fbe848c1c380" in namespace "projected-7660" to be "success or failure"
Dec 17 22:32:46.955: INFO: Pod "downwardapi-volume-97dbbe75-e3cc-4e4f-a83b-fbe848c1c380": Phase="Pending", Reason="", readiness=false. Elapsed: 10.788768ms
Dec 17 22:32:48.968: INFO: Pod "downwardapi-volume-97dbbe75-e3cc-4e4f-a83b-fbe848c1c380": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023624526s
Dec 17 22:32:50.984: INFO: Pod "downwardapi-volume-97dbbe75-e3cc-4e4f-a83b-fbe848c1c380": Phase="Pending", Reason="", readiness=false. Elapsed: 4.040125117s
Dec 17 22:32:52.993: INFO: Pod "downwardapi-volume-97dbbe75-e3cc-4e4f-a83b-fbe848c1c380": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048893812s
Dec 17 22:32:55.018: INFO: Pod "downwardapi-volume-97dbbe75-e3cc-4e4f-a83b-fbe848c1c380": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074157491s
STEP: Saw pod success
Dec 17 22:32:55.018: INFO: Pod "downwardapi-volume-97dbbe75-e3cc-4e4f-a83b-fbe848c1c380" satisfied condition "success or failure"
Dec 17 22:32:55.022: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-97dbbe75-e3cc-4e4f-a83b-fbe848c1c380 container client-container: 
STEP: delete the pod
Dec 17 22:32:55.051: INFO: Waiting for pod downwardapi-volume-97dbbe75-e3cc-4e4f-a83b-fbe848c1c380 to disappear
Dec 17 22:32:55.055: INFO: Pod downwardapi-volume-97dbbe75-e3cc-4e4f-a83b-fbe848c1c380 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:32:55.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7660" for this suite.
Dec 17 22:33:01.091: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:33:01.209: INFO: namespace projected-7660 deletion completed in 6.143283687s

• [SLOW TEST:14.402 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
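The default-limit behavior checked above relies on a resourceFieldRef for limits.memory on a container that sets no memory limit; in that case the projected file contains the node's allocatable memory. A sketch, assuming busybox as the image:

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo    # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29   # assumed image
    command: [ "sh", "-c", "cat /etc/podinfo/memory_limit" ]
    # no resources.limits.memory here, so the projected value falls back
    # to the node allocatable memory
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory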
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:33:01.209: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: starting a background goroutine to produce watch events
STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:33:05.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-7114" for this suite.
Dec 17 22:33:12.178: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:33:12.293: INFO: namespace watch-7114 deletion completed in 6.271893693s

• [SLOW TEST:11.084 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should receive events on concurrent watches in same order [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl run job 
  should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:33:12.294: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1595
[It] should create a job from an image when restart is OnFailure  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 17 22:33:12.356: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-5482'
Dec 17 22:33:12.500: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 17 22:33:12.500: INFO: stdout: "job.batch/e2e-test-httpd-job created\n"
STEP: verifying the job e2e-test-httpd-job was created
[AfterEach] Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1600
Dec 17 22:33:12.576: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-5482'
Dec 17 22:33:12.698: INFO: stderr: ""
Dec 17 22:33:12.698: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:33:12.699: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5482" for this suite.
Dec 17 22:33:18.747: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:33:18.853: INFO: namespace kubectl-5482 deletion completed in 6.13284989s

• [SLOW TEST:6.559 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591
    should create a job from an image when restart is OnFailure  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
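As the deprecation warning in the output notes, --generator=job/v1 is on its way out; the object it creates is roughly equivalent to applying a Job manifest directly:

apiVersion: batch/v1
kind: Job
metadata:
  name: e2e-test-httpd-job
spec:
  template:
    spec:
      containers:
      - name: e2e-test-httpd-job
        image: docker.io/library/httpd:2.4.38-alpine
      restartPolicy: OnFailure     # matches --restart=OnFailure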
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:33:18.856: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name projected-configmap-test-volume-67767725-4736-45f9-83df-3942bc735f5e
STEP: Creating a pod to test consume configMaps
Dec 17 22:33:18.929: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-48c514f9-3296-457e-855c-c2025dc50f7c" in namespace "projected-8958" to be "success or failure"
Dec 17 22:33:18.934: INFO: Pod "pod-projected-configmaps-48c514f9-3296-457e-855c-c2025dc50f7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.214168ms
Dec 17 22:33:20.951: INFO: Pod "pod-projected-configmaps-48c514f9-3296-457e-855c-c2025dc50f7c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021188355s
Dec 17 22:33:22.960: INFO: Pod "pod-projected-configmaps-48c514f9-3296-457e-855c-c2025dc50f7c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031034326s
Dec 17 22:33:24.971: INFO: Pod "pod-projected-configmaps-48c514f9-3296-457e-855c-c2025dc50f7c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041532671s
Dec 17 22:33:26.979: INFO: Pod "pod-projected-configmaps-48c514f9-3296-457e-855c-c2025dc50f7c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.049311868s
Dec 17 22:33:28.993: INFO: Pod "pod-projected-configmaps-48c514f9-3296-457e-855c-c2025dc50f7c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.063154733s
STEP: Saw pod success
Dec 17 22:33:28.993: INFO: Pod "pod-projected-configmaps-48c514f9-3296-457e-855c-c2025dc50f7c" satisfied condition "success or failure"
Dec 17 22:33:28.998: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-48c514f9-3296-457e-855c-c2025dc50f7c container projected-configmap-volume-test: 
STEP: delete the pod
Dec 17 22:33:29.027: INFO: Waiting for pod pod-projected-configmaps-48c514f9-3296-457e-855c-c2025dc50f7c to disappear
Dec 17 22:33:29.096: INFO: Pod pod-projected-configmaps-48c514f9-3296-457e-855c-c2025dc50f7c no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:33:29.097: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8958" for this suite.
Dec 17 22:33:35.129: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:33:35.266: INFO: namespace projected-8958 deletion completed in 6.164139854s

• [SLOW TEST:16.411 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:33:35.269: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-test-volume-map-4d49f826-80fc-4259-ad26-48886fe86f72
STEP: Creating a pod to test consume configMaps
Dec 17 22:33:35.393: INFO: Waiting up to 5m0s for pod "pod-configmaps-e430ad10-fee0-4fde-8a5b-eee6e39bde51" in namespace "configmap-368" to be "success or failure"
Dec 17 22:33:35.423: INFO: Pod "pod-configmaps-e430ad10-fee0-4fde-8a5b-eee6e39bde51": Phase="Pending", Reason="", readiness=false. Elapsed: 30.401403ms
Dec 17 22:33:37.435: INFO: Pod "pod-configmaps-e430ad10-fee0-4fde-8a5b-eee6e39bde51": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041847922s
Dec 17 22:33:39.459: INFO: Pod "pod-configmaps-e430ad10-fee0-4fde-8a5b-eee6e39bde51": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065799986s
Dec 17 22:33:41.489: INFO: Pod "pod-configmaps-e430ad10-fee0-4fde-8a5b-eee6e39bde51": Phase="Pending", Reason="", readiness=false. Elapsed: 6.096418845s
Dec 17 22:33:43.499: INFO: Pod "pod-configmaps-e430ad10-fee0-4fde-8a5b-eee6e39bde51": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.105677698s
STEP: Saw pod success
Dec 17 22:33:43.499: INFO: Pod "pod-configmaps-e430ad10-fee0-4fde-8a5b-eee6e39bde51" satisfied condition "success or failure"
Dec 17 22:33:43.503: INFO: Trying to get logs from node jerma-node pod pod-configmaps-e430ad10-fee0-4fde-8a5b-eee6e39bde51 container configmap-volume-test: 
STEP: delete the pod
Dec 17 22:33:43.552: INFO: Waiting for pod pod-configmaps-e430ad10-fee0-4fde-8a5b-eee6e39bde51 to disappear
Dec 17 22:33:43.573: INFO: Pod pod-configmaps-e430ad10-fee0-4fde-8a5b-eee6e39bde51 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:33:43.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-368" for this suite.
Dec 17 22:33:49.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:33:49.752: INFO: namespace configmap-368 deletion completed in 6.169649612s

• [SLOW TEST:14.483 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
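The "mappings and Item mode" wording above refers to the per-item key-to-path remapping and file mode in a configMap volume; a sketch with illustrative names and data:

apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-demo        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: docker.io/library/busybox:1.29   # assumed image
    command: [ "sh", "-c", "cat /etc/configmap-volume/path/to/data-2" ]
    volumeMounts:
    - name: configmap-volume
      mountPath: /etc/configmap-volume
  volumes:
  - name: configmap-volume
    configMap:
      name: configmap-test-volume-map   # illustrative name
      items:
      - key: data-2                # assumed key
        path: path/to/data-2       # remapped file path
        mode: 0400                 # per-item file mode (Linux only)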
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container 
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:33:49.754: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
[It] should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod liveness-be487d7e-ce19-4dfe-b491-d054f2131499 in namespace container-probe-8261
Dec 17 22:33:59.902: INFO: Started pod liveness-be487d7e-ce19-4dfe-b491-d054f2131499 in namespace container-probe-8261
STEP: checking the pod's current state and verifying that restartCount is present
Dec 17 22:33:59.906: INFO: Initial restart count of pod liveness-be487d7e-ce19-4dfe-b491-d054f2131499 is 0
Dec 17 22:34:14.285: INFO: Restart count of pod container-probe-8261/liveness-be487d7e-ce19-4dfe-b491-d054f2131499 is now 1 (14.378649651s elapsed)
Dec 17 22:34:32.439: INFO: Restart count of pod container-probe-8261/liveness-be487d7e-ce19-4dfe-b491-d054f2131499 is now 2 (32.532931247s elapsed)
Dec 17 22:34:52.582: INFO: Restart count of pod container-probe-8261/liveness-be487d7e-ce19-4dfe-b491-d054f2131499 is now 3 (52.675585541s elapsed)
Dec 17 22:35:12.781: INFO: Restart count of pod container-probe-8261/liveness-be487d7e-ce19-4dfe-b491-d054f2131499 is now 4 (1m12.874232551s elapsed)
Dec 17 22:36:13.162: INFO: Restart count of pod container-probe-8261/liveness-be487d7e-ce19-4dfe-b491-d054f2131499 is now 5 (2m13.255345747s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:36:13.202: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-8261" for this suite.
Dec 17 22:36:19.269: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:36:19.446: INFO: namespace container-probe-8261 deletion completed in 6.23239507s

• [SLOW TEST:149.692 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should have monotonically increasing restart count [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
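The monotonically increasing restart counts above, with the gap between restarts growing as the kubelet backs off, are driven by a liveness probe that is guaranteed to start failing; a minimal sketch of such a pod, with assumed names and image:

apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo              # illustrative name
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29   # assumed image
    command: [ "sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600" ]
    livenessProbe:
      exec:
        command: [ "cat", "/tmp/health" ]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1

After ten seconds the probe command starts failing, the kubelet kills and restarts the container, and each crash-loop iteration waits longer before the next restart, which matches the 14s, 32s, 52s, 72s, 133s spacing in the log.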
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:36:19.448: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test override arguments
Dec 17 22:36:19.583: INFO: Waiting up to 5m0s for pod "client-containers-ae5c626e-f4ec-4e1f-b9af-7f39bda4a7a9" in namespace "containers-154" to be "success or failure"
Dec 17 22:36:19.603: INFO: Pod "client-containers-ae5c626e-f4ec-4e1f-b9af-7f39bda4a7a9": Phase="Pending", Reason="", readiness=false. Elapsed: 20.382664ms
Dec 17 22:36:21.610: INFO: Pod "client-containers-ae5c626e-f4ec-4e1f-b9af-7f39bda4a7a9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027154855s
Dec 17 22:36:23.624: INFO: Pod "client-containers-ae5c626e-f4ec-4e1f-b9af-7f39bda4a7a9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041574027s
Dec 17 22:36:25.635: INFO: Pod "client-containers-ae5c626e-f4ec-4e1f-b9af-7f39bda4a7a9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051765423s
Dec 17 22:36:27.642: INFO: Pod "client-containers-ae5c626e-f4ec-4e1f-b9af-7f39bda4a7a9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059564025s
STEP: Saw pod success
Dec 17 22:36:27.643: INFO: Pod "client-containers-ae5c626e-f4ec-4e1f-b9af-7f39bda4a7a9" satisfied condition "success or failure"
Dec 17 22:36:27.645: INFO: Trying to get logs from node jerma-node pod client-containers-ae5c626e-f4ec-4e1f-b9af-7f39bda4a7a9 container test-container: 
STEP: delete the pod
Dec 17 22:36:27.743: INFO: Waiting for pod client-containers-ae5c626e-f4ec-4e1f-b9af-7f39bda4a7a9 to disappear
Dec 17 22:36:27.866: INFO: Pod client-containers-ae5c626e-f4ec-4e1f-b9af-7f39bda4a7a9 no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:36:27.866: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-154" for this suite.
Dec 17 22:36:33.913: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:36:34.063: INFO: namespace containers-154 deletion completed in 6.185608466s

• [SLOW TEST:14.616 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
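Overriding "docker cmd" means setting args on the container, which replaces the image's default CMD while leaving its ENTRYPOINT alone; a sketch using busybox (an assumption: since that image ships no ENTRYPOINT, the args become the full command line):

apiVersion: v1
kind: Pod
metadata:
  name: client-containers-demo     # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29   # assumed image
    args: [ "echo", "override", "arguments" ]   # replaces the image's default CMD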
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Watchers 
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:36:34.066: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a watch on configmaps with a certain label
STEP: creating a new configmap
STEP: modifying the configmap once
STEP: changing the label value of the configmap
STEP: Expecting to observe a delete notification for the watched object
Dec 17 22:36:34.195: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2727 /api/v1/namespaces/watch-2727/configmaps/e2e-watch-test-label-changed 0d6c253a-ea62-4041-b6d6-29086362da13 9151871 0 2019-12-17 22:36:34 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{},BinaryData:map[string][]byte{},}
Dec 17 22:36:34.196: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2727 /api/v1/namespaces/watch-2727/configmaps/e2e-watch-test-label-changed 0d6c253a-ea62-4041-b6d6-29086362da13 9151872 0 2019-12-17 22:36:34 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
Dec 17 22:36:34.196: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2727 /api/v1/namespaces/watch-2727/configmaps/e2e-watch-test-label-changed 0d6c253a-ea62-4041-b6d6-29086362da13 9151873 0 2019-12-17 22:36:34 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},}
STEP: modifying the configmap a second time
STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements
STEP: changing the label value of the configmap back
STEP: modifying the configmap a third time
STEP: deleting the configmap
STEP: Expecting to observe an add notification for the watched object when the label value was restored
Dec 17 22:36:44.257: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2727 /api/v1/namespaces/watch-2727/configmaps/e2e-watch-test-label-changed 0d6c253a-ea62-4041-b6d6-29086362da13 9151888 0 2019-12-17 22:36:34 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},}
Dec 17 22:36:44.258: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2727 /api/v1/namespaces/watch-2727/configmaps/e2e-watch-test-label-changed 0d6c253a-ea62-4041-b6d6-29086362da13 9151889 0 2019-12-17 22:36:34 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
Dec 17 22:36:44.258: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed  watch-2727 /api/v1/namespaces/watch-2727/configmaps/e2e-watch-test-label-changed 0d6c253a-ea62-4041-b6d6-29086362da13 9151890 0 2019-12-17 22:36:34 +0000 UTC   map[watch-this-configmap:label-changed-and-restored] map[] [] []  []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},}
[AfterEach] [sig-api-machinery] Watchers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:36:44.258: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "watch-2727" for this suite.
Dec 17 22:36:50.352: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:36:50.542: INFO: namespace watch-2727 deletion completed in 6.274630559s

• [SLOW TEST:16.476 seconds]
[sig-api-machinery] Watchers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should observe an object deletion if it stops meeting the requirements of the selector [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
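The watch in the test above is opened with a label selector (watch-this-configmap=label-changed-and-restored, per the object dumps), so changing the label away produces a synthetic DELETED event even though the configmap still exists, and restoring it produces a fresh ADDED. The watched object, reconstructed from the log:

apiVersion: v1
kind: ConfigMap
metadata:
  name: e2e-watch-test-label-changed
  labels:
    watch-this-configmap: label-changed-and-restored
data:
  mutation: "1"                    # incremented on each modification step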
SSSSSSSS
------------------------------
[sig-storage] Secrets 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:36:50.543: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name s-test-opt-del-41b14d0f-ff43-4f57-87d8-6a011c189527
STEP: Creating secret with name s-test-opt-upd-d8a81444-7df8-4d68-b932-2b882d52801c
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-41b14d0f-ff43-4f57-87d8-6a011c189527
STEP: Updating secret s-test-opt-upd-d8a81444-7df8-4d68-b932-2b882d52801c
STEP: Creating secret with name s-test-opt-create-fac3c425-df47-4563-bd44-35e18b837fe3
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:38:20.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6987" for this suite.
Dec 17 22:38:48.064: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:38:48.196: INFO: namespace secrets-6987 deletion completed in 28.164147041s

• [SLOW TEST:117.654 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
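The "optional updates" here cover secret volume sources marked optional: true, which let the pod start before the secret exists and pick it up (or observe deletions) later; a sketch using the created secret's name from this run, everything else assumed:

apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo           # illustrative name
spec:
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29   # assumed image
    command: [ "sh", "-c", "sleep 3600" ]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: s-test-opt-create-fac3c425-df47-4563-bd44-35e18b837fe3
      optional: true               # pod starts even while the secret is absent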
SSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] PreStop 
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:38:48.197: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename prestop
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:173
[It] should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating server pod server in namespace prestop-4780
STEP: Waiting for pods to come up.
STEP: Creating tester pod tester in namespace prestop-4780
STEP: Deleting pre-stop pod
Dec 17 22:39:09.470: INFO: Saw: {
	"Hostname": "server",
	"Sent": null,
	"Received": {
		"prestop": 1
	},
	"Errors": null,
	"Log": [
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.",
		"default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up."
	],
	"StillContactingPeers": true
}
STEP: Deleting the server pod
[AfterEach] [k8s.io] [sig-node] PreStop
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:39:09.491: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "prestop-4780" for this suite.
Dec 17 22:39:49.543: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:39:49.662: INFO: namespace prestop-4780 deletion completed in 40.145782228s

• [SLOW TEST:61.465 seconds]
[k8s.io] [sig-node] PreStop
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should call prestop when killing a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
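The "prestop": 1 entry in the JSON above is the server pod counting a hit from the tester pod's preStop hook, fired when the tester is deleted; a hedged sketch of such a tester pod, where the hook target URL and all names are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: tester                     # illustrative name
spec:
  containers:
  - name: tester
    image: docker.io/library/busybox:1.29   # assumed image
    command: [ "sh", "-c", "sleep 600" ]
    lifecycle:
      preStop:
        exec:
          # assumed endpoint: the server pod records each hit under "prestop"
          command: [ "wget", "-qO-", "http://server:8080/write" ]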
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:39:49.664: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 22:39:49.744: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with any unknown properties
Dec 17 22:39:53.789: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4760 create -f -'
Dec 17 22:39:57.212: INFO: stderr: ""
Dec 17 22:39:57.212: INFO: stdout: "e2e-test-crd-publish-openapi-1472-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Dec 17 22:39:57.212: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4760 delete e2e-test-crd-publish-openapi-1472-crds test-cr'
Dec 17 22:39:57.481: INFO: stderr: ""
Dec 17 22:39:57.481: INFO: stdout: "e2e-test-crd-publish-openapi-1472-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
Dec 17 22:39:57.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4760 apply -f -'
Dec 17 22:39:57.978: INFO: stderr: ""
Dec 17 22:39:57.978: INFO: stdout: "e2e-test-crd-publish-openapi-1472-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n"
Dec 17 22:39:57.978: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-4760 delete e2e-test-crd-publish-openapi-1472-crds test-cr'
Dec 17 22:39:58.149: INFO: stderr: ""
Dec 17 22:39:58.150: INFO: stdout: "e2e-test-crd-publish-openapi-1472-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n"
STEP: kubectl explain works to explain CR
Dec 17 22:39:58.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-1472-crds'
Dec 17 22:39:58.664: INFO: stderr: ""
Dec 17 22:39:58.665: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-1472-crd\nVERSION:  crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n     preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n   apiVersion\t\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t\n     Specification of Waldo\n\n   status\t\n     Status of Waldo\n\n"
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:40:03.289: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-4760" for this suite.
Dec 17 22:40:09.317: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:40:09.458: INFO: namespace crd-publish-openapi-4760 deletion completed in 6.163682613s

• [SLOW TEST:19.795 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD preserving unknown fields in an embedded object [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
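The schema being published above ("Specification of Waldo" / "Status of Waldo" in the explain output) marks a nested object as an embedded resource that keeps unknown fields, which is why client-side validation accepts requests with arbitrary properties; a sketch of the relevant CRD fragment (the group name comes from the run, everything else is assumed):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: e2e-test-crds.crd-publish-openapi-test-unknown-in-nested.example.com   # illustrative
spec:
  group: crd-publish-openapi-test-unknown-in-nested.example.com
  scope: Namespaced
  names:
    plural: e2e-test-crds          # assumed names
    singular: e2e-test-crd
    kind: E2eTestCrd
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            description: Specification of Waldo
            type: object
            x-kubernetes-embedded-resource: true
            x-kubernetes-preserve-unknown-fields: true
          status:
            description: Status of Waldo
            type: object
            x-kubernetes-preserve-unknown-fields: true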
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:40:09.459: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name secret-test-map-b2f2f2ca-e7ce-4251-bf2b-b6f34a203362
STEP: Creating a pod to test consume secrets
Dec 17 22:40:09.609: INFO: Waiting up to 5m0s for pod "pod-secrets-e05cdcde-89c3-4a6c-a6b8-d6aa19923538" in namespace "secrets-1477" to be "success or failure"
Dec 17 22:40:09.618: INFO: Pod "pod-secrets-e05cdcde-89c3-4a6c-a6b8-d6aa19923538": Phase="Pending", Reason="", readiness=false. Elapsed: 8.604444ms
Dec 17 22:40:11.636: INFO: Pod "pod-secrets-e05cdcde-89c3-4a6c-a6b8-d6aa19923538": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027217618s
Dec 17 22:40:13.648: INFO: Pod "pod-secrets-e05cdcde-89c3-4a6c-a6b8-d6aa19923538": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039051659s
Dec 17 22:40:15.698: INFO: Pod "pod-secrets-e05cdcde-89c3-4a6c-a6b8-d6aa19923538": Phase="Pending", Reason="", readiness=false. Elapsed: 6.089348603s
Dec 17 22:40:17.710: INFO: Pod "pod-secrets-e05cdcde-89c3-4a6c-a6b8-d6aa19923538": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.101216107s
STEP: Saw pod success
Dec 17 22:40:17.711: INFO: Pod "pod-secrets-e05cdcde-89c3-4a6c-a6b8-d6aa19923538" satisfied condition "success or failure"
Dec 17 22:40:17.716: INFO: Trying to get logs from node jerma-node pod pod-secrets-e05cdcde-89c3-4a6c-a6b8-d6aa19923538 container secret-volume-test: 
STEP: delete the pod
Dec 17 22:40:18.118: INFO: Waiting for pod pod-secrets-e05cdcde-89c3-4a6c-a6b8-d6aa19923538 to disappear
Dec 17 22:40:18.176: INFO: Pod pod-secrets-e05cdcde-89c3-4a6c-a6b8-d6aa19923538 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:40:18.177: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1477" for this suite.
Dec 17 22:40:24.242: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:40:24.362: INFO: namespace secrets-1477 deletion completed in 6.180048539s

• [SLOW TEST:14.903 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSS
------------------------------
[sig-api-machinery] Aggregator 
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:40:24.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename aggregator
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:77
Dec 17 22:40:24.493: INFO: >>> kubeConfig: /root/.kube/config
[It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Registering the sample API server.
Dec 17 22:40:25.210: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set
Dec 17 22:40:27.525: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:40:29.610: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:40:31.535: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:40:33.535: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:40:35.534: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219225, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-8447597c78\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:40:38.288: INFO: Waited 736.357307ms for the sample-apiserver to be ready to handle requests.
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:68
[AfterEach] [sig-api-machinery] Aggregator
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:40:39.295: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "aggregator-3644" for this suite.
Dec 17 22:40:45.325: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:40:45.468: INFO: namespace aggregator-3644 deletion completed in 6.169106768s

• [SLOW TEST:21.106 seconds]
[sig-api-machinery] Aggregator
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
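
The Aggregator test registers a sample extension apiserver behind an APIService object and polls its Deployment until it can serve. The same machinery can be inspected on a live cluster; the wardle group/version below is what the sample server registers in this release, though the exact name may differ in others:

kubectl get apiservices                               # all aggregated and built-in API registrations
kubectl describe apiservice v1alpha1.wardle.k8s.io    # the Available condition explains (un)readiness
kubectl get --raw /apis | grep wardle                 # the group appears in the root discovery document
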
SSSSS
------------------------------
[sig-storage] ConfigMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:40:45.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name cm-test-opt-del-1ae9c1b9-4222-4366-ad77-5a7ffc217bf8
STEP: Creating configMap with name cm-test-opt-upd-64217c4e-dd62-4d33-a0b3-129bbe98e835
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-1ae9c1b9-4222-4366-ad77-5a7ffc217bf8
STEP: Updating configmap cm-test-opt-upd-64217c4e-dd62-4d33-a0b3-129bbe98e835
STEP: Creating configMap with name cm-test-opt-create-b34c39e3-e042-448b-89b5-4d18a7716763
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:42:00.547: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-6868" for this suite.
Dec 17 22:42:12.615: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:42:12.773: INFO: namespace configmap-6868 deletion completed in 12.212177939s

• [SLOW TEST:87.305 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
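
The optional-configMap test depends on three behaviors: a volume for a missing-but-optional configMap mounts empty, deleting the configMap empties it again, and updates propagate into the running pod only after the kubelet's next sync, which is why the test spends over a minute "waiting to observe update in volume". A sketch with illustrative names:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: optional-cm-demo
spec:
  containers:
  - name: watcher
    image: busybox:1.29
    command: ["sh", "-c", "while true; do cat /etc/cm/key 2>/dev/null || echo missing; sleep 5; done"]
    volumeMounts:
    - name: cm
      mountPath: /etc/cm
  volumes:
  - name: cm
    configMap:
      name: maybe-missing
      optional: true             # the pod starts even though the configMap does not exist yet
EOF
kubectl create configmap maybe-missing --from-literal=key=v1
kubectl patch configmap maybe-missing -p '{"data":{"key":"v2"}}'
kubectl logs -f optional-cm-demo   # prints "missing", then v1, then v2, each after a kubelet sync period
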
SSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits 
  should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:42:12.776: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:43:04.221: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8379" for this suite.
Dec 17 22:43:10.268: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:43:10.402: INFO: namespace container-runtime-8379 deletion completed in 6.169309857s

• [SLOW TEST:57.627 seconds]
[k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  blackbox test
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39
    when starting a container that exits
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:40
      should run with the expected status [NodeConformance] [Conformance]
      /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
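
In the blackbox test above, the terminate-cmd-rpa/-rpof/-rpn containers run the same exiting command under restartPolicy Always, OnFailure and Never respectively, then assert the resulting RestartCount, Phase, Ready condition and State. The same observations by hand (pod name and image are illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: terminate-demo
spec:
  restartPolicy: OnFailure       # try Always and Never for the other two variants
  containers:
  - name: terminate-cmd
    image: busybox:1.29
    command: ["sh", "-c", "exit 1"]
EOF
kubectl get pod terminate-demo -o jsonpath='{.status.phase} restarts={.status.containerStatuses[0].restartCount}{"\n"}'
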
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update 
  should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:43:10.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1499
[It] should support rolling-update to same image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: running the image docker.io/library/httpd:2.4.38-alpine
Dec 17 22:43:10.606: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-7168'
Dec 17 22:43:10.733: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
Dec 17 22:43:10.733: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
Dec 17 22:43:10.753: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
Dec 17 22:43:10.763: INFO: scanned /root for discovery docs: 
Dec 17 22:43:10.763: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-7168'
Dec 17 22:43:31.967: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
Dec 17 22:43:31.968: INFO: stdout: "Created e2e-test-httpd-rc-12f79fa1e3536de901ef8db7cd399f69\nScaling up e2e-test-httpd-rc-12f79fa1e3536de901ef8db7cd399f69 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-12f79fa1e3536de901ef8db7cd399f69 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-12f79fa1e3536de901ef8db7cd399f69 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
Dec 17 22:43:31.968: INFO: stdout: "Created e2e-test-httpd-rc-12f79fa1e3536de901ef8db7cd399f69\nScaling up e2e-test-httpd-rc-12f79fa1e3536de901ef8db7cd399f69 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-12f79fa1e3536de901ef8db7cd399f69 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-12f79fa1e3536de901ef8db7cd399f69 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
Dec 17 22:43:31.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7168'
Dec 17 22:43:32.186: INFO: stderr: ""
Dec 17 22:43:32.186: INFO: stdout: "e2e-test-httpd-rc-12f79fa1e3536de901ef8db7cd399f69-kd9lx e2e-test-httpd-rc-tnnlx "
STEP: Replicas for run=e2e-test-httpd-rc: expected=1 actual=2
Dec 17 22:43:37.188: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-7168'
Dec 17 22:43:37.311: INFO: stderr: ""
Dec 17 22:43:37.312: INFO: stdout: "e2e-test-httpd-rc-12f79fa1e3536de901ef8db7cd399f69-kd9lx "
Dec 17 22:43:37.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-12f79fa1e3536de901ef8db7cd399f69-kd9lx -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-7168'
Dec 17 22:43:37.432: INFO: stderr: ""
Dec 17 22:43:37.432: INFO: stdout: "true"
Dec 17 22:43:37.432: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-12f79fa1e3536de901ef8db7cd399f69-kd9lx -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-7168'
Dec 17 22:43:37.563: INFO: stderr: ""
Dec 17 22:43:37.563: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
Dec 17 22:43:37.563: INFO: e2e-test-httpd-rc-12f79fa1e3536de901ef8db7cd399f69-kd9lx is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1505
Dec 17 22:43:37.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-7168'
Dec 17 22:43:37.713: INFO: stderr: ""
Dec 17 22:43:37.713: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:43:37.714: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-7168" for this suite.
Dec 17 22:43:49.765: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:43:49.965: INFO: namespace kubectl-7168 deletion completed in 12.23911793s

• [SLOW TEST:39.560 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1494
    should support rolling-update to same image  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
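
As the stderr lines record, both "kubectl run --generator=run/v1" and "kubectl rolling-update" were deprecated in this release and have since been removed. The equivalent flow today replaces the ReplicationController with a Deployment driven by "kubectl rollout" (deployment name illustrative):

kubectl create deployment e2e-test-httpd --image=docker.io/library/httpd:2.4.38-alpine
kubectl rollout restart deployment/e2e-test-httpd     # re-roll the same image, as rolling-update-to-same-image did
kubectl rollout status deployment/e2e-test-httpd
kubectl set image deployment/e2e-test-httpd httpd=docker.io/library/httpd:2.4.39-alpine   # a real image change
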
SSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:43:49.966: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-test-volume-d12cbd24-03c5-489e-bbdf-fd345867d519
STEP: Creating a pod to test consume configMaps
Dec 17 22:43:50.073: INFO: Waiting up to 5m0s for pod "pod-configmaps-406b5cff-13f9-43c8-b34d-c648155bed2e" in namespace "configmap-4396" to be "success or failure"
Dec 17 22:43:50.083: INFO: Pod "pod-configmaps-406b5cff-13f9-43c8-b34d-c648155bed2e": Phase="Pending", Reason="", readiness=false. Elapsed: 10.437014ms
Dec 17 22:43:52.102: INFO: Pod "pod-configmaps-406b5cff-13f9-43c8-b34d-c648155bed2e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028665624s
Dec 17 22:43:54.108: INFO: Pod "pod-configmaps-406b5cff-13f9-43c8-b34d-c648155bed2e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034986751s
Dec 17 22:43:56.121: INFO: Pod "pod-configmaps-406b5cff-13f9-43c8-b34d-c648155bed2e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.048221015s
Dec 17 22:43:58.133: INFO: Pod "pod-configmaps-406b5cff-13f9-43c8-b34d-c648155bed2e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059875616s
STEP: Saw pod success
Dec 17 22:43:58.133: INFO: Pod "pod-configmaps-406b5cff-13f9-43c8-b34d-c648155bed2e" satisfied condition "success or failure"
Dec 17 22:43:58.139: INFO: Trying to get logs from node jerma-node pod pod-configmaps-406b5cff-13f9-43c8-b34d-c648155bed2e container configmap-volume-test: 
STEP: delete the pod
Dec 17 22:43:58.218: INFO: Waiting for pod pod-configmaps-406b5cff-13f9-43c8-b34d-c648155bed2e to disappear
Dec 17 22:43:58.228: INFO: Pod pod-configmaps-406b5cff-13f9-43c8-b34d-c648155bed2e no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:43:58.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-4396" for this suite.
Dec 17 22:44:04.273: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:44:04.419: INFO: namespace configmap-4396 deletion completed in 6.184817461s

• [SLOW TEST:14.454 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:44:04.422: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 22:44:04.519: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2f29f491-2fa3-4a5a-8853-b85ea81a4ab9" in namespace "downward-api-1772" to be "success or failure"
Dec 17 22:44:04.605: INFO: Pod "downwardapi-volume-2f29f491-2fa3-4a5a-8853-b85ea81a4ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 85.588893ms
Dec 17 22:44:06.645: INFO: Pod "downwardapi-volume-2f29f491-2fa3-4a5a-8853-b85ea81a4ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125755532s
Dec 17 22:44:08.661: INFO: Pod "downwardapi-volume-2f29f491-2fa3-4a5a-8853-b85ea81a4ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.142103827s
Dec 17 22:44:10.672: INFO: Pod "downwardapi-volume-2f29f491-2fa3-4a5a-8853-b85ea81a4ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.152345476s
Dec 17 22:44:12.682: INFO: Pod "downwardapi-volume-2f29f491-2fa3-4a5a-8853-b85ea81a4ab9": Phase="Pending", Reason="", readiness=false. Elapsed: 8.162760074s
Dec 17 22:44:14.695: INFO: Pod "downwardapi-volume-2f29f491-2fa3-4a5a-8853-b85ea81a4ab9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.175860811s
STEP: Saw pod success
Dec 17 22:44:14.695: INFO: Pod "downwardapi-volume-2f29f491-2fa3-4a5a-8853-b85ea81a4ab9" satisfied condition "success or failure"
Dec 17 22:44:14.702: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-2f29f491-2fa3-4a5a-8853-b85ea81a4ab9 container client-container: 
STEP: delete the pod
Dec 17 22:44:14.806: INFO: Waiting for pod downwardapi-volume-2f29f491-2fa3-4a5a-8853-b85ea81a4ab9 to disappear
Dec 17 22:44:14.824: INFO: Pod downwardapi-volume-2f29f491-2fa3-4a5a-8853-b85ea81a4ab9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:44:14.824: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1772" for this suite.
Dec 17 22:44:20.848: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:44:20.978: INFO: namespace downward-api-1772 deletion completed in 6.149026097s

• [SLOW TEST:16.556 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
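
The downwardAPI volume in this test exposes the container's own resource limits as files through resourceFieldRef, with the divisor selecting the unit. Sketch (names illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: downward-limit-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m            # report the limit in millicores
EOF
kubectl logs downward-limit-demo   # prints 500 (500m divided by the 1m divisor)
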
SSS
------------------------------
[k8s.io] Probing container 
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:44:20.979: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
[It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod liveness-58ac4c6a-7fc3-47b6-87b6-920a48b696fa in namespace container-probe-7800
Dec 17 22:44:29.192: INFO: Started pod liveness-58ac4c6a-7fc3-47b6-87b6-920a48b696fa in namespace container-probe-7800
STEP: checking the pod's current state and verifying that restartCount is present
Dec 17 22:44:29.197: INFO: Initial restart count of pod liveness-58ac4c6a-7fc3-47b6-87b6-920a48b696fa is 0
Dec 17 22:44:51.324: INFO: Restart count of pod container-probe-7800/liveness-58ac4c6a-7fc3-47b6-87b6-920a48b696fa is now 1 (22.126753729s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:44:51.379: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-7800" for this suite.
Dec 17 22:44:57.425: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:44:57.692: INFO: namespace container-probe-7800 deletion completed in 6.297393002s

• [SLOW TEST:36.714 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
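
The probe test starts a server whose /healthz begins failing after a few seconds and asserts that restartCount climbs (here from 0 to 1 in ~22s). A hand-run sketch using the classic docs liveness image; the image path and probe numbers are illustrative:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness   # serves /healthz OK for ~10s, then returns 500s
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
      failureThreshold: 1
EOF
sleep 30
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}{"\n"}'   # expect >= 1
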
SSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:44:57.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
[It] should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
STEP: setting up watch
STEP: submitting the pod to kubernetes
Dec 17 22:44:57.836: INFO: observed the pod list
STEP: verifying the pod is in kubernetes
STEP: verifying pod creation was observed
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
STEP: verifying pod deletion was observed
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:45:16.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-1257" for this suite.
Dec 17 22:45:22.700: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:45:22.831: INFO: namespace pods-1257 deletion completed in 6.152986657s

• [SLOW TEST:25.138 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should be submitted and removed [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
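
The submit/remove test sets up a watch and verifies that creation, the kubelet's graceful-termination notice and the final deletion are all observed. Roughly the same can be seen from a shell (pod name and image illustrative):

kubectl get pods -w &                              # watch in the background
kubectl run pod-demo --image=nginx:1.14-alpine --restart=Never
kubectl delete pod pod-demo --grace-period=30      # sets deletionTimestamp; kubelet begins graceful shutdown
# the watch prints the pod appearing, Running, Terminating, then disappearing
kill %1
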
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:45:22.833: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 17 22:45:23.672: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 17 22:45:25.697: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219523, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219523, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219523, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219523, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:45:27.707: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219523, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219523, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219523, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219523, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:45:29.751: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219523, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219523, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219523, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219523, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:45:31.708: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219523, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219523, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219523, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219523, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 22:45:34.825: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Registering the webhook via the AdmissionRegistration API
STEP: create a pod that should be denied by the webhook
STEP: create a pod that causes the webhook to hang
STEP: create a configmap that should be denied by the webhook
STEP: create a configmap that should be admitted by the webhook
STEP: update (PUT) the admitted configmap to a non-compliant one, which should be rejected by the webhook
STEP: update (PATCH) the admitted configmap to a non-compliant one, which should be rejected by the webhook
STEP: create a namespace that bypasses the webhook
STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:45:45.167: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3274" for this suite.
Dec 17 22:45:53.219: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:45:53.368: INFO: namespace webhook-3274 deletion completed in 8.194388969s
STEP: Destroying namespace "webhook-3274-markers" for this suite.
Dec 17 22:45:59.411: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:45:59.562: INFO: namespace webhook-3274-markers deletion completed in 6.193533242s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:36.754 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to deny pod and configmap creation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
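
For the deny test, the interesting knobs live on the ValidatingWebhookConfiguration: rules select the resources, failurePolicy and timeoutSeconds govern the "webhook that hangs" case, and namespaceSelector is what lets the whitelisted namespace bypass the webhook. Inspection commands (the configuration name is a placeholder):

kubectl get validatingwebhookconfigurations
kubectl get validatingwebhookconfiguration <name> -o yaml   # rules, failurePolicy, timeoutSeconds, namespaceSelector
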
SSSSSSSSSS
------------------------------
[sig-apps] Job 
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:45:59.589: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: delete a job
STEP: deleting Job.batch foo in namespace job-5983, will wait for the garbage collector to delete the pods
Dec 17 22:46:11.826: INFO: Deleting Job.batch foo took: 16.278497ms
Dec 17 22:46:11.927: INFO: Terminating Job.batch foo pods took: 100.882266ms
STEP: Ensuring job was deleted
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:46:56.933: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-5983" for this suite.
Dec 17 22:47:02.979: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:47:03.111: INFO: namespace job-5983 deletion completed in 6.173183218s

• [SLOW TEST:63.523 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should delete a job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
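
"Ensuring active pods == parallelism" and the garbage-collected pods can be checked directly, since a Job labels its pods with job-name (manifest illustrative):

kubectl apply -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: foo
spec:
  parallelism: 2
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: c
        image: busybox:1.29
        command: ["sh", "-c", "sleep 600"]
EOF
kubectl get pods -l job-name=foo   # two active pods, matching parallelism
kubectl delete job foo             # the garbage collector then deletes the pods
kubectl get pods -l job-name=foo
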
[sig-node] Downward API 
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:47:03.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward api env vars
Dec 17 22:47:03.241: INFO: Waiting up to 5m0s for pod "downward-api-a8ad1045-32f4-460f-8d94-906c8131a0dc" in namespace "downward-api-759" to be "success or failure"
Dec 17 22:47:03.346: INFO: Pod "downward-api-a8ad1045-32f4-460f-8d94-906c8131a0dc": Phase="Pending", Reason="", readiness=false. Elapsed: 103.977075ms
Dec 17 22:47:05.356: INFO: Pod "downward-api-a8ad1045-32f4-460f-8d94-906c8131a0dc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114498095s
Dec 17 22:47:07.366: INFO: Pod "downward-api-a8ad1045-32f4-460f-8d94-906c8131a0dc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.124462799s
Dec 17 22:47:09.375: INFO: Pod "downward-api-a8ad1045-32f4-460f-8d94-906c8131a0dc": Phase="Pending", Reason="", readiness=false. Elapsed: 6.132953014s
Dec 17 22:47:11.385: INFO: Pod "downward-api-a8ad1045-32f4-460f-8d94-906c8131a0dc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.143778562s
STEP: Saw pod success
Dec 17 22:47:11.386: INFO: Pod "downward-api-a8ad1045-32f4-460f-8d94-906c8131a0dc" satisfied condition "success or failure"
Dec 17 22:47:11.389: INFO: Trying to get logs from node jerma-node pod downward-api-a8ad1045-32f4-460f-8d94-906c8131a0dc container dapi-container: 
STEP: delete the pod
Dec 17 22:47:11.460: INFO: Waiting for pod downward-api-a8ad1045-32f4-460f-8d94-906c8131a0dc to disappear
Dec 17 22:47:11.480: INFO: Pod downward-api-a8ad1045-32f4-460f-8d94-906c8131a0dc no longer exists
[AfterEach] [sig-node] Downward API
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:47:11.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-759" for this suite.
Dec 17 22:47:17.624: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:47:17.813: INFO: namespace downward-api-759 deletion completed in 6.328069228s

• [SLOW TEST:14.702 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:32
  should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
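
The env-var flavor of the downward API uses fieldRef instead of a volume. Sketch (names illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: dapi-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: dapi-container
    image: busybox:1.29
    command: ["sh", "-c", "env | grep ^POD_"]
    env:
    - name: POD_NAME
      valueFrom: {fieldRef: {fieldPath: metadata.name}}
    - name: POD_NAMESPACE
      valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
    - name: POD_IP
      valueFrom: {fieldRef: {fieldPath: status.podIP}}
EOF
kubectl logs dapi-env-demo   # after completion: POD_NAME, POD_NAMESPACE and POD_IP
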
SS
------------------------------
[k8s.io] Container Lifecycle Hook when creating a pod with lifecycle hook 
  should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:47:17.814: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
Dec 17 22:47:34.208: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 22:47:34.247: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 22:47:36.248: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 22:47:36.262: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 22:47:38.249: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 22:47:38.262: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 22:47:40.248: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 22:47:40.263: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 22:47:42.248: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 22:47:42.259: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 22:47:44.248: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 22:47:44.256: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 22:47:46.248: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 22:47:46.259: INFO: Pod pod-with-prestop-exec-hook still exists
Dec 17 22:47:48.248: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
Dec 17 22:47:48.259: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:47:48.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8388" for this suite.
Dec 17 22:48:00.365: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:48:00.525: INFO: namespace container-lifecycle-hook-8388 deletion completed in 12.22635917s

• [SLOW TEST:42.711 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when creating a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
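
A preStop exec hook runs inside the container after deletion is requested and before SIGTERM is delivered; the long "still exists" tail above is the deletion waiting for the hook to finish within the grace period. A sketch; the recorder URL is a hypothetical stand-in for the test's hook-handler pod:

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-exec-hook
spec:
  terminationGracePeriodSeconds: 30   # upper bound on how long the hook may run
  containers:
  - name: main
    image: nginx:1.14-alpine
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "wget -qO- http://hook-recorder.default.svc:8080/prestop || true"]
EOF
kubectl delete pod pod-with-prestop-exec-hook   # the hook fires first, then SIGTERM
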
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:48:00.529: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:48:00.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-3858" for this suite.
Dec 17 22:48:06.714: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:48:06.799: INFO: namespace services-3858 deletion completed in 6.126658588s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95

• [SLOW TEST:6.271 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide secure master service  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:48:06.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating projection with secret that has name projected-secret-test-map-798e33c1-896a-46aa-a2c4-d6cb9003a822
STEP: Creating a pod to test consume secrets
Dec 17 22:48:06.929: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-e856bcb9-93d2-4197-b24f-3169d5417837" in namespace "projected-5787" to be "success or failure"
Dec 17 22:48:06.952: INFO: Pod "pod-projected-secrets-e856bcb9-93d2-4197-b24f-3169d5417837": Phase="Pending", Reason="", readiness=false. Elapsed: 22.488713ms
Dec 17 22:48:08.962: INFO: Pod "pod-projected-secrets-e856bcb9-93d2-4197-b24f-3169d5417837": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032306136s
Dec 17 22:48:10.972: INFO: Pod "pod-projected-secrets-e856bcb9-93d2-4197-b24f-3169d5417837": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042685105s
Dec 17 22:48:12.980: INFO: Pod "pod-projected-secrets-e856bcb9-93d2-4197-b24f-3169d5417837": Phase="Pending", Reason="", readiness=false. Elapsed: 6.050317107s
Dec 17 22:48:14.990: INFO: Pod "pod-projected-secrets-e856bcb9-93d2-4197-b24f-3169d5417837": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061058996s
STEP: Saw pod success
Dec 17 22:48:14.991: INFO: Pod "pod-projected-secrets-e856bcb9-93d2-4197-b24f-3169d5417837" satisfied condition "success or failure"
Dec 17 22:48:14.995: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-e856bcb9-93d2-4197-b24f-3169d5417837 container projected-secret-volume-test: 
STEP: delete the pod
Dec 17 22:48:15.048: INFO: Waiting for pod pod-projected-secrets-e856bcb9-93d2-4197-b24f-3169d5417837 to disappear
Dec 17 22:48:15.053: INFO: Pod pod-projected-secrets-e856bcb9-93d2-4197-b24f-3169d5417837 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:48:15.053: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5787" for this suite.
Dec 17 22:48:21.083: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:48:21.208: INFO: namespace projected-5787 deletion completed in 6.149261951s

• [SLOW TEST:14.409 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSS
------------------------------
[sig-storage] EmptyDir wrapper volumes 
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:48:21.210: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir-wrapper
STEP: Waiting for a default service account to be provisioned in namespace
[It] should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Cleaning up the secret
STEP: Cleaning up the configmap
STEP: Cleaning up the pod
[AfterEach] [sig-storage] EmptyDir wrapper volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:48:29.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-1888" for this suite.
Dec 17 22:48:35.558: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:48:35.697: INFO: namespace emptydir-wrapper-1888 deletion completed in 6.181559273s

• [SLOW TEST:14.488 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  should not conflict [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] 
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:48:35.698: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: fetching the /apis discovery document
STEP: finding the apiextensions.k8s.io API group in the /apis discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document
STEP: fetching the /apis/apiextensions.k8s.io discovery document
STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document
STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document
STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:48:35.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2563" for this suite.
Dec 17 22:48:41.866: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:48:42.060: INFO: namespace custom-resource-definition-2563 deletion completed in 6.227268415s

• [SLOW TEST:6.362 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should include custom resource definition resources in discovery documents [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
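
The discovery walk this test performs can be reproduced with raw GETs against the same documents:

kubectl get --raw /apis | grep apiextensions.k8s.io              # group in the root discovery document
kubectl get --raw /apis/apiextensions.k8s.io                     # group document lists served versions
kubectl get --raw /apis/apiextensions.k8s.io/v1 | grep customresourcedefinitions
kubectl api-resources --api-group=apiextensions.k8s.io           # the same data, tabulated
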
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:48:42.062: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: set up a multi version CRD
Dec 17 22:48:42.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: rename a version
STEP: check the new version name is served
STEP: check the old version name is removed
STEP: check the other version is not changed
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:49:04.644: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-7561" for this suite.
Dec 17 22:49:10.687: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:49:10.841: INFO: namespace crd-publish-openapi-7561 deletion completed in 6.185803937s

• [SLOW TEST:28.779 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  updates the published spec when one version gets renamed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
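
When a CRD version is renamed, the aggregated OpenAPI spec is republished and kubectl explain follows it. A sketch against a hypothetical CRD (group, versions and kind are invented for illustration):

kubectl explain mycrds --api-version=example.com/v2      # served after the rename
kubectl explain mycrds --api-version=example.com/v1      # fails once the old version name is removed
kubectl get --raw /openapi/v2 | grep -c com.example.v2.MyCrd   # the definition appears in the published spec
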
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:49:10.842: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 17 22:49:11.623: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 17 22:49:13.662: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219751, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219751, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219751, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219751, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:49:15.670: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219751, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219751, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219751, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219751, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:49:17.671: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219751, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219751, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219751, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712219751, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 22:49:20.832: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 22:49:20.839: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7939-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:49:22.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9158" for this suite.
Dec 17 22:49:28.133: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:49:28.268: INFO: namespace webhook-9158 deletion completed in 6.187223367s
STEP: Destroying namespace "webhook-9158-markers" for this suite.
Dec 17 22:49:34.397: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:49:34.523: INFO: namespace webhook-9158-markers deletion completed in 6.254616795s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:23.704 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
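On the other end of "Registering the mutating webhook ... via the AdmissionRegistration API" sits the sample-webhook-deployment whose rollout is polled above. A rough sketch of such a handler, assuming k8s.io/api/admission/v1; the URL path, TLS file paths, and the specific JSONPatch are illustrative, not the e2e image's actual behavior.

    package main

    import (
    	"encoding/json"
    	"io/ioutil"
    	"net/http"

    	admissionv1 "k8s.io/api/admission/v1"
    )

    // mutate decodes an AdmissionReview and answers with a JSONPatch that
    // injects a field into the incoming custom resource.
    func mutate(w http.ResponseWriter, r *http.Request) {
    	body, err := ioutil.ReadAll(r.Body)
    	if err != nil {
    		http.Error(w, err.Error(), http.StatusBadRequest)
    		return
    	}
    	var review admissionv1.AdmissionReview
    	if err := json.Unmarshal(body, &review); err != nil {
    		http.Error(w, err.Error(), http.StatusBadRequest)
    		return
    	}

    	patch := `[{"op":"add","path":"/data/mutation-stage-1","value":"yes"}]`
    	pt := admissionv1.PatchTypeJSONPatch
    	review.Response = &admissionv1.AdmissionResponse{
    		UID:       review.Request.UID,
    		Allowed:   true,
    		Patch:     []byte(patch),
    		PatchType: &pt,
    	}
    	review.Request = nil

    	out, _ := json.Marshal(review)
    	w.Header().Set("Content-Type", "application/json")
    	w.Write(out)
    }

    func main() {
    	http.HandleFunc("/mutating-custom-resource", mutate)
    	// TLS cert/key paths are placeholders; the e2e fixture generates its
    	// own cert in the "Setting up server cert" step.
    	if err := http.ListenAndServeTLS(":8443", "tls.crt", "tls.key", nil); err != nil {
    		panic(err)
    	}
    }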
SSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:49:34.548: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0666 on tmpfs
Dec 17 22:49:34.655: INFO: Waiting up to 5m0s for pod "pod-2cf46f06-90c0-4c5b-b39d-76fc3036a61d" in namespace "emptydir-8170" to be "success or failure"
Dec 17 22:49:34.667: INFO: Pod "pod-2cf46f06-90c0-4c5b-b39d-76fc3036a61d": Phase="Pending", Reason="", readiness=false. Elapsed: 12.129992ms
Dec 17 22:49:36.679: INFO: Pod "pod-2cf46f06-90c0-4c5b-b39d-76fc3036a61d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023812398s
Dec 17 22:49:38.688: INFO: Pod "pod-2cf46f06-90c0-4c5b-b39d-76fc3036a61d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032158383s
Dec 17 22:49:40.695: INFO: Pod "pod-2cf46f06-90c0-4c5b-b39d-76fc3036a61d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039386072s
Dec 17 22:49:42.703: INFO: Pod "pod-2cf46f06-90c0-4c5b-b39d-76fc3036a61d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.04758246s
STEP: Saw pod success
Dec 17 22:49:42.703: INFO: Pod "pod-2cf46f06-90c0-4c5b-b39d-76fc3036a61d" satisfied condition "success or failure"
Dec 17 22:49:42.707: INFO: Trying to get logs from node jerma-node pod pod-2cf46f06-90c0-4c5b-b39d-76fc3036a61d container test-container: 
STEP: delete the pod
Dec 17 22:49:42.824: INFO: Waiting for pod pod-2cf46f06-90c0-4c5b-b39d-76fc3036a61d to disappear
Dec 17 22:49:42.835: INFO: Pod pod-2cf46f06-90c0-4c5b-b39d-76fc3036a61d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:49:42.836: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-8170" for this suite.
Dec 17 22:49:48.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:49:49.000: INFO: namespace emptydir-8170 deletion completed in 6.14958956s

• [SLOW TEST:14.453 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
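The "Creating a pod to test emptydir 0666 on tmpfs" step builds roughly the pod below (sketched with k8s.io/api/core/v1 types). The image and args are placeholders standing in for the suite's mounttest binary, which creates a file with the requested mode and prints its permissions for the assertion; the same shape covers the later (root,0644,default) and (non-root,0644,tmpfs) variants.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-emptydir-"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Volumes: []corev1.Volume{{
    				Name: "test-volume",
    				VolumeSource: corev1.VolumeSource{
    					// Medium "Memory" backs the emptyDir with tmpfs;
    					// leaving Medium empty gives the node's default
    					// medium, as in the (root,0644,default) test below.
    					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:  "test-container",
    				Image: "mounttest:placeholder", // stand-in for the e2e test image
    				// Placeholder args: create /test-volume/test-file with
    				// mode 0666, then report the resulting permissions.
    				Args: []string{
    					"--new_file_0666=/test-volume/test-file",
    					"--file_perm=/test-volume/test-file",
    				},
    				VolumeMounts: []corev1.VolumeMount{{
    					Name: "test-volume", MountPath: "/test-volume",
    				}},
    			}},
    		},
    	}

    	b, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(b))
    }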
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:49:49.001: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0644 on node default medium
Dec 17 22:49:49.053: INFO: Waiting up to 5m0s for pod "pod-301160b1-abd5-4ac6-bd44-c3cb8e2bdf3e" in namespace "emptydir-6231" to be "success or failure"
Dec 17 22:49:49.122: INFO: Pod "pod-301160b1-abd5-4ac6-bd44-c3cb8e2bdf3e": Phase="Pending", Reason="", readiness=false. Elapsed: 68.626351ms
Dec 17 22:49:51.133: INFO: Pod "pod-301160b1-abd5-4ac6-bd44-c3cb8e2bdf3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.080031087s
Dec 17 22:49:53.143: INFO: Pod "pod-301160b1-abd5-4ac6-bd44-c3cb8e2bdf3e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.089729921s
Dec 17 22:49:55.194: INFO: Pod "pod-301160b1-abd5-4ac6-bd44-c3cb8e2bdf3e": Phase="Pending", Reason="", readiness=false. Elapsed: 6.141331801s
Dec 17 22:49:57.206: INFO: Pod "pod-301160b1-abd5-4ac6-bd44-c3cb8e2bdf3e": Phase="Pending", Reason="", readiness=false. Elapsed: 8.152752143s
Dec 17 22:49:59.214: INFO: Pod "pod-301160b1-abd5-4ac6-bd44-c3cb8e2bdf3e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.16085455s
STEP: Saw pod success
Dec 17 22:49:59.214: INFO: Pod "pod-301160b1-abd5-4ac6-bd44-c3cb8e2bdf3e" satisfied condition "success or failure"
Dec 17 22:49:59.218: INFO: Trying to get logs from node jerma-node pod pod-301160b1-abd5-4ac6-bd44-c3cb8e2bdf3e container test-container: 
STEP: delete the pod
Dec 17 22:49:59.722: INFO: Waiting for pod pod-301160b1-abd5-4ac6-bd44-c3cb8e2bdf3e to disappear
Dec 17 22:49:59.767: INFO: Pod pod-301160b1-abd5-4ac6-bd44-c3cb8e2bdf3e no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:49:59.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6231" for this suite.
Dec 17 22:50:05.798: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:50:05.987: INFO: namespace emptydir-6231 deletion completed in 6.214617949s

• [SLOW TEST:16.986 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
[sig-network] DNS 
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:50:05.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6520.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-6520.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6520.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-6520.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6520.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-6520.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6520.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-6520.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6520.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-6520.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6520.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-6520.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6520.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 209.99.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.99.209_udp@PTR;check="$$(dig +tcp +noall +answer +search 209.99.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.99.209_tcp@PTR;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-6520.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-6520.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-6520.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-6520.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-6520.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-6520.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-6520.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-6520.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-6520.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-6520.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-6520.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-6520.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-6520.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 209.99.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.99.209_udp@PTR;check="$$(dig +tcp +noall +answer +search 209.99.98.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.98.99.209_tcp@PTR;sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 17 22:50:16.312: INFO: Unable to read wheezy_udp@dns-test-service.dns-6520.svc.cluster.local from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.322: INFO: Unable to read wheezy_tcp@dns-test-service.dns-6520.svc.cluster.local from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.329: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-6520.svc.cluster.local from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.336: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-6520.svc.cluster.local from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.344: INFO: Unable to read wheezy_udp@_http._tcp.test-service-2.dns-6520.svc.cluster.local from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.351: INFO: Unable to read wheezy_tcp@_http._tcp.test-service-2.dns-6520.svc.cluster.local from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.356: INFO: Unable to read wheezy_udp@PodARecord from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.362: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.368: INFO: Unable to read 10.98.99.209_udp@PTR from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.373: INFO: Unable to read 10.98.99.209_tcp@PTR from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.376: INFO: Unable to read jessie_udp@dns-test-service.dns-6520.svc.cluster.local from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.380: INFO: Unable to read jessie_tcp@dns-test-service.dns-6520.svc.cluster.local from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.383: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-6520.svc.cluster.local from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.388: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-6520.svc.cluster.local from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.449: INFO: Unable to read jessie_udp@_http._tcp.test-service-2.dns-6520.svc.cluster.local from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.459: INFO: Unable to read jessie_tcp@_http._tcp.test-service-2.dns-6520.svc.cluster.local from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.467: INFO: Unable to read jessie_udp@PodARecord from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.471: INFO: Unable to read jessie_tcp@PodARecord from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.475: INFO: Unable to read 10.98.99.209_udp@PTR from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.486: INFO: Unable to read 10.98.99.209_tcp@PTR from pod dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb: the server could not find the requested resource (get pods dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb)
Dec 17 22:50:16.486: INFO: Lookups using dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb failed for: [wheezy_udp@dns-test-service.dns-6520.svc.cluster.local wheezy_tcp@dns-test-service.dns-6520.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-6520.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-6520.svc.cluster.local wheezy_udp@_http._tcp.test-service-2.dns-6520.svc.cluster.local wheezy_tcp@_http._tcp.test-service-2.dns-6520.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord 10.98.99.209_udp@PTR 10.98.99.209_tcp@PTR jessie_udp@dns-test-service.dns-6520.svc.cluster.local jessie_tcp@dns-test-service.dns-6520.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-6520.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-6520.svc.cluster.local jessie_udp@_http._tcp.test-service-2.dns-6520.svc.cluster.local jessie_tcp@_http._tcp.test-service-2.dns-6520.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord 10.98.99.209_udp@PTR 10.98.99.209_tcp@PTR]

Dec 17 22:50:21.600: INFO: DNS probes using dns-6520/dns-test-14e21f51-b16e-4f3f-9bcc-e0a8849e97cb succeeded

STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:50:22.055: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-6520" for this suite.
Dec 17 22:50:28.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:50:28.331: INFO: namespace dns-6520 deletion completed in 6.252036146s

• [SLOW TEST:22.343 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for services  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
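The wheezy/jessie probe loops above all dig names derived from one headless service. A sketch of that service and the record names it implies, using k8s.io/api/core/v1; the selector and port are placeholders.

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	ns := "dns-6520" // namespace from this run
    	svc := &corev1.Service{
    		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service", Namespace: ns},
    		Spec: corev1.ServiceSpec{
    			ClusterIP: corev1.ClusterIPNone, // headless: DNS answers with pod IPs
    			Selector:  map[string]string{"dns-test": "true"}, // placeholder selector
    			Ports: []corev1.ServicePort{{
    				Name: "http", Protocol: corev1.ProtocolTCP, Port: 80,
    			}},
    		},
    	}

    	// The names the probe loops dig for:
    	fmt.Printf("A:   %s.%s.svc.cluster.local\n", svc.Name, ns)
    	fmt.Printf("SRV: _http._tcp.%s.%s.svc.cluster.local\n", svc.Name, ns)
    	// The PTR probes reverse the ClusterIP of the non-headless test
    	// service, e.g. 10.98.99.209 -> 209.99.98.10.in-addr.arpa.
    }

The early "Unable to read ... the server could not find the requested resource" lines are the expected polling noise: the prober pod has not yet written its result files, and the checks succeed about five seconds later.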
S
------------------------------
[sig-node] ConfigMap 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:50:28.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap configmap-3980/configmap-test-91e104aa-cca0-458d-adbe-05262a48dcb4
STEP: Creating a pod to test consume configMaps
Dec 17 22:50:28.436: INFO: Waiting up to 5m0s for pod "pod-configmaps-fd50da08-624a-4d75-ac23-0bf9d4cb68e8" in namespace "configmap-3980" to be "success or failure"
Dec 17 22:50:28.441: INFO: Pod "pod-configmaps-fd50da08-624a-4d75-ac23-0bf9d4cb68e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.462291ms
Dec 17 22:50:30.452: INFO: Pod "pod-configmaps-fd50da08-624a-4d75-ac23-0bf9d4cb68e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015340565s
Dec 17 22:50:32.517: INFO: Pod "pod-configmaps-fd50da08-624a-4d75-ac23-0bf9d4cb68e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.079987231s
Dec 17 22:50:34.552: INFO: Pod "pod-configmaps-fd50da08-624a-4d75-ac23-0bf9d4cb68e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.115326813s
Dec 17 22:50:36.653: INFO: Pod "pod-configmaps-fd50da08-624a-4d75-ac23-0bf9d4cb68e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.216463108s
STEP: Saw pod success
Dec 17 22:50:36.653: INFO: Pod "pod-configmaps-fd50da08-624a-4d75-ac23-0bf9d4cb68e8" satisfied condition "success or failure"
Dec 17 22:50:36.666: INFO: Trying to get logs from node jerma-node pod pod-configmaps-fd50da08-624a-4d75-ac23-0bf9d4cb68e8 container env-test: 
STEP: delete the pod
Dec 17 22:50:36.723: INFO: Waiting for pod pod-configmaps-fd50da08-624a-4d75-ac23-0bf9d4cb68e8 to disappear
Dec 17 22:50:36.736: INFO: Pod pod-configmaps-fd50da08-624a-4d75-ac23-0bf9d4cb68e8 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:50:36.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-3980" for this suite.
Dec 17 22:50:42.838: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:50:43.050: INFO: namespace configmap-3980 deletion completed in 6.308923905s

• [SLOW TEST:14.719 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
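"Consumable via the environment" means the pod maps a ConfigMap key to an env var and the test greps the container's env output. A minimal sketch with k8s.io/api/core/v1 types; image, key, and variable names are placeholders.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	cm := &corev1.ConfigMap{
    		ObjectMeta: metav1.ObjectMeta{Name: "configmap-test"},
    		Data:       map[string]string{"data-1": "value-1"},
    	}

    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-configmaps-"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:    "env-test",
    				Image:   "busybox:1.29", // placeholder image
    				Command: []string{"sh", "-c", "env"},
    				Env: []corev1.EnvVar{{
    					Name: "CONFIG_DATA_1",
    					ValueFrom: &corev1.EnvVarSource{
    						ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
    							LocalObjectReference: corev1.LocalObjectReference{Name: cm.Name},
    							Key:                  "data-1",
    						},
    					},
    				}},
    			}},
    		},
    	}

    	b, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(b))
    }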
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:50:43.052: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 17 22:50:43.126: INFO: Waiting up to 5m0s for pod "pod-05748f0a-d95e-4965-a18d-d986ecb320e8" in namespace "emptydir-7046" to be "success or failure"
Dec 17 22:50:43.138: INFO: Pod "pod-05748f0a-d95e-4965-a18d-d986ecb320e8": Phase="Pending", Reason="", readiness=false. Elapsed: 11.256841ms
Dec 17 22:50:45.163: INFO: Pod "pod-05748f0a-d95e-4965-a18d-d986ecb320e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036763929s
Dec 17 22:50:47.176: INFO: Pod "pod-05748f0a-d95e-4965-a18d-d986ecb320e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049955793s
Dec 17 22:50:49.185: INFO: Pod "pod-05748f0a-d95e-4965-a18d-d986ecb320e8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058568628s
Dec 17 22:50:51.192: INFO: Pod "pod-05748f0a-d95e-4965-a18d-d986ecb320e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.066170825s
STEP: Saw pod success
Dec 17 22:50:51.193: INFO: Pod "pod-05748f0a-d95e-4965-a18d-d986ecb320e8" satisfied condition "success or failure"
Dec 17 22:50:51.195: INFO: Trying to get logs from node jerma-node pod pod-05748f0a-d95e-4965-a18d-d986ecb320e8 container test-container: 
STEP: delete the pod
Dec 17 22:50:51.331: INFO: Waiting for pod pod-05748f0a-d95e-4965-a18d-d986ecb320e8 to disappear
Dec 17 22:50:51.339: INFO: Pod pod-05748f0a-d95e-4965-a18d-d986ecb320e8 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:50:51.339: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-7046" for this suite.
Dec 17 22:50:57.375: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:50:57.508: INFO: namespace emptydir-7046 deletion completed in 6.163909933s

• [SLOW TEST:14.456 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:50:57.509: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
Dec 17 22:50:57.643: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:51:13.200: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-2810" for this suite.
Dec 17 22:51:25.275: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:51:25.436: INFO: namespace init-container-2810 deletion completed in 12.200432513s

• [SLOW TEST:27.927 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should invoke init containers on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
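The "PodSpec: initContainers in spec.initContainers" line above refers to a pod shaped roughly like this sketch (core/v1 types, placeholder images): two init containers that must run to completion, in order, before the regular container starts.

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-init-"},
    		Spec: corev1.PodSpec{
    			// RestartAlways is the default; spelled out to match the test.
    			RestartPolicy: corev1.RestartPolicyAlways,
    			// Init containers run one at a time, each to completion,
    			// before any container in Containers starts.
    			InitContainers: []corev1.Container{
    				{Name: "init1", Image: "busybox:1.29", Command: []string{"/bin/true"}},
    				{Name: "init2", Image: "busybox:1.29", Command: []string{"/bin/true"}},
    			},
    			Containers: []corev1.Container{{
    				Name:  "run1",
    				Image: "k8s.gcr.io/pause:3.1", // placeholder long-running container
    			}},
    		},
    	}

    	b, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(b))
    }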
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:51:25.437: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation
Dec 17 22:51:25.574: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 22:51:29.506: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:51:43.385: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-8932" for this suite.
Dec 17 22:51:49.418: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:51:49.551: INFO: namespace crd-publish-openapi-8932 deletion completed in 6.16120229s

• [SLOW TEST:24.114 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of different groups [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:51:49.552: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test override command
Dec 17 22:51:49.642: INFO: Waiting up to 5m0s for pod "client-containers-003c4fb1-63dd-41a7-8fc5-6e778796b07f" in namespace "containers-8853" to be "success or failure"
Dec 17 22:51:49.689: INFO: Pod "client-containers-003c4fb1-63dd-41a7-8fc5-6e778796b07f": Phase="Pending", Reason="", readiness=false. Elapsed: 46.350602ms
Dec 17 22:51:51.697: INFO: Pod "client-containers-003c4fb1-63dd-41a7-8fc5-6e778796b07f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.054237862s
Dec 17 22:51:53.704: INFO: Pod "client-containers-003c4fb1-63dd-41a7-8fc5-6e778796b07f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061480511s
Dec 17 22:51:55.713: INFO: Pod "client-containers-003c4fb1-63dd-41a7-8fc5-6e778796b07f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.070318689s
Dec 17 22:51:57.766: INFO: Pod "client-containers-003c4fb1-63dd-41a7-8fc5-6e778796b07f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.123382939s
Dec 17 22:51:59.775: INFO: Pod "client-containers-003c4fb1-63dd-41a7-8fc5-6e778796b07f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.132188172s
STEP: Saw pod success
Dec 17 22:51:59.775: INFO: Pod "client-containers-003c4fb1-63dd-41a7-8fc5-6e778796b07f" satisfied condition "success or failure"
Dec 17 22:51:59.782: INFO: Trying to get logs from node jerma-node pod client-containers-003c4fb1-63dd-41a7-8fc5-6e778796b07f container test-container: 
STEP: delete the pod
Dec 17 22:51:59.920: INFO: Waiting for pod client-containers-003c4fb1-63dd-41a7-8fc5-6e778796b07f to disappear
Dec 17 22:51:59.924: INFO: Pod client-containers-003c4fb1-63dd-41a7-8fc5-6e778796b07f no longer exists
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:51:59.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-8853" for this suite.
Dec 17 22:52:06.002: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:52:06.152: INFO: namespace containers-8853 deletion completed in 6.220848096s

• [SLOW TEST:16.600 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
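"Creating a pod to test override command" comes down to setting Command on the container, which replaces the image's ENTRYPOINT (Args would replace its CMD). A sketch with placeholder image and command:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{GenerateName: "client-containers-"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:  "test-container",
    				Image: "busybox:1.29", // placeholder image
    				// Command overrides the image ENTRYPOINT, which is what
    				// this test asserts via the container's output.
    				Command: []string{"/bin/echo", "override", "command"},
    			}},
    		},
    	}

    	b, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(b))
    }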
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Proxy version v1 
  should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:52:06.154: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy logs on node using proxy subresource  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 22:52:06.271: INFO: (0) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 9.235168ms)
Dec 17 22:52:06.276: INFO: (1) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.411114ms)
Dec 17 22:52:06.281: INFO: (2) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.336037ms)
Dec 17 22:52:06.285: INFO: (3) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.460008ms)
Dec 17 22:52:06.288: INFO: (4) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.332835ms)
Dec 17 22:52:06.291: INFO: (5) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.258802ms)
Dec 17 22:52:06.296: INFO: (6) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.784047ms)
Dec 17 22:52:06.300: INFO: (7) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 4.303653ms)
Dec 17 22:52:06.304: INFO: (8) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.476756ms)
Dec 17 22:52:06.307: INFO: (9) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.080738ms)
Dec 17 22:52:06.311: INFO: (10) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.749718ms)
Dec 17 22:52:06.315: INFO: (11) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.715191ms)
Dec 17 22:52:06.318: INFO: (12) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.731014ms)
Dec 17 22:52:06.322: INFO: (13) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.415136ms)
Dec 17 22:52:06.325: INFO: (14) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.888104ms)
Dec 17 22:52:06.330: INFO: (15) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 5.128622ms)
Dec 17 22:52:06.334: INFO: (16) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.919635ms)
Dec 17 22:52:06.337: INFO: (17) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 2.77783ms)
Dec 17 22:52:06.340: INFO: (18) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.048137ms)
Dec 17 22:52:06.343: INFO: (19) /api/v1/nodes/jerma-node/proxy/logs/: 
alternatives.log
alternatives.l... (200; 3.047582ms)
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:52:06.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-6337" for this suite.
Dec 17 22:52:12.391: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:52:12.523: INFO: namespace proxy-6337 deletion completed in 6.176916549s

• [SLOW TEST:6.369 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy logs on node using proxy subresource  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
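Each numbered probe above is a GET against the node's proxy subresource, /api/v1/nodes/jerma-node/proxy/logs/, which the apiserver forwards to the kubelet's /logs/ endpoint. A client-go sketch of one such request, assuming client-go v0.18 or newer (where DoRaw takes a context; the v1.16-era client this suite pins uses a different signature):

    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// GET /api/v1/nodes/jerma-node/proxy/logs/ — node name from this run.
    	body, err := cs.CoreV1().RESTClient().Get().
    		Resource("nodes").
    		Name("jerma-node").
    		SubResource("proxy").
    		Suffix("logs/").
    		DoRaw(context.TODO())
    	if err != nil {
    		panic(err)
    	}
    	// The body is the kubelet's log-directory listing, which the test
    	// truncates to "alternatives.l..." in the lines above.
    	fmt.Println(string(body))
    }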
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:52:12.526: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a job
STEP: Ensuring job reaches completions
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:52:44.656: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-9668" for this suite.
Dec 17 22:52:50.689: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:52:50.813: INFO: namespace job-9668 deletion completed in 6.151759873s

• [SLOW TEST:38.288 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
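"Locally restarted" is carried by the pod template's restart policy: with OnFailure, the kubelet restarts the failed container inside the same pod instead of the Job controller creating a replacement pod. A sketch under that assumption; the image and the fail-once command are placeholders (an emptyDir marker file survives in-place container restarts, so the retry succeeds).

    package main

    import (
    	"encoding/json"
    	"fmt"

    	batchv1 "k8s.io/api/batch/v1"
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	parallelism, completions := int32(2), int32(2)
    	job := &batchv1.Job{
    		ObjectMeta: metav1.ObjectMeta{Name: "fail-once-local"},
    		Spec: batchv1.JobSpec{
    			Parallelism: &parallelism,
    			Completions: &completions,
    			Template: corev1.PodTemplateSpec{
    				Spec: corev1.PodSpec{
    					// OnFailure => restart the container in the same pod.
    					RestartPolicy: corev1.RestartPolicyOnFailure,
    					Volumes: []corev1.Volume{{
    						Name:         "data",
    						VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
    					}},
    					Containers: []corev1.Container{{
    						Name:  "c",
    						Image: "busybox:1.29", // placeholder image
    						// Placeholder failure injection: first attempt
    						// fails, later attempts see the marker and succeed.
    						Command: []string{"sh", "-c",
    							"if [ -e /data/ok ]; then exit 0; fi; touch /data/ok; exit 1"},
    						VolumeMounts: []corev1.VolumeMount{{Name: "data", MountPath: "/data"}},
    					}},
    				},
    			},
    		},
    	}

    	b, _ := json.MarshalIndent(job, "", "  ")
    	fmt.Println(string(b))
    }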
SSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Garbage collector 
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:52:50.815: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename gc
STEP: Waiting for a default service account to be provisioned in namespace
[It] should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: create the rc
STEP: delete the rc
STEP: wait for all pods to be garbage collected
STEP: Gathering metrics
W1217 22:53:01.017756       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 17 22:53:01.017: INFO: For apiserver_request_total:
For apiserver_request_latencies_summary:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:

[AfterEach] [sig-api-machinery] Garbage collector
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:53:01.018: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "gc-4307" for this suite.
Dec 17 22:53:07.057: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:53:07.186: INFO: namespace gc-4307 deletion completed in 6.165565446s

• [SLOW TEST:16.372 seconds]
[sig-api-machinery] Garbage collector
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should delete pods created by rc when not orphaning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
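"Not orphaning" in the test name maps to the delete propagation policy: Background (or Foreground) lets the garbage collector delete the RC's pods via their ownerReferences, whereas Orphan would leave them behind. A client-go sketch of the "delete the rc" step, assuming client-go v0.18+ signatures; the RC name is a placeholder for whatever "create the rc" made.

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Background propagation = "not orphaning": the RC disappears at
    	// once and the GC then deletes its pods, which the test waits on.
    	policy := metav1.DeletePropagationBackground
    	if err := cs.CoreV1().ReplicationControllers("gc-4307").Delete(
    		context.TODO(),
    		"simpletest.rc", // placeholder RC name
    		metav1.DeleteOptions{PropagationPolicy: &policy},
    	); err != nil {
    		panic(err)
    	}
    }

The W1217 metrics_grabber warning afterward is environmental, not a failure: this cluster has no node labeled as the master, so scheduler and controller-manager metrics simply cannot be scraped.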
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-auth] ServiceAccounts 
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:53:07.188: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename svcaccounts
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: getting the auto-created API token
Dec 17 22:53:07.875: INFO: created pod pod-service-account-defaultsa
Dec 17 22:53:07.876: INFO: pod pod-service-account-defaultsa service account token volume mount: true
Dec 17 22:53:07.912: INFO: created pod pod-service-account-mountsa
Dec 17 22:53:07.912: INFO: pod pod-service-account-mountsa service account token volume mount: true
Dec 17 22:53:07.929: INFO: created pod pod-service-account-nomountsa
Dec 17 22:53:07.929: INFO: pod pod-service-account-nomountsa service account token volume mount: false
Dec 17 22:53:08.025: INFO: created pod pod-service-account-defaultsa-mountspec
Dec 17 22:53:08.026: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true
Dec 17 22:53:08.053: INFO: created pod pod-service-account-mountsa-mountspec
Dec 17 22:53:08.053: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true
Dec 17 22:53:08.196: INFO: created pod pod-service-account-nomountsa-mountspec
Dec 17 22:53:08.196: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true
Dec 17 22:53:09.128: INFO: created pod pod-service-account-defaultsa-nomountspec
Dec 17 22:53:09.129: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false
Dec 17 22:53:09.669: INFO: created pod pod-service-account-mountsa-nomountspec
Dec 17 22:53:09.670: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false
Dec 17 22:53:09.721: INFO: created pod pod-service-account-nomountsa-nomountspec
Dec 17 22:53:09.722: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false
[AfterEach] [sig-auth] ServiceAccounts
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:53:09.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "svcaccounts-6470" for this suite.
Dec 17 22:53:52.788: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:53:52.923: INFO: namespace svcaccounts-6470 deletion completed in 42.296122519s

• [SLOW TEST:45.736 seconds]
[sig-auth] ServiceAccounts
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23
  should allow opting out of API token automount  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
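The nine created/mount-state pairs above form a matrix over the service-account-level and pod-level automount settings; where both are set, the pod spec wins. A sketch of the fully opted-out corner, with placeholder names and image:

    package main

    import (
    	"encoding/json"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	optOut := false

    	sa := &corev1.ServiceAccount{
    		ObjectMeta:                   metav1.ObjectMeta{Name: "nomount"},
    		AutomountServiceAccountToken: &optOut, // SA-level default: no token volume
    	}

    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-service-account-nomountsa-nomountspec"},
    		Spec: corev1.PodSpec{
    			ServiceAccountName: sa.Name,
    			// The pod-level field, when set, overrides the SA's setting;
    			// leaving it nil falls back to the SA (then to the default).
    			AutomountServiceAccountToken: &optOut,
    			Containers: []corev1.Container{{
    				Name:  "token-test",
    				Image: "k8s.gcr.io/pause:3.1", // placeholder
    			}},
    		},
    	}

    	b, _ := json.MarshalIndent(pod, "", "  ")
    	fmt.Println(string(b))
    }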
SSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:53:52.924: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 17 22:53:53.533: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 17 22:53:55.560: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220033, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220033, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220033, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220033, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:53:57.585: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220033, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220033, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220033, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220033, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:53:59.578: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220033, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220033, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220033, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220033, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:54:01.622: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220033, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220033, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220033, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220033, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 22:54:04.645: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 22:54:04.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6604-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:54:05.879: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6307" for this suite.
Dec 17 22:54:12.019: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:54:12.152: INFO: namespace webhook-6307 deletion completed in 6.219970642s
STEP: Destroying namespace "webhook-6307-markers" for this suite.
Dec 17 22:54:18.173: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:54:18.309: INFO: namespace webhook-6307-markers deletion completed in 6.156920033s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:25.409 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:54:18.338: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2494.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-2494.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2494.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-2494.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-2494.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-2494.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done

STEP: creating a pod to probe /etc/hosts
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 17 22:54:30.537: INFO: Unable to read wheezy_udp@PodARecord from pod dns-2494/dns-test-f224f487-f29d-4bc0-b449-b27435b5da0f: the server could not find the requested resource (get pods dns-test-f224f487-f29d-4bc0-b449-b27435b5da0f)
Dec 17 22:54:30.544: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-2494/dns-test-f224f487-f29d-4bc0-b449-b27435b5da0f: the server could not find the requested resource (get pods dns-test-f224f487-f29d-4bc0-b449-b27435b5da0f)
Dec 17 22:54:30.550: INFO: Unable to read jessie_hosts@dns-querier-1.dns-test-service.dns-2494.svc.cluster.local from pod dns-2494/dns-test-f224f487-f29d-4bc0-b449-b27435b5da0f: the server could not find the requested resource (get pods dns-test-f224f487-f29d-4bc0-b449-b27435b5da0f)
Dec 17 22:54:30.555: INFO: Unable to read jessie_hosts@dns-querier-1 from pod dns-2494/dns-test-f224f487-f29d-4bc0-b449-b27435b5da0f: the server could not find the requested resource (get pods dns-test-f224f487-f29d-4bc0-b449-b27435b5da0f)
Dec 17 22:54:30.558: INFO: Unable to read jessie_udp@PodARecord from pod dns-2494/dns-test-f224f487-f29d-4bc0-b449-b27435b5da0f: the server could not find the requested resource (get pods dns-test-f224f487-f29d-4bc0-b449-b27435b5da0f)
Dec 17 22:54:30.563: INFO: Unable to read jessie_tcp@PodARecord from pod dns-2494/dns-test-f224f487-f29d-4bc0-b449-b27435b5da0f: the server could not find the requested resource (get pods dns-test-f224f487-f29d-4bc0-b449-b27435b5da0f)
Dec 17 22:54:30.563: INFO: Lookups using dns-2494/dns-test-f224f487-f29d-4bc0-b449-b27435b5da0f failed for: [wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_hosts@dns-querier-1.dns-test-service.dns-2494.svc.cluster.local jessie_hosts@dns-querier-1 jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 17 22:54:35.615: INFO: DNS probes using dns-2494/dns-test-f224f487-f29d-4bc0-b449-b27435b5da0f succeeded

STEP: deleting the pod
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:54:35.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-2494" for this suite.
Dec 17 22:54:41.763: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:54:41.910: INFO: namespace dns-2494 deletion completed in 6.217550923s

• [SLOW TEST:23.572 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
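
The wheezy/jessie probe commands above derive a pod's DNS A record by dashing its IP and appending namespace.pod.cluster.local. A tiny Go equivalent of that awk pipeline, with an illustrative pod IP rather than one from this run:

package main

import (
	"fmt"
	"strings"
)

// podARecord mirrors the awk pipeline in the probe commands above:
// a pod with IP a.b.c.d in namespace ns resolves as a-b-c-d.ns.pod.cluster.local.
func podARecord(podIP, namespace string) string {
	return strings.ReplaceAll(podIP, ".", "-") + "." + namespace + ".pod.cluster.local"
}

func main() {
	// 10.44.0.1 is an illustrative pod IP, not one taken from this run.
	fmt.Println(podARecord("10.44.0.1", "dns-2494"))
	// Output: 10-44-0-1.dns-2494.pod.cluster.local
}
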
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:54:41.914: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 22:54:42.084: INFO: Waiting up to 5m0s for pod "downwardapi-volume-75572df5-71f3-4f33-b5e3-b802d1b78723" in namespace "projected-5486" to be "success or failure"
Dec 17 22:54:42.096: INFO: Pod "downwardapi-volume-75572df5-71f3-4f33-b5e3-b802d1b78723": Phase="Pending", Reason="", readiness=false. Elapsed: 11.674789ms
Dec 17 22:54:44.110: INFO: Pod "downwardapi-volume-75572df5-71f3-4f33-b5e3-b802d1b78723": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02608677s
Dec 17 22:54:46.160: INFO: Pod "downwardapi-volume-75572df5-71f3-4f33-b5e3-b802d1b78723": Phase="Pending", Reason="", readiness=false. Elapsed: 4.075991658s
Dec 17 22:54:48.168: INFO: Pod "downwardapi-volume-75572df5-71f3-4f33-b5e3-b802d1b78723": Phase="Pending", Reason="", readiness=false. Elapsed: 6.083730815s
Dec 17 22:54:50.187: INFO: Pod "downwardapi-volume-75572df5-71f3-4f33-b5e3-b802d1b78723": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102533066s
STEP: Saw pod success
Dec 17 22:54:50.187: INFO: Pod "downwardapi-volume-75572df5-71f3-4f33-b5e3-b802d1b78723" satisfied condition "success or failure"
Dec 17 22:54:50.195: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-75572df5-71f3-4f33-b5e3-b802d1b78723 container client-container: <nil>
STEP: delete the pod
Dec 17 22:54:50.251: INFO: Waiting for pod downwardapi-volume-75572df5-71f3-4f33-b5e3-b802d1b78723 to disappear
Dec 17 22:54:50.283: INFO: Pod downwardapi-volume-75572df5-71f3-4f33-b5e3-b802d1b78723 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:54:50.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5486" for this suite.
Dec 17 22:54:56.388: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:54:56.512: INFO: namespace projected-5486 deletion completed in 6.223463784s

• [SLOW TEST:14.598 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
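
The pod this spec creates exposes its own CPU request to the container through a projected downward API volume. A minimal sketch in Go (the busybox image and 250m request are assumptions; the suite uses its own test image):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "client-container",
				Image:   "busybox", // illustrative image
				Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_request"},
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("250m")},
				},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							DownwardAPI: &corev1.DownwardAPIProjection{
								Items: []corev1.DownwardAPIVolumeFile{{
									Path: "cpu_request",
									ResourceFieldRef: &corev1.ResourceFieldSelector{
										ContainerName: "client-container",
										Resource:      "requests.cpu",
										// With a 1m divisor the file contains "250" (millicores).
										Divisor: resource.MustParse("1m"),
									},
								}},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
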
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:54:56.514: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0644 on tmpfs
Dec 17 22:54:56.638: INFO: Waiting up to 5m0s for pod "pod-74b83def-7da7-4ebd-a16d-50c6ef255a1c" in namespace "emptydir-6292" to be "success or failure"
Dec 17 22:54:56.665: INFO: Pod "pod-74b83def-7da7-4ebd-a16d-50c6ef255a1c": Phase="Pending", Reason="", readiness=false. Elapsed: 26.741015ms
Dec 17 22:54:58.675: INFO: Pod "pod-74b83def-7da7-4ebd-a16d-50c6ef255a1c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036787394s
Dec 17 22:55:00.688: INFO: Pod "pod-74b83def-7da7-4ebd-a16d-50c6ef255a1c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.049178305s
Dec 17 22:55:02.702: INFO: Pod "pod-74b83def-7da7-4ebd-a16d-50c6ef255a1c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.063079826s
Dec 17 22:55:04.741: INFO: Pod "pod-74b83def-7da7-4ebd-a16d-50c6ef255a1c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.102738599s
STEP: Saw pod success
Dec 17 22:55:04.742: INFO: Pod "pod-74b83def-7da7-4ebd-a16d-50c6ef255a1c" satisfied condition "success or failure"
Dec 17 22:55:04.754: INFO: Trying to get logs from node jerma-node pod pod-74b83def-7da7-4ebd-a16d-50c6ef255a1c container test-container: <nil>
STEP: delete the pod
Dec 17 22:55:04.855: INFO: Waiting for pod pod-74b83def-7da7-4ebd-a16d-50c6ef255a1c to disappear
Dec 17 22:55:04.874: INFO: Pod pod-74b83def-7da7-4ebd-a16d-50c6ef255a1c no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:55:04.875: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6292" for this suite.
Dec 17 22:55:11.104: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:55:11.208: INFO: namespace emptydir-6292 deletion completed in 6.291935678s

• [SLOW TEST:14.694 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
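
The pod under test here mounts a tmpfs-backed emptyDir and verifies a 0644 file on it. A minimal sketch of such a pod, assuming a generic busybox image in place of the suite's mounttest image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "emptydir-tmpfs-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative
				// Create a file with 0644 perms on the tmpfs mount, then show mode and content.
				Command: []string{"sh", "-c",
					"echo content > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && cat /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// Medium "Memory" makes the emptyDir tmpfs-backed.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
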
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:55:11.212: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 17 22:55:12.131: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 17 22:55:14.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220112, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220112, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220112, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220112, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:55:16.164: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220112, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220112, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220112, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220112, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:55:18.155: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220112, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220112, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220112, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220112, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 22:55:21.178: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a mutating webhook configuration
STEP: Updating a mutating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that should not be mutated
STEP: Patching a mutating webhook configuration's rules to include the create operation
STEP: Creating a configMap that should be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:55:21.381: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-6086" for this suite.
Dec 17 22:55:27.437: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:55:27.578: INFO: namespace webhook-6086 deletion completed in 6.178069371s
STEP: Destroying namespace "webhook-6086-markers" for this suite.
Dec 17 22:55:33.693: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:55:33.868: INFO: namespace webhook-6086-markers deletion completed in 6.290012899s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:22.684 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a mutating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
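
The "Patching a mutating webhook configuration's rules to include the create operation" step above is a JSON patch against the live configuration. A sketch of that call, assuming a recent client-go (v0.18+, where calls take a context), KUBECONFIG pointing at a cluster, and a hypothetical configuration name:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Flip the first rule of the first webhook back to matching CREATE,
	// so newly created configMaps are mutated again.
	patch := []byte(`[{"op":"replace","path":"/webhooks/0/rules/0/operations","value":["CREATE"]}]`)
	_, err = cs.AdmissionregistrationV1().MutatingWebhookConfigurations().Patch(
		context.Background(), "e2e-test-mutating-webhook", // hypothetical name
		types.JSONPatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("patched")
}
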
SSSSSSSSSSS
------------------------------
[sig-storage] Subpath Atomic writer volumes 
  should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:55:33.898: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod pod-subpath-test-configmap-7r25
STEP: Creating a pod to test atomic-volume-subpath
Dec 17 22:55:34.180: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-7r25" in namespace "subpath-7893" to be "success or failure"
Dec 17 22:55:34.205: INFO: Pod "pod-subpath-test-configmap-7r25": Phase="Pending", Reason="", readiness=false. Elapsed: 24.053566ms
Dec 17 22:55:36.220: INFO: Pod "pod-subpath-test-configmap-7r25": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038944314s
Dec 17 22:55:38.232: INFO: Pod "pod-subpath-test-configmap-7r25": Phase="Pending", Reason="", readiness=false. Elapsed: 4.050673872s
Dec 17 22:55:40.251: INFO: Pod "pod-subpath-test-configmap-7r25": Phase="Pending", Reason="", readiness=false. Elapsed: 6.07024845s
Dec 17 22:55:42.258: INFO: Pod "pod-subpath-test-configmap-7r25": Phase="Running", Reason="", readiness=true. Elapsed: 8.077353635s
Dec 17 22:55:44.283: INFO: Pod "pod-subpath-test-configmap-7r25": Phase="Running", Reason="", readiness=true. Elapsed: 10.10203841s
Dec 17 22:55:46.293: INFO: Pod "pod-subpath-test-configmap-7r25": Phase="Running", Reason="", readiness=true. Elapsed: 12.112377996s
Dec 17 22:55:48.303: INFO: Pod "pod-subpath-test-configmap-7r25": Phase="Running", Reason="", readiness=true. Elapsed: 14.122133792s
Dec 17 22:55:50.316: INFO: Pod "pod-subpath-test-configmap-7r25": Phase="Running", Reason="", readiness=true. Elapsed: 16.134748279s
Dec 17 22:55:52.324: INFO: Pod "pod-subpath-test-configmap-7r25": Phase="Running", Reason="", readiness=true. Elapsed: 18.142556013s
Dec 17 22:55:54.334: INFO: Pod "pod-subpath-test-configmap-7r25": Phase="Running", Reason="", readiness=true. Elapsed: 20.153081914s
Dec 17 22:55:56.346: INFO: Pod "pod-subpath-test-configmap-7r25": Phase="Running", Reason="", readiness=true. Elapsed: 22.164713384s
Dec 17 22:55:58.356: INFO: Pod "pod-subpath-test-configmap-7r25": Phase="Running", Reason="", readiness=true. Elapsed: 24.174860027s
Dec 17 22:56:00.375: INFO: Pod "pod-subpath-test-configmap-7r25": Phase="Running", Reason="", readiness=true. Elapsed: 26.193698883s
Dec 17 22:56:02.386: INFO: Pod "pod-subpath-test-configmap-7r25": Phase="Running", Reason="", readiness=true. Elapsed: 28.20487454s
Dec 17 22:56:04.400: INFO: Pod "pod-subpath-test-configmap-7r25": Phase="Succeeded", Reason="", readiness=false. Elapsed: 30.219024707s
STEP: Saw pod success
Dec 17 22:56:04.401: INFO: Pod "pod-subpath-test-configmap-7r25" satisfied condition "success or failure"
Dec 17 22:56:04.406: INFO: Trying to get logs from node jerma-node pod pod-subpath-test-configmap-7r25 container test-container-subpath-configmap-7r25: <nil>
STEP: delete the pod
Dec 17 22:56:04.593: INFO: Waiting for pod pod-subpath-test-configmap-7r25 to disappear
Dec 17 22:56:04.601: INFO: Pod pod-subpath-test-configmap-7r25 no longer exists
STEP: Deleting pod pod-subpath-test-configmap-7r25
Dec 17 22:56:04.602: INFO: Deleting pod "pod-subpath-test-configmap-7r25" in namespace "subpath-7893"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:56:04.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-7893" for this suite.
Dec 17 22:56:10.653: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:56:10.802: INFO: namespace subpath-7893 deletion completed in 6.190514836s

• [SLOW TEST:36.904 seconds]
[sig-storage] Subpath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
  Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33
    should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
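
Mounting a single ConfigMap key over a file that already exists in the image is done with subPath, which is what pod-subpath-test-configmap-7r25 exercises. A minimal sketch, assuming a hypothetical ConfigMap named my-configmap with a hostname key and a busybox image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-subpath-test-configmap-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "cat /etc/hostname"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "config",
					// Mount one projected key over a file that already exists in the image.
					MountPath: "/etc/hostname",
					SubPath:   "hostname", // key inside the ConfigMap volume
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "config",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"}, // hypothetical
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
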
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:56:10.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating projection with secret that has name secret-emptykey-test-46fb2f4c-c234-4024-93cd-566e4308a141
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:56:10.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-1453" for this suite.
Dec 17 22:56:17.205: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:56:18.733: INFO: namespace secrets-1453 deletion completed in 7.792741347s

• [SLOW TEST:7.930 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
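
This spec never creates a pod: the API server rejects the Secret itself, because an empty key fails validation. A sketch of the same check with client-go (v0.18+ context-style calls, KUBECONFIG assumed):

package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	secret := &corev1.Secret{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "secret-emptykey-test-"},
		Data:       map[string][]byte{"": []byte("value-1")}, // empty key is invalid
	}
	_, err = cs.CoreV1().Secrets("default").Create(context.Background(), secret, metav1.CreateOptions{})
	// The create should fail with an "Invalid" API error; nil would mean
	// validation let the empty key through.
	fmt.Println("IsInvalid:", apierrors.IsInvalid(err))
}
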
SSSSSSS
------------------------------
[k8s.io] Variable Expansion 
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:56:18.733: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename var-expansion
STEP: Waiting for a default service account to be provisioned in namespace
[It] should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test substitution in container's command
Dec 17 22:56:18.858: INFO: Waiting up to 5m0s for pod "var-expansion-478bd4a8-2a7e-4eb0-ba91-1f29ae9c20fa" in namespace "var-expansion-451" to be "success or failure"
Dec 17 22:56:18.879: INFO: Pod "var-expansion-478bd4a8-2a7e-4eb0-ba91-1f29ae9c20fa": Phase="Pending", Reason="", readiness=false. Elapsed: 20.267602ms
Dec 17 22:56:20.889: INFO: Pod "var-expansion-478bd4a8-2a7e-4eb0-ba91-1f29ae9c20fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030330225s
Dec 17 22:56:22.903: INFO: Pod "var-expansion-478bd4a8-2a7e-4eb0-ba91-1f29ae9c20fa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044621291s
Dec 17 22:56:24.910: INFO: Pod "var-expansion-478bd4a8-2a7e-4eb0-ba91-1f29ae9c20fa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.051499854s
Dec 17 22:56:26.918: INFO: Pod "var-expansion-478bd4a8-2a7e-4eb0-ba91-1f29ae9c20fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.059361614s
STEP: Saw pod success
Dec 17 22:56:26.918: INFO: Pod "var-expansion-478bd4a8-2a7e-4eb0-ba91-1f29ae9c20fa" satisfied condition "success or failure"
Dec 17 22:56:26.921: INFO: Trying to get logs from node jerma-node pod var-expansion-478bd4a8-2a7e-4eb0-ba91-1f29ae9c20fa container dapi-container: <nil>
STEP: delete the pod
Dec 17 22:56:27.146: INFO: Waiting for pod var-expansion-478bd4a8-2a7e-4eb0-ba91-1f29ae9c20fa to disappear
Dec 17 22:56:27.156: INFO: Pod var-expansion-478bd4a8-2a7e-4eb0-ba91-1f29ae9c20fa no longer exists
[AfterEach] [k8s.io] Variable Expansion
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:56:27.156: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "var-expansion-451" for this suite.
Dec 17 22:56:33.182: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:56:33.277: INFO: namespace var-expansion-451 deletion completed in 6.116422827s

• [SLOW TEST:14.544 seconds]
[k8s.io] Variable Expansion
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should allow substituting values in a container's command [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
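
Here the kubelet, not a shell, substitutes $(VAR) references in the container command before the process starts. A minimal pod sketch showing the expansion (the busybox image and MESSAGE variable are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "var-expansion-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox", // illustrative
				Env:   []corev1.EnvVar{{Name: "MESSAGE", Value: "test-value"}},
				// $(MESSAGE) is replaced by the kubelet before the command runs;
				// this is Kubernetes variable expansion, not shell substitution.
				Command: []string{"sh", "-c", "echo running: $(MESSAGE)"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
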
SSSSSSS
------------------------------
[k8s.io] Probing container 
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:56:33.278: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
[It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:57:33.697: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-6548" for this suite.
Dec 17 22:58:01.744: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:58:01.864: INFO: namespace container-probe-6548 deletion completed in 28.160045058s

• [SLOW TEST:88.587 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
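
This spec waits a full minute to confirm the pod never becomes Ready and never restarts: readiness failures only gate traffic, they do not trigger restarts the way liveness failures do. A sketch of such a pod, assuming a k8s.io/api release (v0.22+) where the probe field is named ProbeHandler (older releases call it Handler):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "never-ready-"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "probe-test",
				Image:   "busybox", // illustrative
				Command: []string{"sleep", "3600"},
				ReadinessProbe: &corev1.Probe{
					// /bin/false always exits non-zero, so the pod never turns Ready,
					// and RestartCount stays 0 for the life of the pod.
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"/bin/false"}},
					},
					InitialDelaySeconds: 5,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
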
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:58:01.865: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 17 22:58:02.762: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 17 22:58:04.786: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220282, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220282, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220282, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220282, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:58:06.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220282, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220282, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220282, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220282, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:58:08.793: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220282, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220282, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220282, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220282, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:58:10.794: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220282, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220282, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220282, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220282, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 22:58:13.891: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a validating webhook configuration
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Updating a validating webhook configuration's rules to not include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
STEP: Patching a validating webhook configuration's rules to include the create operation
STEP: Creating a configMap that does not comply to the validation webhook rules
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:58:14.203: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-3023" for this suite.
Dec 17 22:58:20.234: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:58:20.409: INFO: namespace webhook-3023 deletion completed in 6.201147161s
STEP: Destroying namespace "webhook-3023-markers" for this suite.
Dec 17 22:58:26.429: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:58:26.562: INFO: namespace webhook-3023-markers deletion completed in 6.15294044s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:24.740 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  patching/updating a validating webhook should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
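
The "Updating a validating webhook configuration's rules" step above is the usual get-modify-update loop, and client-go wraps the conflict retry for you. A sketch with a hypothetical configuration name, assuming client-go v0.18+ context-style calls:

package main

import (
	"context"
	"fmt"
	"os"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/retry"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	client := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations()
	name := "e2e-test-validating-webhook" // hypothetical name

	// Get-modify-update inside RetryOnConflict, retried if another writer races us.
	err = retry.RetryOnConflict(retry.DefaultRetry, func() error {
		hook, err := client.Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return err
		}
		// Drop CREATE so newly created configMaps are no longer validated.
		hook.Webhooks[0].Rules[0].Operations =
			[]admissionregistrationv1.OperationType{admissionregistrationv1.Update}
		_, err = client.Update(context.Background(), hook, metav1.UpdateOptions{})
		return err
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("updated")
}
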
SSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:58:26.606: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] listing custom resource definition objects works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 22:58:26.714: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:58:31.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-5857" for this suite.
Dec 17 22:58:37.239: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:58:37.383: INFO: namespace custom-resource-definition-5857 deletion completed in 6.207446124s

• [SLOW TEST:10.777 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:42
    listing custom resource definition objects works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
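
Listing CustomResourceDefinitions goes through the apiextensions clientset rather than the core one. A minimal sketch against the v1 API (GA since 1.16, the version this run uses), with KUBECONFIG assumed:

package main

import (
	"context"
	"fmt"
	"os"

	apiextensionsclientset "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := apiextensionsclientset.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List every CRD in the cluster and print its name.
	list, err := cs.ApiextensionsV1().CustomResourceDefinitions().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, crd := range list.Items {
		fmt.Println(crd.Name)
	}
}
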
SSSSS
------------------------------
[sig-network] Services 
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:58:37.384: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating service endpoint-test2 in namespace services-5679
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5679 to expose endpoints map[]
Dec 17 22:58:37.638: INFO: Get endpoints failed (4.686188ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
Dec 17 22:58:38.774: INFO: successfully validated that service endpoint-test2 in namespace services-5679 exposes endpoints map[] (1.140151195s elapsed)
STEP: Creating pod pod1 in namespace services-5679
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5679 to expose endpoints map[pod1:[80]]
Dec 17 22:58:42.919: INFO: Unexpected endpoints: found map[], expected map[pod1:[80]] (4.12283395s elapsed, will retry)
Dec 17 22:58:45.971: INFO: successfully validated that service endpoint-test2 in namespace services-5679 exposes endpoints map[pod1:[80]] (7.174609348s elapsed)
STEP: Creating pod pod2 in namespace services-5679
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5679 to expose endpoints map[pod1:[80] pod2:[80]]
Dec 17 22:58:51.371: INFO: Unexpected endpoints: found map[358dab2a-9a24-4426-8cd7-f57f33ef154f:[80]], expected map[pod1:[80] pod2:[80]] (5.389486732s elapsed, will retry)
Dec 17 22:58:53.398: INFO: successfully validated that service endpoint-test2 in namespace services-5679 exposes endpoints map[pod1:[80] pod2:[80]] (7.416719286s elapsed)
STEP: Deleting pod pod1 in namespace services-5679
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5679 to expose endpoints map[pod2:[80]]
Dec 17 22:58:53.441: INFO: successfully validated that service endpoint-test2 in namespace services-5679 exposes endpoints map[pod2:[80]] (27.802301ms elapsed)
STEP: Deleting pod pod2 in namespace services-5679
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-5679 to expose endpoints map[]
Dec 17 22:58:53.512: INFO: successfully validated that service endpoint-test2 in namespace services-5679 exposes endpoints map[] (51.472142ms elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:58:53.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-5679" for this suite.
Dec 17 22:59:21.649: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:59:21.804: INFO: namespace services-5679 deletion completed in 28.215847416s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95

• [SLOW TEST:44.420 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
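
The "waiting up to 3m0s ... to expose endpoints" lines above poll the service's Endpoints object until the expected pod-to-port map appears. A simplified sketch that polls for a given number of ready addresses instead of the exact map (client-go v0.18+ assumed; namespace and service name echo the test, the 2-address target is illustrative):

package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// countAddresses sums ready addresses across all subsets of an Endpoints object.
func countAddresses(ep *corev1.Endpoints) int {
	n := 0
	for _, s := range ep.Subsets {
		n += len(s.Addresses)
	}
	return n
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll until the service exposes the expected number of pod addresses,
	// the same shape of wait the test performs with its 3m timeout.
	err = wait.PollImmediate(2*time.Second, 3*time.Minute, func() (bool, error) {
		ep, err := cs.CoreV1().Endpoints("default").Get(context.Background(), "endpoint-test2", metav1.GetOptions{})
		if err != nil {
			return false, nil // "not found" is retried, as in the log above
		}
		return countAddresses(ep) == 2, nil
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("endpoints ready")
}
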
SSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:59:21.805: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 17 22:59:22.768: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 17 22:59:24.791: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220362, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220362, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220362, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220362, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:59:26.803: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220362, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220362, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220362, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220362, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 22:59:28.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220362, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220362, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220362, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712220362, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 22:59:31.901: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Listing all of the created mutating webhooks
STEP: Creating a configMap that should be mutated
STEP: Deleting the collection of mutating webhooks
STEP: Creating a configMap that should not be mutated
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:59:33.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-2250" for this suite.
Dec 17 22:59:41.045: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:59:41.162: INFO: namespace webhook-2250 deletion completed in 8.137274894s
STEP: Destroying namespace "webhook-2250-markers" for this suite.
Dec 17 22:59:47.350: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 22:59:47.545: INFO: namespace webhook-2250-markers deletion completed in 6.382786706s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:25.762 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  listing mutating webhooks should work [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
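
Listing and bulk-deleting webhook configurations map onto List and DeleteCollection with a label selector. A sketch with a hypothetical label (client-go v0.18+ assumed):

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	client := cs.AdmissionregistrationV1().MutatingWebhookConfigurations()
	sel := metav1.ListOptions{LabelSelector: "e2e-list-test=true"} // hypothetical label

	list, err := client.List(context.Background(), sel)
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d mutating webhook configurations\n", len(list.Items))

	// Remove them all in one call, as the "Deleting the collection" step does.
	if err := client.DeleteCollection(context.Background(), metav1.DeleteOptions{}, sel); err != nil {
		panic(err)
	}
}
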
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Secrets 
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 22:59:47.568: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name secret-test-4a1e9d7a-6f99-427e-af0c-17ca1fc8f4e9
STEP: Creating a pod to test consume secrets
Dec 17 22:59:47.815: INFO: Waiting up to 5m0s for pod "pod-secrets-adf4a8ea-b19c-4ba3-bec7-6d023f59a106" in namespace "secrets-4765" to be "success or failure"
Dec 17 22:59:47.833: INFO: Pod "pod-secrets-adf4a8ea-b19c-4ba3-bec7-6d023f59a106": Phase="Pending", Reason="", readiness=false. Elapsed: 17.419459ms
Dec 17 22:59:49.863: INFO: Pod "pod-secrets-adf4a8ea-b19c-4ba3-bec7-6d023f59a106": Phase="Pending", Reason="", readiness=false. Elapsed: 2.047885445s
Dec 17 22:59:51.876: INFO: Pod "pod-secrets-adf4a8ea-b19c-4ba3-bec7-6d023f59a106": Phase="Pending", Reason="", readiness=false. Elapsed: 4.061002374s
Dec 17 22:59:53.891: INFO: Pod "pod-secrets-adf4a8ea-b19c-4ba3-bec7-6d023f59a106": Phase="Pending", Reason="", readiness=false. Elapsed: 6.075933703s
Dec 17 22:59:55.905: INFO: Pod "pod-secrets-adf4a8ea-b19c-4ba3-bec7-6d023f59a106": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.089775171s
STEP: Saw pod success
Dec 17 22:59:55.905: INFO: Pod "pod-secrets-adf4a8ea-b19c-4ba3-bec7-6d023f59a106" satisfied condition "success or failure"
Dec 17 22:59:55.911: INFO: Trying to get logs from node jerma-node pod pod-secrets-adf4a8ea-b19c-4ba3-bec7-6d023f59a106 container secret-volume-test: <nil>
STEP: delete the pod
Dec 17 22:59:56.013: INFO: Waiting for pod pod-secrets-adf4a8ea-b19c-4ba3-bec7-6d023f59a106 to disappear
Dec 17 22:59:56.018: INFO: Pod pod-secrets-adf4a8ea-b19c-4ba3-bec7-6d023f59a106 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 22:59:56.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-4765" for this suite.
Dec 17 23:00:02.073: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:00:02.246: INFO: namespace secrets-4765 deletion completed in 6.223906398s
STEP: Destroying namespace "secret-namespace-5620" for this suite.
Dec 17 23:00:08.289: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:00:08.443: INFO: namespace secret-namespace-5620 deletion completed in 6.187358814s

• [SLOW TEST:20.877 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
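
The point of this spec is that a secret volume resolves its SecretName in the pod's own namespace, so the identically named secret in secret-namespace-5620 cannot leak into the mount. A minimal pod sketch, assuming a busybox image:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-secrets-"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "secret-volume-test",
				Image:   "busybox", // illustrative
				Command: []string{"sh", "-c", "cat /etc/secret-volume/*"},
				VolumeMounts: []corev1.VolumeMount{{
					Name: "secret-volume", MountPath: "/etc/secret-volume", ReadOnly: true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "secret-volume",
				VolumeSource: corev1.VolumeSource{
					// SecretName is looked up in the pod's namespace only; a same-named
					// secret in a different namespace is irrelevant to this mount.
					Secret: &corev1.SecretVolumeSource{SecretName: "secret-test"},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
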
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:00:08.449: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 17 23:00:08.612: INFO: Waiting up to 5m0s for pod "pod-5e285a36-7675-4f7c-b9a9-6c9952fa2475" in namespace "emptydir-6665" to be "success or failure"
Dec 17 23:00:08.664: INFO: Pod "pod-5e285a36-7675-4f7c-b9a9-6c9952fa2475": Phase="Pending", Reason="", readiness=false. Elapsed: 50.786949ms
Dec 17 23:00:10.672: INFO: Pod "pod-5e285a36-7675-4f7c-b9a9-6c9952fa2475": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05875967s
Dec 17 23:00:12.680: INFO: Pod "pod-5e285a36-7675-4f7c-b9a9-6c9952fa2475": Phase="Pending", Reason="", readiness=false. Elapsed: 4.067476183s
Dec 17 23:00:15.710: INFO: Pod "pod-5e285a36-7675-4f7c-b9a9-6c9952fa2475": Phase="Pending", Reason="", readiness=false. Elapsed: 7.096763234s
Dec 17 23:00:17.718: INFO: Pod "pod-5e285a36-7675-4f7c-b9a9-6c9952fa2475": Phase="Succeeded", Reason="", readiness=false. Elapsed: 9.105635506s
STEP: Saw pod success
Dec 17 23:00:17.719: INFO: Pod "pod-5e285a36-7675-4f7c-b9a9-6c9952fa2475" satisfied condition "success or failure"
Dec 17 23:00:17.723: INFO: Trying to get logs from node jerma-node pod pod-5e285a36-7675-4f7c-b9a9-6c9952fa2475 container test-container: <nil>
STEP: delete the pod
Dec 17 23:00:17.772: INFO: Waiting for pod pod-5e285a36-7675-4f7c-b9a9-6c9952fa2475 to disappear
Dec 17 23:00:17.780: INFO: Pod pod-5e285a36-7675-4f7c-b9a9-6c9952fa2475 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:00:17.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6665" for this suite.
Dec 17 23:00:23.889: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:00:24.037: INFO: namespace emptydir-6665 deletion completed in 6.251326707s

• [SLOW TEST:15.588 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSS
------------------------------
[sig-network] Proxy version v1 
  should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:00:24.038: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename proxy
STEP: Waiting for a default service account to be provisioned in namespace
[It] should proxy through a service and a pod  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: starting an echo server on multiple ports
STEP: creating replication controller proxy-service-kbc5f in namespace proxy-3292
I1217 23:00:24.325555       8 runners.go:184] Created replication controller with name: proxy-service-kbc5f, namespace: proxy-3292, replica count: 1
I1217 23:00:25.376880       8 runners.go:184] proxy-service-kbc5f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 23:00:26.377779       8 runners.go:184] proxy-service-kbc5f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 23:00:27.378816       8 runners.go:184] proxy-service-kbc5f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 23:00:28.379732       8 runners.go:184] proxy-service-kbc5f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 23:00:29.380734       8 runners.go:184] proxy-service-kbc5f Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 23:00:30.381654       8 runners.go:184] proxy-service-kbc5f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1217 23:00:31.382822       8 runners.go:184] proxy-service-kbc5f Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady 
I1217 23:00:32.383980       8 runners.go:184] proxy-service-kbc5f Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 17 23:00:32.393: INFO: setup took 8.183596194s, starting test cases
STEP: running 16 cases, 20 attempts per case, 320 total attempts
Dec 17 23:00:32.413: INFO: (0) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:1080/proxy/: test<... (200; 20.266778ms)
Dec 17 23:00:32.414: INFO: (0) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:1080/proxy/: ... (200; 20.281248ms)
Dec 17 23:00:32.414: INFO: (0) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 21.302398ms)
Dec 17 23:00:32.415: INFO: (0) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 21.828004ms)
Dec 17 23:00:32.415: INFO: (0) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 21.844266ms)
Dec 17 23:00:32.416: INFO: (0) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf/proxy/: test (200; 22.292655ms)
Dec 17 23:00:32.416: INFO: (0) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 22.991921ms)
Dec 17 23:00:32.417: INFO: (0) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname1/proxy/: foo (200; 23.609108ms)
Dec 17 23:00:32.418: INFO: (0) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname2/proxy/: bar (200; 24.603153ms)
Dec 17 23:00:32.418: INFO: (0) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname2/proxy/: bar (200; 24.639964ms)
Dec 17 23:00:32.419: INFO: (0) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname1/proxy/: foo (200; 25.070807ms)
Dec 17 23:00:32.421: INFO: (0) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:460/proxy/: tls baz (200; 27.614227ms)
Dec 17 23:00:32.422: INFO: (0) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:462/proxy/: tls qux (200; 28.003813ms)
Dec 17 23:00:32.425: INFO: (0) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: test (200; 13.798564ms)
Dec 17 23:00:32.441: INFO: (1) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: ... (200; 14.655677ms)
Dec 17 23:00:32.441: INFO: (1) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:462/proxy/: tls qux (200; 14.539729ms)
Dec 17 23:00:32.444: INFO: (1) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 17.365844ms)
Dec 17 23:00:32.445: INFO: (1) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname1/proxy/: foo (200; 17.90252ms)
Dec 17 23:00:32.445: INFO: (1) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:1080/proxy/: test<... (200; 17.999854ms)
Dec 17 23:00:32.445: INFO: (1) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname2/proxy/: bar (200; 17.852194ms)
Dec 17 23:00:32.445: INFO: (1) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname1/proxy/: tls baz (200; 17.947394ms)
Dec 17 23:00:32.445: INFO: (1) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 18.080508ms)
Dec 17 23:00:32.445: INFO: (1) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname2/proxy/: bar (200; 18.049891ms)
Dec 17 23:00:32.445: INFO: (1) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname2/proxy/: tls qux (200; 18.048315ms)
Dec 17 23:00:32.445: INFO: (1) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname1/proxy/: foo (200; 18.292028ms)
Dec 17 23:00:32.453: INFO: (2) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 7.944692ms)
Dec 17 23:00:32.454: INFO: (2) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:1080/proxy/: ... (200; 8.20215ms)
Dec 17 23:00:32.455: INFO: (2) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 9.204643ms)
Dec 17 23:00:32.456: INFO: (2) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:460/proxy/: tls baz (200; 10.046591ms)
Dec 17 23:00:32.457: INFO: (2) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:462/proxy/: tls qux (200; 10.899915ms)
Dec 17 23:00:32.457: INFO: (2) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 11.124228ms)
Dec 17 23:00:32.457: INFO: (2) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: test (200; 11.023624ms)
Dec 17 23:00:32.457: INFO: (2) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:1080/proxy/: test<... (200; 11.348067ms)
Dec 17 23:00:32.457: INFO: (2) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 11.7588ms)
Dec 17 23:00:32.460: INFO: (2) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname1/proxy/: foo (200; 14.546492ms)
Dec 17 23:00:32.461: INFO: (2) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname2/proxy/: tls qux (200; 14.706815ms)
Dec 17 23:00:32.461: INFO: (2) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname2/proxy/: bar (200; 15.346862ms)
Dec 17 23:00:32.462: INFO: (2) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname1/proxy/: foo (200; 16.154731ms)
Dec 17 23:00:32.462: INFO: (2) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname1/proxy/: tls baz (200; 16.074416ms)
Dec 17 23:00:32.462: INFO: (2) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname2/proxy/: bar (200; 16.283375ms)
Dec 17 23:00:32.477: INFO: (3) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:462/proxy/: tls qux (200; 13.946556ms)
Dec 17 23:00:32.477: INFO: (3) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 14.582005ms)
Dec 17 23:00:32.477: INFO: (3) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:1080/proxy/: test<... (200; 14.741212ms)
Dec 17 23:00:32.478: INFO: (3) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:460/proxy/: tls baz (200; 14.530428ms)
Dec 17 23:00:32.478: INFO: (3) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname2/proxy/: bar (200; 14.95302ms)
Dec 17 23:00:32.478: INFO: (3) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:1080/proxy/: ... (200; 15.284442ms)
Dec 17 23:00:32.478: INFO: (3) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname1/proxy/: foo (200; 14.791185ms)
Dec 17 23:00:32.478: INFO: (3) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname1/proxy/: tls baz (200; 15.019881ms)
Dec 17 23:00:32.479: INFO: (3) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 16.926917ms)
Dec 17 23:00:32.479: INFO: (3) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf/proxy/: test (200; 16.252743ms)
Dec 17 23:00:32.479: INFO: (3) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 16.318192ms)
Dec 17 23:00:32.479: INFO: (3) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 16.236263ms)
Dec 17 23:00:32.479: INFO: (3) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname1/proxy/: foo (200; 16.046043ms)
Dec 17 23:00:32.479: INFO: (3) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname2/proxy/: bar (200; 16.500389ms)
Dec 17 23:00:32.479: INFO: (3) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: test<... (200; 17.272341ms)
Dec 17 23:00:32.500: INFO: (4) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname1/proxy/: foo (200; 17.859591ms)
Dec 17 23:00:32.500: INFO: (4) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 17.992213ms)
Dec 17 23:00:32.500: INFO: (4) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname2/proxy/: bar (200; 17.783866ms)
Dec 17 23:00:32.501: INFO: (4) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 18.266187ms)
Dec 17 23:00:32.501: INFO: (4) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname2/proxy/: tls qux (200; 19.265333ms)
Dec 17 23:00:32.502: INFO: (4) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: ... (200; 19.603732ms)
Dec 17 23:00:32.502: INFO: (4) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf/proxy/: test (200; 19.371001ms)
Dec 17 23:00:32.502: INFO: (4) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:462/proxy/: tls qux (200; 19.910401ms)
Dec 17 23:00:32.502: INFO: (4) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname1/proxy/: foo (200; 19.77894ms)
Dec 17 23:00:32.502: INFO: (4) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 20.21081ms)
Dec 17 23:00:32.503: INFO: (4) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 9.776798ms)
Dec 17 23:00:32.508: INFO: (5) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 4.106084ms)
Dec 17 23:00:32.509: INFO: (5) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:1080/proxy/: test<... (200; 5.183009ms)
Dec 17 23:00:32.510: INFO: (5) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:1080/proxy/: ... (200; 6.909588ms)
Dec 17 23:00:32.512: INFO: (5) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 8.841145ms)
Dec 17 23:00:32.512: INFO: (5) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname2/proxy/: bar (200; 8.756451ms)
Dec 17 23:00:32.513: INFO: (5) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:462/proxy/: tls qux (200; 9.573833ms)
Dec 17 23:00:32.513: INFO: (5) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname1/proxy/: foo (200; 10.17856ms)
Dec 17 23:00:32.515: INFO: (5) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: test (200; 16.536647ms)
Dec 17 23:00:32.520: INFO: (5) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname2/proxy/: tls qux (200; 16.607499ms)
Dec 17 23:00:32.520: INFO: (5) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname2/proxy/: bar (200; 16.49427ms)
Dec 17 23:00:32.520: INFO: (5) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname1/proxy/: tls baz (200; 17.43486ms)
Dec 17 23:00:32.520: INFO: (5) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:460/proxy/: tls baz (200; 16.692906ms)
Dec 17 23:00:32.520: INFO: (5) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname1/proxy/: foo (200; 16.743987ms)
Dec 17 23:00:32.520: INFO: (5) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 17.373041ms)
Dec 17 23:00:32.528: INFO: (6) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 6.540834ms)
Dec 17 23:00:32.529: INFO: (6) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf/proxy/: test (200; 7.584315ms)
Dec 17 23:00:32.531: INFO: (6) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:460/proxy/: tls baz (200; 9.936604ms)
Dec 17 23:00:32.531: INFO: (6) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 9.602866ms)
Dec 17 23:00:32.532: INFO: (6) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 10.660466ms)
Dec 17 23:00:32.532: INFO: (6) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname1/proxy/: foo (200; 11.626465ms)
Dec 17 23:00:32.535: INFO: (6) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname1/proxy/: foo (200; 13.872125ms)
Dec 17 23:00:32.535: INFO: (6) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:1080/proxy/: ... (200; 13.62405ms)
Dec 17 23:00:32.535: INFO: (6) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname1/proxy/: tls baz (200; 13.843019ms)
Dec 17 23:00:32.535: INFO: (6) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:462/proxy/: tls qux (200; 14.715456ms)
Dec 17 23:00:32.535: INFO: (6) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 14.146867ms)
Dec 17 23:00:32.536: INFO: (6) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:1080/proxy/: test<... (200; 14.107848ms)
Dec 17 23:00:32.536: INFO: (6) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname2/proxy/: tls qux (200; 15.834821ms)
Dec 17 23:00:32.537: INFO: (6) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: test<... (200; 11.163917ms)
Dec 17 23:00:32.553: INFO: (7) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 11.882182ms)
Dec 17 23:00:32.554: INFO: (7) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:1080/proxy/: ... (200; 12.797943ms)
Dec 17 23:00:32.556: INFO: (7) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 14.419362ms)
Dec 17 23:00:32.556: INFO: (7) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf/proxy/: test (200; 14.838776ms)
Dec 17 23:00:32.558: INFO: (7) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname2/proxy/: bar (200; 17.146089ms)
Dec 17 23:00:32.558: INFO: (7) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname1/proxy/: foo (200; 16.50151ms)
Dec 17 23:00:32.559: INFO: (7) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 17.131729ms)
Dec 17 23:00:32.559: INFO: (7) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname1/proxy/: foo (200; 17.569508ms)
Dec 17 23:00:32.559: INFO: (7) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: test<... (200; 8.372987ms)
Dec 17 23:00:32.573: INFO: (8) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname1/proxy/: foo (200; 11.529775ms)
Dec 17 23:00:32.573: INFO: (8) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:460/proxy/: tls baz (200; 11.828048ms)
Dec 17 23:00:32.574: INFO: (8) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname1/proxy/: tls baz (200; 13.01982ms)
Dec 17 23:00:32.575: INFO: (8) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname1/proxy/: foo (200; 14.087357ms)
Dec 17 23:00:32.575: INFO: (8) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname2/proxy/: bar (200; 13.72497ms)
Dec 17 23:00:32.575: INFO: (8) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 13.758876ms)
Dec 17 23:00:32.575: INFO: (8) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 13.650851ms)
Dec 17 23:00:32.575: INFO: (8) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf/proxy/: test (200; 13.724721ms)
Dec 17 23:00:32.575: INFO: (8) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname2/proxy/: tls qux (200; 14.529602ms)
Dec 17 23:00:32.575: INFO: (8) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:1080/proxy/: ... (200; 14.037339ms)
Dec 17 23:00:32.576: INFO: (8) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 14.762483ms)
Dec 17 23:00:32.576: INFO: (8) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname2/proxy/: bar (200; 15.580589ms)
Dec 17 23:00:32.590: INFO: (9) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:1080/proxy/: ... (200; 11.600848ms)
Dec 17 23:00:32.590: INFO: (9) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:460/proxy/: tls baz (200; 10.561427ms)
Dec 17 23:00:32.590: INFO: (9) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 13.603656ms)
Dec 17 23:00:32.591: INFO: (9) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname2/proxy/: bar (200; 14.52301ms)
Dec 17 23:00:32.591: INFO: (9) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf/proxy/: test (200; 13.750231ms)
Dec 17 23:00:32.591: INFO: (9) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:1080/proxy/: test<... (200; 12.31608ms)
Dec 17 23:00:32.591: INFO: (9) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 11.957704ms)
Dec 17 23:00:32.593: INFO: (9) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 13.480715ms)
Dec 17 23:00:32.595: INFO: (9) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: test<... (200; 16.400787ms)
Dec 17 23:00:32.616: INFO: (10) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname1/proxy/: tls baz (200; 16.275047ms)
Dec 17 23:00:32.616: INFO: (10) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:1080/proxy/: ... (200; 16.834538ms)
Dec 17 23:00:32.616: INFO: (10) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:462/proxy/: tls qux (200; 16.901831ms)
Dec 17 23:00:32.617: INFO: (10) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname2/proxy/: tls qux (200; 16.975061ms)
Dec 17 23:00:32.617: INFO: (10) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf/proxy/: test (200; 16.654629ms)
Dec 17 23:00:32.617: INFO: (10) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 16.405733ms)
Dec 17 23:00:32.618: INFO: (10) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 17.259032ms)
Dec 17 23:00:32.618: INFO: (10) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 17.376924ms)
Dec 17 23:00:32.618: INFO: (10) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname2/proxy/: bar (200; 17.346295ms)
Dec 17 23:00:32.619: INFO: (10) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname1/proxy/: foo (200; 18.369041ms)
Dec 17 23:00:32.628: INFO: (11) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf/proxy/: test (200; 9.091382ms)
Dec 17 23:00:32.628: INFO: (11) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 6.901135ms)
Dec 17 23:00:32.628: INFO: (11) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 7.598763ms)
Dec 17 23:00:32.631: INFO: (11) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 9.332052ms)
Dec 17 23:00:32.631: INFO: (11) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:460/proxy/: tls baz (200; 9.952868ms)
Dec 17 23:00:32.637: INFO: (11) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: test<... (200; 18.980121ms)
Dec 17 23:00:32.640: INFO: (11) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname1/proxy/: foo (200; 19.475723ms)
Dec 17 23:00:32.640: INFO: (11) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname1/proxy/: tls baz (200; 20.962927ms)
Dec 17 23:00:32.641: INFO: (11) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname2/proxy/: tls qux (200; 20.986407ms)
Dec 17 23:00:32.641: INFO: (11) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname2/proxy/: bar (200; 19.54139ms)
Dec 17 23:00:32.641: INFO: (11) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:1080/proxy/: ... (200; 22.027012ms)
Dec 17 23:00:32.641: INFO: (11) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:462/proxy/: tls qux (200; 21.633819ms)
Dec 17 23:00:32.641: INFO: (11) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 20.292575ms)
Dec 17 23:00:32.641: INFO: (11) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname1/proxy/: foo (200; 21.728907ms)
Dec 17 23:00:32.648: INFO: (12) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 6.037118ms)
Dec 17 23:00:32.648: INFO: (12) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:1080/proxy/: test<... (200; 6.393539ms)
Dec 17 23:00:32.649: INFO: (12) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 5.228594ms)
Dec 17 23:00:32.650: INFO: (12) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 7.789701ms)
Dec 17 23:00:32.653: INFO: (12) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:1080/proxy/: ... (200; 8.009108ms)
Dec 17 23:00:32.653: INFO: (12) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname2/proxy/: tls qux (200; 11.543433ms)
Dec 17 23:00:32.654: INFO: (12) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname2/proxy/: bar (200; 11.199026ms)
Dec 17 23:00:32.655: INFO: (12) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:462/proxy/: tls qux (200; 10.785072ms)
Dec 17 23:00:32.655: INFO: (12) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname1/proxy/: foo (200; 9.963274ms)
Dec 17 23:00:32.655: INFO: (12) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname2/proxy/: bar (200; 12.272377ms)
Dec 17 23:00:32.655: INFO: (12) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: test (200; 11.405919ms)
Dec 17 23:00:32.656: INFO: (12) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname1/proxy/: tls baz (200; 11.70269ms)
Dec 17 23:00:32.656: INFO: (12) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname1/proxy/: foo (200; 12.789895ms)
Dec 17 23:00:32.657: INFO: (12) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 13.138355ms)
Dec 17 23:00:32.671: INFO: (13) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:1080/proxy/: ... (200; 13.643717ms)
Dec 17 23:00:32.672: INFO: (13) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: test<... (200; 13.778551ms)
Dec 17 23:00:32.672: INFO: (13) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf/proxy/: test (200; 14.840568ms)
Dec 17 23:00:32.672: INFO: (13) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:462/proxy/: tls qux (200; 15.265162ms)
Dec 17 23:00:32.676: INFO: (13) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname1/proxy/: tls baz (200; 18.638564ms)
Dec 17 23:00:32.677: INFO: (13) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 19.607151ms)
Dec 17 23:00:32.677: INFO: (13) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname2/proxy/: bar (200; 17.708938ms)
Dec 17 23:00:32.677: INFO: (13) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname1/proxy/: foo (200; 17.35184ms)
Dec 17 23:00:32.677: INFO: (13) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname2/proxy/: bar (200; 17.722579ms)
Dec 17 23:00:32.678: INFO: (13) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname1/proxy/: foo (200; 19.292054ms)
Dec 17 23:00:32.679: INFO: (13) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname2/proxy/: tls qux (200; 20.115811ms)
Dec 17 23:00:32.701: INFO: (14) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname2/proxy/: bar (200; 20.258918ms)
Dec 17 23:00:32.701: INFO: (14) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:1080/proxy/: ... (200; 20.687285ms)
Dec 17 23:00:32.701: INFO: (14) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname2/proxy/: bar (200; 21.427302ms)
Dec 17 23:00:32.701: INFO: (14) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: test<... (200; 21.601552ms)
Dec 17 23:00:32.704: INFO: (14) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname1/proxy/: foo (200; 24.518168ms)
Dec 17 23:00:32.704: INFO: (14) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 24.117474ms)
Dec 17 23:00:32.705: INFO: (14) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:460/proxy/: tls baz (200; 24.204886ms)
Dec 17 23:00:32.705: INFO: (14) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf/proxy/: test (200; 24.481075ms)
Dec 17 23:00:32.705: INFO: (14) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 24.458237ms)
Dec 17 23:00:32.705: INFO: (14) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname1/proxy/: tls baz (200; 24.903009ms)
Dec 17 23:00:32.705: INFO: (14) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 25.601272ms)
Dec 17 23:00:32.705: INFO: (14) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname1/proxy/: foo (200; 25.431655ms)
Dec 17 23:00:32.706: INFO: (14) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:462/proxy/: tls qux (200; 26.001956ms)
Dec 17 23:00:32.706: INFO: (14) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 26.454022ms)
Dec 17 23:00:32.706: INFO: (14) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname2/proxy/: tls qux (200; 25.742794ms)
Dec 17 23:00:32.716: INFO: (15) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: ... (200; 10.109513ms)
Dec 17 23:00:32.719: INFO: (15) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf/proxy/: test (200; 12.624681ms)
Dec 17 23:00:32.720: INFO: (15) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:460/proxy/: tls baz (200; 12.720138ms)
Dec 17 23:00:32.720: INFO: (15) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 12.746492ms)
Dec 17 23:00:32.720: INFO: (15) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 12.902662ms)
Dec 17 23:00:32.720: INFO: (15) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:1080/proxy/: test<... (200; 12.995178ms)
Dec 17 23:00:32.720: INFO: (15) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:462/proxy/: tls qux (200; 13.735136ms)
Dec 17 23:00:32.720: INFO: (15) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname2/proxy/: bar (200; 14.108367ms)
Dec 17 23:00:32.720: INFO: (15) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname1/proxy/: foo (200; 13.79242ms)
Dec 17 23:00:32.722: INFO: (15) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 15.598576ms)
Dec 17 23:00:32.723: INFO: (15) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 16.068524ms)
Dec 17 23:00:32.723: INFO: (15) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname1/proxy/: foo (200; 16.953523ms)
Dec 17 23:00:32.723: INFO: (15) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname2/proxy/: tls qux (200; 17.180633ms)
Dec 17 23:00:32.724: INFO: (15) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname2/proxy/: bar (200; 17.063695ms)
Dec 17 23:00:32.725: INFO: (15) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname1/proxy/: tls baz (200; 18.469121ms)
Dec 17 23:00:32.738: INFO: (16) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 12.568866ms)
Dec 17 23:00:32.740: INFO: (16) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 14.623574ms)
Dec 17 23:00:32.740: INFO: (16) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:462/proxy/: tls qux (200; 14.911152ms)
Dec 17 23:00:32.740: INFO: (16) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf/proxy/: test (200; 14.900138ms)
Dec 17 23:00:32.745: INFO: (16) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 19.557046ms)
Dec 17 23:00:32.745: INFO: (16) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:1080/proxy/: ... (200; 19.732498ms)
Dec 17 23:00:32.745: INFO: (16) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname2/proxy/: tls qux (200; 19.921865ms)
Dec 17 23:00:32.745: INFO: (16) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname2/proxy/: bar (200; 19.731348ms)
Dec 17 23:00:32.746: INFO: (16) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 20.336119ms)
Dec 17 23:00:32.748: INFO: (16) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:1080/proxy/: test<... (200; 22.531097ms)
Dec 17 23:00:32.748: INFO: (16) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname1/proxy/: tls baz (200; 23.343428ms)
Dec 17 23:00:32.748: INFO: (16) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname1/proxy/: foo (200; 22.80956ms)
Dec 17 23:00:32.749: INFO: (16) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname2/proxy/: bar (200; 22.904923ms)
Dec 17 23:00:32.749: INFO: (16) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: test<... (200; 22.806827ms)
Dec 17 23:00:32.776: INFO: (17) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname2/proxy/: bar (200; 25.665445ms)
Dec 17 23:00:32.777: INFO: (17) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:462/proxy/: tls qux (200; 26.971909ms)
Dec 17 23:00:32.777: INFO: (17) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname2/proxy/: bar (200; 26.592074ms)
Dec 17 23:00:32.777: INFO: (17) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:1080/proxy/: ... (200; 26.545723ms)
Dec 17 23:00:32.777: INFO: (17) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname2/proxy/: tls qux (200; 26.763834ms)
Dec 17 23:00:32.777: INFO: (17) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf/proxy/: test (200; 27.340437ms)
Dec 17 23:00:32.778: INFO: (17) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: test<... (200; 9.867417ms)
Dec 17 23:00:32.790: INFO: (18) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:460/proxy/: tls baz (200; 9.873364ms)
Dec 17 23:00:32.790: INFO: (18) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 10.511092ms)
Dec 17 23:00:32.791: INFO: (18) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf/proxy/: test (200; 11.04241ms)
Dec 17 23:00:32.791: INFO: (18) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:162/proxy/: bar (200; 11.171855ms)
Dec 17 23:00:32.792: INFO: (18) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: ... (200; 11.765392ms)
Dec 17 23:00:32.792: INFO: (18) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname1/proxy/: foo (200; 12.521918ms)
Dec 17 23:00:32.792: INFO: (18) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname2/proxy/: bar (200; 13.17826ms)
Dec 17 23:00:32.793: INFO: (18) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname2/proxy/: tls qux (200; 13.083624ms)
Dec 17 23:00:32.794: INFO: (18) /api/v1/namespaces/proxy-3292/services/https:proxy-service-kbc5f:tlsportname1/proxy/: tls baz (200; 14.393119ms)
Dec 17 23:00:32.794: INFO: (18) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname2/proxy/: bar (200; 14.542428ms)
Dec 17 23:00:32.795: INFO: (18) /api/v1/namespaces/proxy-3292/services/http:proxy-service-kbc5f:portname1/proxy/: foo (200; 15.080242ms)
Dec 17 23:00:32.807: INFO: (19) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:1080/proxy/: ... (200; 10.851082ms)
Dec 17 23:00:32.808: INFO: (19) /api/v1/namespaces/proxy-3292/pods/http:proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 12.115617ms)
Dec 17 23:00:32.808: INFO: (19) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf/proxy/: test (200; 12.145806ms)
Dec 17 23:00:32.808: INFO: (19) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:462/proxy/: tls qux (200; 11.416367ms)
Dec 17 23:00:32.809: INFO: (19) /api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname2/proxy/: bar (200; 12.127382ms)
Dec 17 23:00:32.809: INFO: (19) /api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:160/proxy/: foo (200; 12.373816ms)
Dec 17 23:00:32.809: INFO: (19) /api/v1/namespaces/proxy-3292/pods/https:proxy-service-kbc5f-mkqnf:443/proxy/: test<... (200; 16.699914ms)
STEP: deleting ReplicationController proxy-service-kbc5f in namespace proxy-3292, will wait for the garbage collector to delete the pods
Dec 17 23:00:32.893: INFO: Deleting ReplicationController proxy-service-kbc5f took: 19.902816ms
Dec 17 23:00:33.194: INFO: Terminating ReplicationController proxy-service-kbc5f pods took: 301.220115ms
[AfterEach] version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:00:46.695: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "proxy-3292" for this suite.
Dec 17 23:00:52.728: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:00:52.905: INFO: namespace proxy-3292 deletion completed in 6.201075278s

• [SLOW TEST:28.867 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
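Editor's note (not part of the captured output): each attempt in the proxy test above exercises the apiserver's proxy subresource, where the resource name encodes an optional scheme prefix (http:/https:) and a port name or number. A minimal sketch of replaying one logged request by hand, assuming a local kubectl proxy on its default port 8001 and that the namespace and pod still exist (the test deletes them at the end):

# Sketch only: replay one of the attempts above through a local API proxy.
kubectl proxy --port=8001 &
# Pod port 160 served "foo" in the log; expect the same body and HTTP 200.
curl -s -o /dev/null -w '%{http_code}\n' \
  http://127.0.0.1:8001/api/v1/namespaces/proxy-3292/pods/proxy-service-kbc5f-mkqnf:160/proxy/
# Named service ports use the same path shape (portname1 served "foo" in the log):
curl -s http://127.0.0.1:8001/api/v1/namespaces/proxy-3292/services/proxy-service-kbc5f:portname1/proxy/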
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:00:52.906: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 23:00:53.001: INFO: Waiting up to 5m0s for pod "downwardapi-volume-b5f0c80e-4433-41d8-bcaa-e836c84383e9" in namespace "projected-4821" to be "success or failure"
Dec 17 23:00:53.025: INFO: Pod "downwardapi-volume-b5f0c80e-4433-41d8-bcaa-e836c84383e9": Phase="Pending", Reason="", readiness=false. Elapsed: 23.355291ms
Dec 17 23:00:55.035: INFO: Pod "downwardapi-volume-b5f0c80e-4433-41d8-bcaa-e836c84383e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.03325045s
Dec 17 23:00:57.043: INFO: Pod "downwardapi-volume-b5f0c80e-4433-41d8-bcaa-e836c84383e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.041319453s
Dec 17 23:00:59.056: INFO: Pod "downwardapi-volume-b5f0c80e-4433-41d8-bcaa-e836c84383e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.054831047s
Dec 17 23:01:01.070: INFO: Pod "downwardapi-volume-b5f0c80e-4433-41d8-bcaa-e836c84383e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.068587296s
STEP: Saw pod success
Dec 17 23:01:01.070: INFO: Pod "downwardapi-volume-b5f0c80e-4433-41d8-bcaa-e836c84383e9" satisfied condition "success or failure"
Dec 17 23:01:01.147: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-b5f0c80e-4433-41d8-bcaa-e836c84383e9 container client-container: 
STEP: delete the pod
Dec 17 23:01:01.197: INFO: Waiting for pod downwardapi-volume-b5f0c80e-4433-41d8-bcaa-e836c84383e9 to disappear
Dec 17 23:01:01.212: INFO: Pod downwardapi-volume-b5f0c80e-4433-41d8-bcaa-e836c84383e9 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:01:01.212: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4821" for this suite.
Dec 17 23:01:07.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:01:07.401: INFO: namespace projected-4821 deletion completed in 6.183860352s

• [SLOW TEST:14.495 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
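Editor's note: the spec above mounts a projected downwardAPI volume and asserts the file mode applied via defaultMode. A minimal sketch of an equivalent pod, with illustrative names, an assumed busybox image, and an assumed mode of 0400 (the log does not record the exact spec the framework generates):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-example    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox                    # assumed image
    command: ["sh", "-c", "ls -l /etc/podinfo"]   # mode is visible in the listing
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      defaultMode: 0400               # the DefaultMode under test (assumed value)
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF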
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Security Context When creating a container with runAsUser 
  should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:01:07.403: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename security-context-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:40
[It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 23:01:07.535: INFO: Waiting up to 5m0s for pod "busybox-user-65534-049968ec-b989-4af4-a174-1280f0c2da79" in namespace "security-context-test-2245" to be "success or failure"
Dec 17 23:01:07.539: INFO: Pod "busybox-user-65534-049968ec-b989-4af4-a174-1280f0c2da79": Phase="Pending", Reason="", readiness=false. Elapsed: 3.938019ms
Dec 17 23:01:09.548: INFO: Pod "busybox-user-65534-049968ec-b989-4af4-a174-1280f0c2da79": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012184378s
Dec 17 23:01:11.556: INFO: Pod "busybox-user-65534-049968ec-b989-4af4-a174-1280f0c2da79": Phase="Pending", Reason="", readiness=false. Elapsed: 4.020698965s
Dec 17 23:01:13.565: INFO: Pod "busybox-user-65534-049968ec-b989-4af4-a174-1280f0c2da79": Phase="Pending", Reason="", readiness=false. Elapsed: 6.029361451s
Dec 17 23:01:15.574: INFO: Pod "busybox-user-65534-049968ec-b989-4af4-a174-1280f0c2da79": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.038322715s
Dec 17 23:01:15.574: INFO: Pod "busybox-user-65534-049968ec-b989-4af4-a174-1280f0c2da79" satisfied condition "success or failure"
[AfterEach] [k8s.io] Security Context
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:01:15.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "security-context-test-2245" for this suite.
Dec 17 23:01:21.619: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:01:21.771: INFO: namespace security-context-test-2245 deletion completed in 6.188626735s

• [SLOW TEST:14.369 seconds]
[k8s.io] Security Context
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  When creating a container with runAsUser
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:44
    should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
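Editor's note: a minimal sketch of a pod equivalent to the one the spec above creates, assuming a busybox image and an illustrative name; the only load-bearing field is securityContext.runAsUser:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-user-65534-example    # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "id -u"]    # should print 65534
    securityContext:
      runAsUser: 65534                # the uid under test ("nobody" by convention)
EOF
kubectl logs busybox-user-65534-example   # expect: 65534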
SSSSSS
------------------------------
[sig-node] ConfigMap 
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:01:21.772: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap configmap-5757/configmap-test-07347139-f39f-4dfc-84f9-b839a1099f82
STEP: Creating a pod to test consume configMaps
Dec 17 23:01:21.936: INFO: Waiting up to 5m0s for pod "pod-configmaps-041c649d-0c59-4335-9fa7-5adc591f5820" in namespace "configmap-5757" to be "success or failure"
Dec 17 23:01:21.983: INFO: Pod "pod-configmaps-041c649d-0c59-4335-9fa7-5adc591f5820": Phase="Pending", Reason="", readiness=false. Elapsed: 46.910075ms
Dec 17 23:01:23.993: INFO: Pod "pod-configmaps-041c649d-0c59-4335-9fa7-5adc591f5820": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056156832s
Dec 17 23:01:26.002: INFO: Pod "pod-configmaps-041c649d-0c59-4335-9fa7-5adc591f5820": Phase="Pending", Reason="", readiness=false. Elapsed: 4.065041011s
Dec 17 23:01:28.013: INFO: Pod "pod-configmaps-041c649d-0c59-4335-9fa7-5adc591f5820": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076354905s
Dec 17 23:01:30.021: INFO: Pod "pod-configmaps-041c649d-0c59-4335-9fa7-5adc591f5820": Phase="Pending", Reason="", readiness=false. Elapsed: 8.084805672s
Dec 17 23:01:32.029: INFO: Pod "pod-configmaps-041c649d-0c59-4335-9fa7-5adc591f5820": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.092118287s
STEP: Saw pod success
Dec 17 23:01:32.029: INFO: Pod "pod-configmaps-041c649d-0c59-4335-9fa7-5adc591f5820" satisfied condition "success or failure"
Dec 17 23:01:32.033: INFO: Trying to get logs from node jerma-node pod pod-configmaps-041c649d-0c59-4335-9fa7-5adc591f5820 container env-test: 
STEP: delete the pod
Dec 17 23:01:32.171: INFO: Waiting for pod pod-configmaps-041c649d-0c59-4335-9fa7-5adc591f5820 to disappear
Dec 17 23:01:32.189: INFO: Pod pod-configmaps-041c649d-0c59-4335-9fa7-5adc591f5820 no longer exists
[AfterEach] [sig-node] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:01:32.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-5757" for this suite.
Dec 17 23:01:38.237: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:01:38.398: INFO: namespace configmap-5757 deletion completed in 6.189705146s

• [SLOW TEST:16.626 seconds]
[sig-node] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:32
  should be consumable via environment variable [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
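Editor's note: a minimal sketch of consuming a ConfigMap key through an environment variable, as exercised above; the ConfigMap name, key, and variable name here are illustrative assumptions:

kubectl create configmap configmap-test-example --from-literal=data-1=value-1
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-configmaps-example        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox
    command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
    env:
    - name: CONFIG_DATA_1
      valueFrom:
        configMapKeyRef:
          name: configmap-test-example
          key: data-1
EOF
kubectl logs pod-configmaps-example   # expect: CONFIG_DATA_1=value-1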
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:01:38.400: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir volume type on node default medium
Dec 17 23:01:38.538: INFO: Waiting up to 5m0s for pod "pod-f7a7b143-c9b1-400c-94af-0e42b1385bcb" in namespace "emptydir-659" to be "success or failure"
Dec 17 23:01:38.590: INFO: Pod "pod-f7a7b143-c9b1-400c-94af-0e42b1385bcb": Phase="Pending", Reason="", readiness=false. Elapsed: 51.757129ms
Dec 17 23:01:40.609: INFO: Pod "pod-f7a7b143-c9b1-400c-94af-0e42b1385bcb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.07060981s
Dec 17 23:01:42.629: INFO: Pod "pod-f7a7b143-c9b1-400c-94af-0e42b1385bcb": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09069498s
Dec 17 23:01:44.652: INFO: Pod "pod-f7a7b143-c9b1-400c-94af-0e42b1385bcb": Phase="Pending", Reason="", readiness=false. Elapsed: 6.113769278s
Dec 17 23:01:46.658: INFO: Pod "pod-f7a7b143-c9b1-400c-94af-0e42b1385bcb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.119704103s
STEP: Saw pod success
Dec 17 23:01:46.658: INFO: Pod "pod-f7a7b143-c9b1-400c-94af-0e42b1385bcb" satisfied condition "success or failure"
Dec 17 23:01:46.662: INFO: Trying to get logs from node jerma-node pod pod-f7a7b143-c9b1-400c-94af-0e42b1385bcb container test-container: 
STEP: delete the pod
Dec 17 23:01:46.786: INFO: Waiting for pod pod-f7a7b143-c9b1-400c-94af-0e42b1385bcb to disappear
Dec 17 23:01:46.797: INFO: Pod pod-f7a7b143-c9b1-400c-94af-0e42b1385bcb no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:01:46.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-659" for this suite.
Dec 17 23:01:52.844: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:01:52.944: INFO: namespace emptydir-659 deletion completed in 6.132783277s

• [SLOW TEST:14.544 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
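Editor's note: a minimal sketch of the volume shape tested above, with assumed names and image. An emptyDir with no medium field lands on node-local disk (the "default medium"); the spec asserts the mode of the resulting mount, which the stat call below makes visible:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-example          # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "stat -c '%a %F' /test-volume"]   # prints mode and file type
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                      # default medium: node disk (no medium: Memory)
EOF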
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:01:52.944: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation
Dec 17 23:01:53.139: INFO: >>> kubeConfig: /root/.kube/config
STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation
Dec 17 23:02:08.201: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 23:02:12.245: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:02:27.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2933" for this suite.
Dec 17 23:02:33.236: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:02:33.393: INFO: namespace crd-publish-openapi-2933 deletion completed in 6.200435019s

• [SLOW TEST:40.449 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for multiple CRDs of same group but different versions [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
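Editor's note: the spec above serves multiple versions of the same CRD group and checks that all of them surface in the aggregated OpenAPI document. A minimal sketch with a hypothetical group and kind (apiextensions.k8s.io/v1 is available on the v1.16 cluster under test); the grep at the end assumes the published definition names follow the reversed-group convention, which may differ in detail:

kubectl apply -f - <<'EOF'
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com              # hypothetical group and plural
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true                     # exactly one version stores objects
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
  - name: v2
    served: true
    storage: false
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
# Both served versions should appear in the published OpenAPI document:
kubectl get --raw /openapi/v2 | grep -o 'com.example.v[12].Foo' | sort -u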
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:02:33.394: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a test headless service
STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8818.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-8818.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8818.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-8818.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8818.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-8818.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-8818.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-8818.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-8818.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8818.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done
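Editor's note on the probe commands above: the doubled dollar signs appear to be an escaping artifact of the layer that renders the pod command, so each $$ reaches the probe shell as a single $. Written for an interactive shell, one wheezy probe reduces to roughly:

# De-escaped sketch of a single probe iteration (names taken from the log):
check="$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local A)" \
  && test -n "$check" \
  && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local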

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 17 23:02:47.658: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local from pod dns-8818/dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573: the server could not find the requested resource (get pods dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573)
Dec 17 23:02:47.667: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local from pod dns-8818/dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573: the server could not find the requested resource (get pods dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573)
Dec 17 23:02:47.677: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-8818.svc.cluster.local from pod dns-8818/dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573: the server could not find the requested resource (get pods dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573)
Dec 17 23:02:47.686: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-8818.svc.cluster.local from pod dns-8818/dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573: the server could not find the requested resource (get pods dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573)
Dec 17 23:02:47.733: INFO: Unable to read wheezy_udp@PodARecord from pod dns-8818/dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573: the server could not find the requested resource (get pods dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573)
Dec 17 23:02:47.740: INFO: Unable to read wheezy_tcp@PodARecord from pod dns-8818/dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573: the server could not find the requested resource (get pods dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573)
Dec 17 23:02:47.749: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local from pod dns-8818/dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573: the server could not find the requested resource (get pods dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573)
Dec 17 23:02:47.755: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local from pod dns-8818/dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573: the server could not find the requested resource (get pods dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573)
Dec 17 23:02:47.761: INFO: Unable to read jessie_udp@dns-test-service-2.dns-8818.svc.cluster.local from pod dns-8818/dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573: the server could not find the requested resource (get pods dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573)
Dec 17 23:02:47.765: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-8818.svc.cluster.local from pod dns-8818/dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573: the server could not find the requested resource (get pods dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573)
Dec 17 23:02:47.775: INFO: Unable to read jessie_udp@PodARecord from pod dns-8818/dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573: the server could not find the requested resource (get pods dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573)
Dec 17 23:02:47.784: INFO: Unable to read jessie_tcp@PodARecord from pod dns-8818/dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573: the server could not find the requested resource (get pods dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573)
Dec 17 23:02:47.784: INFO: Lookups using dns-8818/dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local wheezy_udp@dns-test-service-2.dns-8818.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-8818.svc.cluster.local wheezy_udp@PodARecord wheezy_tcp@PodARecord jessie_udp@dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local jessie_udp@dns-test-service-2.dns-8818.svc.cluster.local jessie_tcp@dns-test-service-2.dns-8818.svc.cluster.local jessie_udp@PodARecord jessie_tcp@PodARecord]

Dec 17 23:02:52.886: INFO: DNS probes using dns-8818/dns-test-6a8a9c0f-418c-4ad8-9150-9e7d61035573 succeeded

STEP: deleting the pod
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:02:53.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-8818" for this suite.
Dec 17 23:02:59.286: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:02:59.403: INFO: namespace dns-8818 deletion completed in 6.203885024s

• [SLOW TEST:26.009 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for pods for Subdomain [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
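Editor's note: the records probed above follow the headless-service subdomain scheme: a pod that sets hostname and subdomain matching a headless service gets an A record of the form <hostname>.<subdomain>.<namespace>.svc.<cluster-domain>, while the service name itself resolves to the IPs of all backing pods. A sketch of checking both shapes from any pod with dig installed (service and namespace names are from the log; the querying pod is hypothetical, and the namespace is deleted after the run):

# Service-level record: all pod IPs behind the headless service.
dig +search +noall +answer dns-test-service-2.dns-8818.svc.cluster.local A
# Pod-level record: the hostname.subdomain form for an individual pod.
dig +search +noall +answer dns-querier-2.dns-test-service-2.dns-8818.svc.cluster.local A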
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:02:59.405: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name projected-configmap-test-volume-968a36c1-a1ba-4904-bf25-cf1bbd5b8a86
STEP: Creating a pod to test consume configMaps
Dec 17 23:02:59.526: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-667fd8fe-9d7f-4628-b564-7aee8e7f8b52" in namespace "projected-142" to be "success or failure"
Dec 17 23:02:59.538: INFO: Pod "pod-projected-configmaps-667fd8fe-9d7f-4628-b564-7aee8e7f8b52": Phase="Pending", Reason="", readiness=false. Elapsed: 10.582364ms
Dec 17 23:03:01.549: INFO: Pod "pod-projected-configmaps-667fd8fe-9d7f-4628-b564-7aee8e7f8b52": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022008603s
Dec 17 23:03:03.560: INFO: Pod "pod-projected-configmaps-667fd8fe-9d7f-4628-b564-7aee8e7f8b52": Phase="Pending", Reason="", readiness=false. Elapsed: 4.032662609s
Dec 17 23:03:05.567: INFO: Pod "pod-projected-configmaps-667fd8fe-9d7f-4628-b564-7aee8e7f8b52": Phase="Pending", Reason="", readiness=false. Elapsed: 6.039843233s
Dec 17 23:03:07.578: INFO: Pod "pod-projected-configmaps-667fd8fe-9d7f-4628-b564-7aee8e7f8b52": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.051310869s
STEP: Saw pod success
Dec 17 23:03:07.578: INFO: Pod "pod-projected-configmaps-667fd8fe-9d7f-4628-b564-7aee8e7f8b52" satisfied condition "success or failure"
Dec 17 23:03:07.583: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-667fd8fe-9d7f-4628-b564-7aee8e7f8b52 container projected-configmap-volume-test: 
STEP: delete the pod
Dec 17 23:03:07.657: INFO: Waiting for pod pod-projected-configmaps-667fd8fe-9d7f-4628-b564-7aee8e7f8b52 to disappear
Dec 17 23:03:07.664: INFO: Pod pod-projected-configmaps-667fd8fe-9d7f-4628-b564-7aee8e7f8b52 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:03:07.664: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-142" for this suite.
Dec 17 23:03:13.827: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:03:14.016: INFO: namespace projected-142 deletion completed in 6.3442416s

• [SLOW TEST:14.612 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
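For reference, a sketch of the volume this test builds, using the same k8s.io/api types whose dumps appear elsewhere in this log: a projected volume sourcing a ConfigMap with an explicit defaultMode. The volume name and the 0400 mode are illustrative, not taken from the run:

package main

// Sketch of the pod volume the test creates: a projected volume that
// sources a ConfigMap and sets an explicit defaultMode on the mounted
// files. Names and the 0400 mode are illustrative.
import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    mode := int32(0400) // stored as decimal (256) in the API
    vol := corev1.Volume{
        Name: "projected-configmap-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                DefaultMode: &mode,
                Sources: []corev1.VolumeProjection{{
                    ConfigMap: &corev1.ConfigMapProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
                    },
                }},
            },
        },
    }
    fmt.Printf("%+v\n", vol)
}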
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:03:14.018: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77
STEP: Creating service test in namespace statefulset-3773
[It] should have a working scale subresource [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating statefulset ss in namespace statefulset-3773
Dec 17 23:03:14.248: INFO: Found 0 stateful pods, waiting for 1
Dec 17 23:03:24.257: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: getting scale subresource
STEP: updating a scale subresource
STEP: verifying the statefulset Spec.Replicas was modified
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
Dec 17 23:03:24.284: INFO: Deleting all statefulset in ns statefulset-3773
Dec 17 23:03:24.303: INFO: Scaling statefulset ss to 0
Dec 17 23:03:34.448: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 23:03:34.453: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:03:34.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3773" for this suite.
Dec 17 23:03:40.551: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:03:40.696: INFO: namespace statefulset-3773 deletion completed in 6.173740952s

• [SLOW TEST:26.678 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
    should have a working scale subresource [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
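The "getting/updating a scale subresource" steps map to GetScale/UpdateScale on the apps/v1 client. A sketch under current client-go signatures, which take a context (the v1.16-era client used in this run omits that argument); namespace and name echo this run:

package main

// Sketch of the scale-subresource round trip the test performs:
// read /scale for a StatefulSet, bump replicas, write it back.
import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()
    scale, err := cs.AppsV1().StatefulSets("statefulset-3773").GetScale(ctx, "ss", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    scale.Spec.Replicas++ // the test then verifies the StatefulSet's Spec.Replicas changed
    if _, err := cs.AppsV1().StatefulSets("statefulset-3773").UpdateScale(ctx, "ss", scale, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
    fmt.Println("scale updated")
}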
SSSSSSSSSSSSSSSS
------------------------------
[sig-apps] Job 
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:03:40.697: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename job
STEP: Waiting for a default service account to be provisioned in namespace
[It] should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a job
STEP: Ensuring active pods == parallelism
STEP: Orphaning one of the Job's Pods
Dec 17 23:03:51.315: INFO: Successfully updated pod "adopt-release-jxztf"
STEP: Checking that the Job readopts the Pod
Dec 17 23:03:51.316: INFO: Waiting up to 15m0s for pod "adopt-release-jxztf" in namespace "job-7437" to be "adopted"
Dec 17 23:03:51.335: INFO: Pod "adopt-release-jxztf": Phase="Running", Reason="", readiness=true. Elapsed: 19.075288ms
Dec 17 23:03:53.345: INFO: Pod "adopt-release-jxztf": Phase="Running", Reason="", readiness=true. Elapsed: 2.028846504s
Dec 17 23:03:53.345: INFO: Pod "adopt-release-jxztf" satisfied condition "adopted"
STEP: Removing the labels from the Job's Pod
Dec 17 23:03:53.887: INFO: Successfully updated pod "adopt-release-jxztf"
STEP: Checking that the Job releases the Pod
Dec 17 23:03:53.887: INFO: Waiting up to 15m0s for pod "adopt-release-jxztf" in namespace "job-7437" to be "released"
Dec 17 23:03:53.907: INFO: Pod "adopt-release-jxztf": Phase="Running", Reason="", readiness=true. Elapsed: 20.070813ms
Dec 17 23:03:55.915: INFO: Pod "adopt-release-jxztf": Phase="Running", Reason="", readiness=true. Elapsed: 2.02757869s
Dec 17 23:03:55.915: INFO: Pod "adopt-release-jxztf" satisfied condition "released"
[AfterEach] [sig-apps] Job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:03:55.915: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "job-7437" for this suite.
Dec 17 23:04:58.028: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:04:58.258: INFO: namespace job-7437 deletion completed in 1m2.338271456s

• [SLOW TEST:77.561 seconds]
[sig-apps] Job
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should adopt matching orphans and release non-matching pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
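Adoption and release are visible on the pod itself: the Job controller adds a controller ownerReference when the pod's labels match its selector and drops it when the labels are removed, which is what the "adopted"/"released" conditions above are waiting on. A sketch that inspects this for the pod from the run above (current client-go signatures):

package main

// Sketch of how "adopted" vs "released" is observable: the Job controller
// sets (or removes) a controller ownerReference on the pod.
import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    pod, err := cs.CoreV1().Pods("job-7437").Get(context.Background(), "adopt-release-jxztf", metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    if ref := metav1.GetControllerOf(pod); ref != nil {
        fmt.Printf("adopted by %s/%s\n", ref.Kind, ref.Name)
    } else {
        fmt.Println("released (no controller reference)")
    }
}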
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:04:58.259: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 23:04:58.371: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7b1920f7-ad98-4faf-bda9-9d524587f6e9" in namespace "downward-api-6105" to be "success or failure"
Dec 17 23:04:58.421: INFO: Pod "downwardapi-volume-7b1920f7-ad98-4faf-bda9-9d524587f6e9": Phase="Pending", Reason="", readiness=false. Elapsed: 50.222606ms
Dec 17 23:05:00.438: INFO: Pod "downwardapi-volume-7b1920f7-ad98-4faf-bda9-9d524587f6e9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066939194s
Dec 17 23:05:02.449: INFO: Pod "downwardapi-volume-7b1920f7-ad98-4faf-bda9-9d524587f6e9": Phase="Pending", Reason="", readiness=false. Elapsed: 4.07811941s
Dec 17 23:05:04.462: INFO: Pod "downwardapi-volume-7b1920f7-ad98-4faf-bda9-9d524587f6e9": Phase="Pending", Reason="", readiness=false. Elapsed: 6.090931387s
Dec 17 23:05:06.476: INFO: Pod "downwardapi-volume-7b1920f7-ad98-4faf-bda9-9d524587f6e9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.1048793s
STEP: Saw pod success
Dec 17 23:05:06.476: INFO: Pod "downwardapi-volume-7b1920f7-ad98-4faf-bda9-9d524587f6e9" satisfied condition "success or failure"
Dec 17 23:05:06.485: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-7b1920f7-ad98-4faf-bda9-9d524587f6e9 container client-container: 
STEP: delete the pod
Dec 17 23:05:06.843: INFO: Waiting for pod downwardapi-volume-7b1920f7-ad98-4faf-bda9-9d524587f6e9 to disappear
Dec 17 23:05:06.864: INFO: Pod downwardapi-volume-7b1920f7-ad98-4faf-bda9-9d524587f6e9 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:05:06.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6105" for this suite.
Dec 17 23:05:12.915: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:05:13.011: INFO: namespace downward-api-6105 deletion completed in 6.138524321s

• [SLOW TEST:14.752 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
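A sketch of the downward-API volume this test mounts: a file whose content is populated from the container's memory limit via a resourceFieldRef. The volume and file names are illustrative; the container name is the one from this run:

package main

// Sketch of the downward-API volume file carrying the container's
// memory limit. Volume/file names are illustrative.
import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "podinfo",
        VolumeSource: corev1.VolumeSource{
            DownwardAPI: &corev1.DownwardAPIVolumeSource{
                Items: []corev1.DownwardAPIVolumeFile{{
                    Path: "memory_limit",
                    ResourceFieldRef: &corev1.ResourceFieldSelector{
                        ContainerName: "client-container", // container name from the run above
                        Resource:      "limits.memory",
                    },
                }},
            },
        },
    }
    fmt.Printf("%+v\n", vol)
}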
SSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period 
  should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:05:13.012: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:47
[It] should be submitted and removed [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
STEP: setting up selector
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
Dec 17 23:05:21.260: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy -p 0'
STEP: deleting the pod gracefully
STEP: verifying the kubelet observed the termination notice
Dec 17 23:05:31.468: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed
[AfterEach] [k8s.io] [sig-node] Pods Extended
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:05:31.477: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-5050" for this suite.
Dec 17 23:05:37.675: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:05:37.823: INFO: namespace pods-5050 deletion completed in 6.338520756s

• [SLOW TEST:24.811 seconds]
[k8s.io] [sig-node] Pods Extended
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  [k8s.io] Delete Grace Period
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
    should be submitted and removed [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
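The "deleting the pod gracefully" step corresponds to a delete with an explicit grace period; the kubelet then observes the termination notice, and once the pod is gone from the API the framework concludes the request completed. A sketch with a hypothetical pod name (current client-go signatures):

package main

// Sketch of a graceful pod delete with an explicit grace period,
// as exercised by the test above. Pod name is hypothetical.
import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    grace := int64(30) // seconds
    err = cs.CoreV1().Pods("pods-5050").Delete(context.Background(), "my-pod",
        metav1.DeleteOptions{GracePeriodSeconds: &grace})
    fmt.Println(err)
}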
SSS
------------------------------
[k8s.io] Probing container 
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:05:37.824: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
[It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod busybox-85fa4a17-2e2e-47eb-8d32-813f04ee6974 in namespace container-probe-209
Dec 17 23:05:45.939: INFO: Started pod busybox-85fa4a17-2e2e-47eb-8d32-813f04ee6974 in namespace container-probe-209
STEP: checking the pod's current state and verifying that restartCount is present
Dec 17 23:05:45.943: INFO: Initial restart count of pod busybox-85fa4a17-2e2e-47eb-8d32-813f04ee6974 is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:09:46.155: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-209" for this suite.
Dec 17 23:09:52.196: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:09:52.284: INFO: namespace container-probe-209 deletion completed in 6.119738346s

• [SLOW TEST:254.460 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
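A sketch of the liveness probe under test: an exec probe running cat /tmp/health, which keeps succeeding, so the restart count stays at its initial 0 for the whole observation window (the roughly four minutes between the restart-count check and teardown above). Timing values are illustrative; note the embedded field is Handler in the 1.16-era API of this run (later releases renamed it ProbeHandler):

package main

// Sketch of an exec liveness probe that always succeeds, so the
// container is never restarted. Timing values are illustrative.
import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    probe := corev1.Probe{
        Handler: corev1.Handler{ // ProbeHandler in newer k8s.io/api releases
            Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
        },
        InitialDelaySeconds: 15,
        PeriodSeconds:       5,
        FailureThreshold:    1,
    }
    fmt.Printf("%+v\n", probe)
}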
S
------------------------------
[sig-storage] HostPath 
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:09:52.285: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename hostpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37
[It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test hostPath mode
Dec 17 23:09:52.503: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-2910" to be "success or failure"
Dec 17 23:09:52.581: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 77.457843ms
Dec 17 23:09:54.600: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096096174s
Dec 17 23:09:56.638: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.134931796s
Dec 17 23:09:58.650: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 6.146815426s
Dec 17 23:10:00.674: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 8.170181879s
Dec 17 23:10:02.683: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.179145765s
Dec 17 23:10:04.690: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 12.18639263s
STEP: Saw pod success
Dec 17 23:10:04.690: INFO: Pod "pod-host-path-test" satisfied condition "success or failure"
Dec 17 23:10:04.694: INFO: Trying to get logs from node jerma-node pod pod-host-path-test container test-container-1: 
STEP: delete the pod
Dec 17 23:10:04.840: INFO: Waiting for pod pod-host-path-test to disappear
Dec 17 23:10:04.850: INFO: Pod pod-host-path-test no longer exists
[AfterEach] [sig-storage] HostPath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:10:04.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "hostpath-2910" for this suite.
Dec 17 23:10:10.879: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:10:10.995: INFO: namespace hostpath-2910 deletion completed in 6.13826953s

• [SLOW TEST:18.710 seconds]
[sig-storage] HostPath
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34
  should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
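A sketch of the hostPath volume this test mounts; the pod's test containers then stat the mount point to verify its mode. Path and type here are illustrative, not taken from the run:

package main

// Sketch of a hostPath volume of the kind the mode test mounts.
// Path and type are illustrative.
import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    hpType := corev1.HostPathDirectoryOrCreate
    vol := corev1.Volume{
        Name: "test-volume",
        VolumeSource: corev1.VolumeSource{
            HostPath: &corev1.HostPathVolumeSource{
                Path: "/tmp/test-volume",
                Type: &hpType,
            },
        },
    }
    fmt.Printf("%+v\n", vol)
}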
SSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:10:10.995: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name secret-test-e1e4851e-9518-439b-948a-56d47bce8eda
STEP: Creating a pod to test consume secrets
Dec 17 23:10:11.073: INFO: Waiting up to 5m0s for pod "pod-secrets-4f742109-169a-40dc-963d-b28f9cc78348" in namespace "secrets-6653" to be "success or failure"
Dec 17 23:10:11.088: INFO: Pod "pod-secrets-4f742109-169a-40dc-963d-b28f9cc78348": Phase="Pending", Reason="", readiness=false. Elapsed: 14.916666ms
Dec 17 23:10:13.095: INFO: Pod "pod-secrets-4f742109-169a-40dc-963d-b28f9cc78348": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022021415s
Dec 17 23:10:15.107: INFO: Pod "pod-secrets-4f742109-169a-40dc-963d-b28f9cc78348": Phase="Pending", Reason="", readiness=false. Elapsed: 4.034345003s
Dec 17 23:10:17.116: INFO: Pod "pod-secrets-4f742109-169a-40dc-963d-b28f9cc78348": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042767043s
Dec 17 23:10:19.143: INFO: Pod "pod-secrets-4f742109-169a-40dc-963d-b28f9cc78348": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.070299117s
STEP: Saw pod success
Dec 17 23:10:19.144: INFO: Pod "pod-secrets-4f742109-169a-40dc-963d-b28f9cc78348" satisfied condition "success or failure"
Dec 17 23:10:19.153: INFO: Trying to get logs from node jerma-node pod pod-secrets-4f742109-169a-40dc-963d-b28f9cc78348 container secret-volume-test: 
STEP: delete the pod
Dec 17 23:10:19.434: INFO: Waiting for pod pod-secrets-4f742109-169a-40dc-963d-b28f9cc78348 to disappear
Dec 17 23:10:19.438: INFO: Pod pod-secrets-4f742109-169a-40dc-963d-b28f9cc78348 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:10:19.438: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6653" for this suite.
Dec 17 23:10:25.580: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:10:25.729: INFO: namespace secrets-6653 deletion completed in 6.284158535s

• [SLOW TEST:14.734 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
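This variant combines three knobs: the pod runs as a non-root UID, an fsGroup controls group ownership of the mounted files, and the secret volume carries its own defaultMode. A sketch with illustrative UID/GID, mode, and secret name:

package main

// Sketch of the security-context pieces this test combines: non-root
// user, fsGroup ownership of mounted files, and a per-volume defaultMode.
// All values are illustrative.
import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    uid, gid := int64(1000), int64(2000)
    mode := int32(0440)
    podSec := corev1.PodSecurityContext{
        RunAsUser: &uid,
        FSGroup:   &gid, // mounted files get this group ownership
    }
    vol := corev1.Volume{
        Name: "secret-volume",
        VolumeSource: corev1.VolumeSource{
            Secret: &corev1.SecretVolumeSource{
                SecretName:  "secret-test",
                DefaultMode: &mode,
            },
        },
    }
    fmt.Printf("%+v\n%+v\n", podSec, vol)
}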
SSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:10:25.730: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-test-volume-f107d006-0989-4635-9050-5cd75516c770
STEP: Creating a pod to test consume configMaps
Dec 17 23:10:25.842: INFO: Waiting up to 5m0s for pod "pod-configmaps-2ab881d4-c58f-43ef-b7ac-6e111b497485" in namespace "configmap-9202" to be "success or failure"
Dec 17 23:10:25.847: INFO: Pod "pod-configmaps-2ab881d4-c58f-43ef-b7ac-6e111b497485": Phase="Pending", Reason="", readiness=false. Elapsed: 5.719083ms
Dec 17 23:10:27.860: INFO: Pod "pod-configmaps-2ab881d4-c58f-43ef-b7ac-6e111b497485": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017986865s
Dec 17 23:10:29.873: INFO: Pod "pod-configmaps-2ab881d4-c58f-43ef-b7ac-6e111b497485": Phase="Pending", Reason="", readiness=false. Elapsed: 4.031530646s
Dec 17 23:10:31.937: INFO: Pod "pod-configmaps-2ab881d4-c58f-43ef-b7ac-6e111b497485": Phase="Pending", Reason="", readiness=false. Elapsed: 6.095410716s
Dec 17 23:10:33.951: INFO: Pod "pod-configmaps-2ab881d4-c58f-43ef-b7ac-6e111b497485": Phase="Pending", Reason="", readiness=false. Elapsed: 8.108892168s
Dec 17 23:10:35.965: INFO: Pod "pod-configmaps-2ab881d4-c58f-43ef-b7ac-6e111b497485": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.122972491s
STEP: Saw pod success
Dec 17 23:10:35.965: INFO: Pod "pod-configmaps-2ab881d4-c58f-43ef-b7ac-6e111b497485" satisfied condition "success or failure"
Dec 17 23:10:35.975: INFO: Trying to get logs from node jerma-node pod pod-configmaps-2ab881d4-c58f-43ef-b7ac-6e111b497485 container configmap-volume-test: 
STEP: delete the pod
Dec 17 23:10:36.307: INFO: Waiting for pod pod-configmaps-2ab881d4-c58f-43ef-b7ac-6e111b497485 to disappear
Dec 17 23:10:36.318: INFO: Pod pod-configmaps-2ab881d4-c58f-43ef-b7ac-6e111b497485 no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:10:36.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9202" for this suite.
Dec 17 23:10:42.362: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:10:42.482: INFO: namespace configmap-9202 deletion completed in 6.147435312s

• [SLOW TEST:16.753 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
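Plain configMap volumes take DefaultMode directly on corev1.ConfigMapVolumeSource, with no projection wrapper, but the shape otherwise matches the projected-configMap sketch earlier in this section. One detail worth knowing when reading this log: the API serializes modes as decimal int32s, so the DefaultMode:*420 in the pod dump later in this section is octal 0644. A one-line check:

package main

// The API stores file modes as decimal int32s; 420 decimal is 0644 octal.
import "fmt"

func main() {
    fmt.Printf("%o\n", 420) // prints 644
}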
SS
------------------------------
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] 
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:10:42.483: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:126
STEP: Setting up server cert
STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication
STEP: Deploying the custom resource conversion webhook pod
STEP: Wait for the deployment to be ready
Dec 17 23:10:43.745: INFO: new replicaset for deployment "sample-crd-conversion-webhook-deployment" is yet to be created
Dec 17 23:10:45.801: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221043, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221043, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221043, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221043, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-64d485d9bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 23:10:47.831: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221043, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221043, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221043, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221043, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-64d485d9bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 23:10:49.820: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221043, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221043, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221043, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221043, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-64d485d9bb\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 23:10:52.902: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1
[It] should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 23:10:52.939: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating a v1 custom resource
STEP: v2 custom resource should be converted
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:10:54.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-webhook-5997" for this suite.
Dec 17 23:11:02.603: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:11:02.693: INFO: namespace crd-webhook-5997 deletion completed in 8.150774702s
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:137

• [SLOW TEST:20.224 seconds]
[sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should be able to convert from CR v1 to CR v2 [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
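The conversion traffic is routed by the CRD's conversion stanza, which points the API server at the webhook service deployed above. A sketch using the apiextensions.k8s.io/v1 types; the service name and namespace echo this run, while the path and CA bundle are placeholders:

package main

// Sketch of a CRD conversion stanza routing v1<->v2 conversion through
// a webhook service. Path and CA bundle are placeholders.
import (
    "fmt"

    apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
)

func main() {
    path := "/crdconvert"
    conv := apiextensionsv1.CustomResourceConversion{
        Strategy: apiextensionsv1.WebhookConverter,
        Webhook: &apiextensionsv1.WebhookConversion{
            ConversionReviewVersions: []string{"v1", "v1beta1"},
            ClientConfig: &apiextensionsv1.WebhookClientConfig{
                Service: &apiextensionsv1.ServiceReference{
                    Namespace: "crd-webhook-5997",
                    Name:      "e2e-test-crd-conversion-webhook",
                    Path:      &path,
                },
                CABundle: []byte("<PEM bundle from the server cert step>"),
            },
        },
    }
    fmt.Printf("%+v\n", conv)
}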
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:11:02.710: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 17 23:11:03.027: INFO: Waiting up to 5m0s for pod "pod-14fb86dc-1889-4c19-be17-220b36fbc9aa" in namespace "emptydir-2537" to be "success or failure"
Dec 17 23:11:03.042: INFO: Pod "pod-14fb86dc-1889-4c19-be17-220b36fbc9aa": Phase="Pending", Reason="", readiness=false. Elapsed: 13.656078ms
Dec 17 23:11:05.049: INFO: Pod "pod-14fb86dc-1889-4c19-be17-220b36fbc9aa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020721923s
Dec 17 23:11:07.068: INFO: Pod "pod-14fb86dc-1889-4c19-be17-220b36fbc9aa": Phase="Pending", Reason="", readiness=false. Elapsed: 4.039605232s
Dec 17 23:11:09.073: INFO: Pod "pod-14fb86dc-1889-4c19-be17-220b36fbc9aa": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045252904s
Dec 17 23:11:11.122: INFO: Pod "pod-14fb86dc-1889-4c19-be17-220b36fbc9aa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.093540318s
STEP: Saw pod success
Dec 17 23:11:11.122: INFO: Pod "pod-14fb86dc-1889-4c19-be17-220b36fbc9aa" satisfied condition "success or failure"
Dec 17 23:11:11.132: INFO: Trying to get logs from node jerma-node pod pod-14fb86dc-1889-4c19-be17-220b36fbc9aa container test-container: 
STEP: delete the pod
Dec 17 23:11:11.277: INFO: Waiting for pod pod-14fb86dc-1889-4c19-be17-220b36fbc9aa to disappear
Dec 17 23:11:11.285: INFO: Pod pod-14fb86dc-1889-4c19-be17-220b36fbc9aa no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:11:11.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2537" for this suite.
Dec 17 23:11:17.318: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:11:17.463: INFO: namespace emptydir-2537 deletion completed in 6.173570878s

• [SLOW TEST:14.754 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
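The "(non-root,0777,tmpfs)" case mounts a memory-backed emptyDir; the test container then writes content with 0777 permissions as a non-root user and verifies what it reads back. A sketch of the volume itself:

package main

// Sketch of a tmpfs-backed emptyDir volume (medium "Memory"),
// as mounted by the test above.
import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    vol := corev1.Volume{
        Name: "test-volume",
        VolumeSource: corev1.VolumeSource{
            EmptyDir: &corev1.EmptyDirVolumeSource{
                Medium: corev1.StorageMediumMemory, // tmpfs
            },
        },
    }
    fmt.Printf("%+v\n", vol)
}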
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:11:17.469: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-test-upd-019f8b75-2987-4024-93b9-986cc202446a
STEP: Creating the pod
STEP: Updating configmap configmap-test-upd-019f8b75-2987-4024-93b9-986cc202446a
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:12:39.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-1305" for this suite.
Dec 17 23:12:51.156: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:12:51.417: INFO: namespace configmap-1305 deletion completed in 12.314243s

• [SLOW TEST:93.948 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
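The long gap between "waiting to observe update in volume" and teardown is expected: configMap volume contents are refreshed on the kubelet's periodic sync rather than instantly. A sketch of the in-place update whose new value the already-running pod eventually sees; the ConfigMap name and namespace are from this run, the key/value are illustrative, and the signatures are current client-go:

package main

// Sketch of updating a ConfigMap in place; a pod mounting it as a
// volume sees the new value after the kubelet's next sync.
import (
    "context"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    ctx := context.Background()
    cmName, ns := "configmap-test-upd-019f8b75-2987-4024-93b9-986cc202446a", "configmap-1305"
    cm, err := cs.CoreV1().ConfigMaps(ns).Get(ctx, cmName, metav1.GetOptions{})
    if err != nil {
        panic(err)
    }
    cm.Data = map[string]string{"data-1": "value-2"} // key/value illustrative
    if _, err := cs.CoreV1().ConfigMaps(ns).Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
        panic(err)
    }
}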
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:12:51.420: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating projection with secret that has name projected-secret-test-7a4d066d-f05a-4675-81f1-49bc3de639a3
STEP: Creating a pod to test consume secrets
Dec 17 23:12:51.529: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-f444819a-d881-4136-a71c-17ddeac75388" in namespace "projected-1738" to be "success or failure"
Dec 17 23:12:51.538: INFO: Pod "pod-projected-secrets-f444819a-d881-4136-a71c-17ddeac75388": Phase="Pending", Reason="", readiness=false. Elapsed: 9.340293ms
Dec 17 23:12:53.551: INFO: Pod "pod-projected-secrets-f444819a-d881-4136-a71c-17ddeac75388": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021729385s
Dec 17 23:12:55.560: INFO: Pod "pod-projected-secrets-f444819a-d881-4136-a71c-17ddeac75388": Phase="Pending", Reason="", readiness=false. Elapsed: 4.03081466s
Dec 17 23:12:57.571: INFO: Pod "pod-projected-secrets-f444819a-d881-4136-a71c-17ddeac75388": Phase="Pending", Reason="", readiness=false. Elapsed: 6.041704681s
Dec 17 23:12:59.582: INFO: Pod "pod-projected-secrets-f444819a-d881-4136-a71c-17ddeac75388": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.053185189s
STEP: Saw pod success
Dec 17 23:12:59.583: INFO: Pod "pod-projected-secrets-f444819a-d881-4136-a71c-17ddeac75388" satisfied condition "success or failure"
Dec 17 23:12:59.589: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-f444819a-d881-4136-a71c-17ddeac75388 container projected-secret-volume-test: 
STEP: delete the pod
Dec 17 23:12:59.935: INFO: Waiting for pod pod-projected-secrets-f444819a-d881-4136-a71c-17ddeac75388 to disappear
Dec 17 23:12:59.988: INFO: Pod pod-projected-secrets-f444819a-d881-4136-a71c-17ddeac75388 no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:12:59.988: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-1738" for this suite.
Dec 17 23:13:06.033: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:13:06.125: INFO: namespace projected-1738 deletion completed in 6.128833507s

• [SLOW TEST:14.706 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
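Projected volumes source Secrets the same way they source ConfigMaps (see the sketch after the projected-configMap test earlier in this section), via a SecretProjection entry; the defaultMode variant later in this section simply adds the DefaultMode field shown there. A minimal sketch with an illustrative name:

package main

// Sketch of a projected-volume source backed by a Secret.
import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    src := corev1.VolumeProjection{
        Secret: &corev1.SecretProjection{
            LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test"},
        },
    }
    fmt.Printf("%+v\n", src)
}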
SSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:13:06.126: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Service
STEP: Ensuring resource quota status captures service creation
STEP: Deleting a Service
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:13:17.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3197" for this suite.
Dec 17 23:13:23.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:13:24.037: INFO: namespace resourcequota-3197 deletion completed in 6.502437567s

• [SLOW TEST:17.912 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a service. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
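A sketch of a ResourceQuota that counts Services, the object class exercised above: the quota's status usage rises when the Service is created and falls back once it is deleted, which is what the two "Ensuring resource quota status ..." steps verify. Name and limit are illustrative:

package main

// Sketch of a ResourceQuota capping the number of Services
// in a namespace. Name and limit are illustrative.
import (
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    rq := corev1.ResourceQuota{
        ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
        Spec: corev1.ResourceQuotaSpec{
            Hard: corev1.ResourceList{
                corev1.ResourceServices: resource.MustParse("10"),
            },
        },
    }
    fmt.Printf("%+v\n", rq)
}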
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Proxy server 
  should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:13:24.039: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should support --unix-socket=/path  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Starting the proxy
Dec 17 23:13:24.299: INFO: Asynchronously running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix604381198/test'
STEP: retrieving proxy /api/ output
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:13:24.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-2719" for this suite.
Dec 17 23:13:30.525: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:13:30.674: INFO: namespace kubectl-2719 deletion completed in 6.202710004s

• [SLOW TEST:6.636 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Proxy server
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1782
    should support --unix-socket=/path  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
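The "retrieving proxy /api/ output" step talks HTTP over the unix socket that kubectl proxy bound. A sketch of the same request in plain Go, dialing the socket path from this run; the host in the URL is ignored by the custom dialer:

package main

// Sketch of an HTTP request over the unix socket bound by
// `kubectl proxy --unix-socket=...`.
import (
    "context"
    "fmt"
    "io"
    "net"
    "net/http"
)

func main() {
    sock := "/tmp/kubectl-proxy-unix604381198/test"
    client := &http.Client{
        Transport: &http.Transport{
            // Route every request to the unix socket instead of TCP.
            DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
                return net.Dial("unix", sock)
            },
        },
    }
    resp, err := client.Get("http://localhost/api/")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body))
}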
SSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] [sig-node] Events 
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:13:30.677: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename events
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: retrieving the pod
Dec 17 23:13:39.013: INFO: &Pod{ObjectMeta:{send-events-c31f3e8d-72ad-44b2-9ff5-a26f11ff2724  events-6935 /api/v1/namespaces/events-6935/pods/send-events-c31f3e8d-72ad-44b2-9ff5-a26f11ff2724 d562826e-2be5-4ec9-b10f-2eb9865a0498 9157610 0 2019-12-17 23:13:30 +0000 UTC   map[name:foo time:966639742] map[] [] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ntgcc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ntgcc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.6,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ntgcc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:13:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:13:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:13:37 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:13:30 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.1,StartTime:2019-12-17 23:13:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-17 23:13:37 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.6,ImageID:docker-pullable://gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727,ContainerID:docker://58100ba3ad4f23ae5078c2804cd342aeb4b70390a2cfa10af86369174d9f516c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}

STEP: checking for scheduler event about the pod
Dec 17 23:13:41.023: INFO: Saw scheduler event for our pod.
STEP: checking for kubelet event about the pod
Dec 17 23:13:43.038: INFO: Saw kubelet event for our pod.
STEP: deleting the pod
[AfterEach] [k8s.io] [sig-node] Events
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:13:43.058: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "events-6935" for this suite.
Dec 17 23:14:27.127: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:14:27.264: INFO: namespace events-6935 deletion completed in 44.161165252s

• [SLOW TEST:56.587 seconds]
[k8s.io] [sig-node] Events
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should be sent by kubelets and the scheduler about pods scheduling and running  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
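The two "Saw ... event" checks filter the namespace's events by the involved pod and by source, once for the scheduler and once for the kubelet. A sketch of that query for the pod from this run; signatures are current client-go, and event field selectors support involvedObject.name and source:

package main

// Sketch of listing events for a pod, filtered by reporting source,
// as the scheduler/kubelet event checks above do.
import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    for _, source := range []string{"default-scheduler", "kubelet"} {
        events, err := cs.CoreV1().Events("events-6935").List(context.Background(), metav1.ListOptions{
            FieldSelector: "involvedObject.name=send-events-c31f3e8d-72ad-44b2-9ff5-a26f11ff2724,source=" + source,
        })
        if err != nil {
            panic(err)
        }
        fmt.Printf("%s events: %d\n", source, len(events.Items))
    }
}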
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected secret 
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:14:27.266: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating projection with secret that has name projected-secret-test-a3442b51-ca9e-436b-a2cb-24e8928b2bdf
STEP: Creating a pod to test consume secrets
Dec 17 23:14:27.341: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-c97254ab-6a28-40ee-927f-f3715d0ff56b" in namespace "projected-6306" to be "success or failure"
Dec 17 23:14:27.379: INFO: Pod "pod-projected-secrets-c97254ab-6a28-40ee-927f-f3715d0ff56b": Phase="Pending", Reason="", readiness=false. Elapsed: 37.733681ms
Dec 17 23:14:29.390: INFO: Pod "pod-projected-secrets-c97254ab-6a28-40ee-927f-f3715d0ff56b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048264008s
Dec 17 23:14:31.486: INFO: Pod "pod-projected-secrets-c97254ab-6a28-40ee-927f-f3715d0ff56b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.144142928s
Dec 17 23:14:33.501: INFO: Pod "pod-projected-secrets-c97254ab-6a28-40ee-927f-f3715d0ff56b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.158924282s
Dec 17 23:14:35.509: INFO: Pod "pod-projected-secrets-c97254ab-6a28-40ee-927f-f3715d0ff56b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.166983008s
STEP: Saw pod success
Dec 17 23:14:35.509: INFO: Pod "pod-projected-secrets-c97254ab-6a28-40ee-927f-f3715d0ff56b" satisfied condition "success or failure"
Dec 17 23:14:35.515: INFO: Trying to get logs from node jerma-node pod pod-projected-secrets-c97254ab-6a28-40ee-927f-f3715d0ff56b container projected-secret-volume-test: 
STEP: delete the pod
Dec 17 23:14:35.637: INFO: Waiting for pod pod-projected-secrets-c97254ab-6a28-40ee-927f-f3715d0ff56b to disappear
Dec 17 23:14:35.646: INFO: Pod pod-projected-secrets-c97254ab-6a28-40ee-927f-f3715d0ff56b no longer exists
[AfterEach] [sig-storage] Projected secret
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:14:35.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6306" for this suite.
Dec 17 23:14:41.681: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:14:41.920: INFO: namespace projected-6306 deletion completed in 6.267482385s

• [SLOW TEST:14.655 seconds]
[sig-storage] Projected secret
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:35
  should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:14:41.922: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 17 23:14:42.825: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 17 23:14:44.854: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221282, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221282, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221282, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221282, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 23:14:46.869: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221282, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221282, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221282, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221282, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 23:14:48.868: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221282, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221282, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221282, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712221282, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 23:14:51.971: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Registering the mutating pod webhook via the AdmissionRegistration API
STEP: create a pod that should be updated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:14:52.176: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-7873" for this suite.
Dec 17 23:15:04.252: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:15:04.375: INFO: namespace webhook-7873 deletion completed in 12.167415704s
STEP: Destroying namespace "webhook-7873-markers" for this suite.
Dec 17 23:15:10.465: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:15:10.557: INFO: namespace webhook-7873-markers deletion completed in 6.181108026s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:28.648 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate pod and apply defaults after mutation [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
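The sequence above (serve the webhook pod behind a Service, wait for endpoints, then register) ends in a MutatingWebhookConfiguration pointing the API server at the in-cluster service for pod CREATE requests. A hedged sketch using the admissionregistration.k8s.io/v1 types; a v1.16-era suite may well register through v1beta1 instead, and the webhook name, path, and empty CA bundle are placeholders:

```go
// Sketch: register a mutating webhook for pod CREATE requests.
// All names and the empty CA bundle are placeholders.
package main

import (
	"encoding/json"
	"fmt"

	admv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	fail := admv1.Fail
	none := admv1.SideEffectClassNone
	path := "/mutating-pods"
	cfg := admv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "sample-mutating-webhook"},
		Webhooks: []admv1.MutatingWebhook{{
			Name: "sample-webhook.example.com",
			ClientConfig: admv1.WebhookClientConfig{
				Service: &admv1.ServiceReference{
					Namespace: "webhook-ns", // the suite uses a per-test namespace
					Name:      "e2e-test-webhook",
					Path:      &path,
				},
				CABundle: []byte{}, // PEM CA that signed the webhook's serving cert
			},
			Rules: []admv1.RuleWithOperations{{
				Operations: []admv1.OperationType{admv1.Create},
				Rule: admv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods"},
				},
			}},
			FailurePolicy:           &fail,
			SideEffects:             &none,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}
```

The service name e2e-test-webhook matches the endpoint the log waits on; everything else is illustrative.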
SSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:15:10.571: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0777 on tmpfs
Dec 17 23:15:10.686: INFO: Waiting up to 5m0s for pod "pod-3c54ff16-fbd8-43a2-bafa-2c13225497ed" in namespace "emptydir-3835" to be "success or failure"
Dec 17 23:15:10.716: INFO: Pod "pod-3c54ff16-fbd8-43a2-bafa-2c13225497ed": Phase="Pending", Reason="", readiness=false. Elapsed: 29.57656ms
Dec 17 23:15:12.732: INFO: Pod "pod-3c54ff16-fbd8-43a2-bafa-2c13225497ed": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045580258s
Dec 17 23:15:14.828: INFO: Pod "pod-3c54ff16-fbd8-43a2-bafa-2c13225497ed": Phase="Pending", Reason="", readiness=false. Elapsed: 4.141780828s
Dec 17 23:15:16.834: INFO: Pod "pod-3c54ff16-fbd8-43a2-bafa-2c13225497ed": Phase="Pending", Reason="", readiness=false. Elapsed: 6.147847681s
Dec 17 23:15:18.841: INFO: Pod "pod-3c54ff16-fbd8-43a2-bafa-2c13225497ed": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.154832573s
STEP: Saw pod success
Dec 17 23:15:18.842: INFO: Pod "pod-3c54ff16-fbd8-43a2-bafa-2c13225497ed" satisfied condition "success or failure"
Dec 17 23:15:18.849: INFO: Trying to get logs from node jerma-node pod pod-3c54ff16-fbd8-43a2-bafa-2c13225497ed container test-container: 
STEP: delete the pod
Dec 17 23:15:18.940: INFO: Waiting for pod pod-3c54ff16-fbd8-43a2-bafa-2c13225497ed to disappear
Dec 17 23:15:18.952: INFO: Pod pod-3c54ff16-fbd8-43a2-bafa-2c13225497ed no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:15:18.952: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3835" for this suite.
Dec 17 23:15:24.984: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:15:25.094: INFO: namespace emptydir-3835 deletion completed in 6.137853759s

• [SLOW TEST:14.524 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
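The "(root,0777,tmpfs)" case reduces to a memory-backed emptyDir mounted into a root container that checks the 0777 permissions. A sketch under those assumptions (image and command are illustrative; the suite uses its own mount-test image):

```go
// Sketch: memory-medium (tmpfs) emptyDir inspected from a root container.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-tmpfs-check"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "scratch",
				VolumeSource: corev1.VolumeSource{
					EmptyDir: &corev1.EmptyDirVolumeSource{
						Medium: corev1.StorageMediumMemory, // tmpfs; "" would mean node-default storage
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "chmod 0777 /scratch && stat -c '%a %F' /scratch && mount | grep /scratch"},
				VolumeMounts: []corev1.VolumeMount{{Name: "scratch", MountPath: "/scratch"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```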
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:15:25.095: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
Dec 17 23:15:25.188: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 17 23:15:25.321: INFO: Waiting for terminating namespaces to be deleted...
Dec 17 23:15:25.328: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Dec 17 23:15:25.361: INFO: weave-net-srfjj from kube-system started at 2019-12-17 21:23:16 +0000 UTC (2 container statuses recorded)
Dec 17 23:15:25.361: INFO: 	Container weave ready: true, restart count 0
Dec 17 23:15:25.361: INFO: 	Container weave-npc ready: true, restart count 0
Dec 17 23:15:25.361: INFO: kube-proxy-jcjl4 from kube-system started at 2019-10-12 13:47:49 +0000 UTC (1 container statuses recorded)
Dec 17 23:15:25.361: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 17 23:15:25.361: INFO: 
Logging pods the kubelet thinks are on node jerma-server-4b75xjbddvit before test
Dec 17 23:15:25.392: INFO: coredns-5644d7b6d9-rqwzj from kube-system started at 2019-11-10 18:03:38 +0000 UTC (0 container statuses recorded)
Dec 17 23:15:25.393: INFO: weave-net-gsjjk from kube-system started at 2019-12-13 09:16:56 +0000 UTC (2 container statuses recorded)
Dec 17 23:15:25.393: INFO: 	Container weave ready: true, restart count 0
Dec 17 23:15:25.393: INFO: 	Container weave-npc ready: true, restart count 0
Dec 17 23:15:25.393: INFO: coredns-5644d7b6d9-9sj58 from kube-system started at 2019-12-14 15:12:12 +0000 UTC (1 container statuses recorded)
Dec 17 23:15:25.393: INFO: 	Container coredns ready: true, restart count 0
Dec 17 23:15:25.393: INFO: kube-scheduler-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:42 +0000 UTC (1 container statuses recorded)
Dec 17 23:15:25.393: INFO: 	Container kube-scheduler ready: true, restart count 11
Dec 17 23:15:25.393: INFO: kube-proxy-bdcvr from kube-system started at 2019-12-13 09:08:20 +0000 UTC (1 container statuses recorded)
Dec 17 23:15:25.393: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 17 23:15:25.393: INFO: coredns-5644d7b6d9-xvlxj from kube-system started at 2019-12-14 16:49:52 +0000 UTC (1 container statuses recorded)
Dec 17 23:15:25.393: INFO: 	Container coredns ready: true, restart count 0
Dec 17 23:15:25.393: INFO: etcd-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:37 +0000 UTC (1 container statuses recorded)
Dec 17 23:15:25.393: INFO: 	Container etcd ready: true, restart count 1
Dec 17 23:15:25.393: INFO: kube-controller-manager-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:40 +0000 UTC (1 container statuses recorded)
Dec 17 23:15:25.393: INFO: 	Container kube-controller-manager ready: true, restart count 8
Dec 17 23:15:25.393: INFO: kube-apiserver-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:38 +0000 UTC (1 container statuses recorded)
Dec 17 23:15:25.393: INFO: 	Container kube-apiserver ready: true, restart count 1
Dec 17 23:15:25.393: INFO: coredns-5644d7b6d9-n9kkw from kube-system started at 2019-11-10 16:39:08 +0000 UTC (0 container statuses recorded)
[It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-3d0384ae-a9ab-4e30-b5de-3f8444f7180a 95
STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled
STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled
STEP: removing the label kubernetes.io/e2e-3d0384ae-a9ab-4e30-b5de-3f8444f7180a off the node jerma-node
STEP: verifying the node doesn't have the label kubernetes.io/e2e-3d0384ae-a9ab-4e30-b5de-3f8444f7180a
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:20:43.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-858" for this suite.
Dec 17 23:20:57.970: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:20:58.110: INFO: namespace sched-pred-858 deletion completed in 14.246221392s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78

• [SLOW TEST:333.015 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
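The predicate being validated: a hostPort claim with hostIP 0.0.0.0 (serialized as the empty string) covers every address on the node, so a later pod asking for the same port and protocol on 127.0.0.1 of the same node cannot fit, and stays unscheduled. A small sketch of the two colliding declarations:

```go
// Sketch: the two colliding port requests from the predicate above. A
// hostIP of "" (i.e. 0.0.0.0) claims the port on every address, so a
// later 127.0.0.1 request for the same port/protocol cannot land on
// the same node.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	pod4 := corev1.ContainerPort{ContainerPort: 80, HostPort: 54322, HostIP: "", Protocol: corev1.ProtocolTCP}
	pod5 := corev1.ContainerPort{ContainerPort: 80, HostPort: 54322, HostIP: "127.0.0.1", Protocol: corev1.ProtocolTCP}
	fmt.Printf("pod4 wants %+v\npod5 wants %+v\n", pod4, pod5)
	fmt.Println("same node + same hostPort/protocol + overlapping hostIP => FailedScheduling for pod5")
}
```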
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:20:58.112: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 23:20:58.189: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27a6d357-4bc2-4f6d-8c97-ecd5dc33044f" in namespace "downward-api-6683" to be "success or failure"
Dec 17 23:20:58.206: INFO: Pod "downwardapi-volume-27a6d357-4bc2-4f6d-8c97-ecd5dc33044f": Phase="Pending", Reason="", readiness=false. Elapsed: 16.767669ms
Dec 17 23:21:00.216: INFO: Pod "downwardapi-volume-27a6d357-4bc2-4f6d-8c97-ecd5dc33044f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027307197s
Dec 17 23:21:02.315: INFO: Pod "downwardapi-volume-27a6d357-4bc2-4f6d-8c97-ecd5dc33044f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.125650447s
Dec 17 23:21:04.324: INFO: Pod "downwardapi-volume-27a6d357-4bc2-4f6d-8c97-ecd5dc33044f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.135335176s
Dec 17 23:21:06.333: INFO: Pod "downwardapi-volume-27a6d357-4bc2-4f6d-8c97-ecd5dc33044f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.144297357s
STEP: Saw pod success
Dec 17 23:21:06.334: INFO: Pod "downwardapi-volume-27a6d357-4bc2-4f6d-8c97-ecd5dc33044f" satisfied condition "success or failure"
Dec 17 23:21:06.336: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-27a6d357-4bc2-4f6d-8c97-ecd5dc33044f container client-container: 
STEP: delete the pod
Dec 17 23:21:06.400: INFO: Waiting for pod downwardapi-volume-27a6d357-4bc2-4f6d-8c97-ecd5dc33044f to disappear
Dec 17 23:21:06.405: INFO: Pod downwardapi-volume-27a6d357-4bc2-4f6d-8c97-ecd5dc33044f no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:21:06.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6683" for this suite.
Dec 17 23:21:12.552: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:21:12.719: INFO: namespace downward-api-6683 deletion completed in 6.308254049s

• [SLOW TEST:14.608 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
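"podname only" means a downward API volume with a single fieldRef item for metadata.name. A sketch of that pod; apart from the client-container name seen in the log, the names and image are illustrative:

```go
// Sketch: expose the pod's own name as a file via the downward API.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "podname-reader"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "podinfo",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "client-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "cat /etc/podinfo/podname"},
				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```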
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:21:12.720: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a Pod that fits quota
STEP: Ensuring ResourceQuota status captures the pod usage
STEP: Not allowing a pod to be created that exceeds remaining quota
STEP: Not allowing a pod to be created that exceeds remaining quota (validation on extended resources)
STEP: Ensuring a pod cannot update its resource requirements
STEP: Ensuring attempts to update pod resource requirements did not change quota usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:21:26.149: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5148" for this suite.
Dec 17 23:21:32.224: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:21:32.385: INFO: namespace resourcequota-5148 deletion completed in 6.228174308s

• [SLOW TEST:19.665 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a pod. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
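A sketch of the two objects this flow revolves around: a quota with hard limits, and a pod whose requests fit inside them so its usage is captured. The suite's actual numbers are not visible in the log, so the values below are illustrative:

```go
// Sketch: a quota and a pod whose requests fit inside it.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	quota := corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourcePods:           resource.MustParse("2"),
				corev1.ResourceRequestsCPU:    resource.MustParse("1"),
				corev1.ResourceRequestsMemory: resource.MustParse("1Gi"),
			},
		},
	}
	pod := corev1.Pod{ // 500m / 256Mi sits well inside the hard caps above
		ObjectMeta: metav1.ObjectMeta{Name: "fits-quota"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "app",
			Image: "busybox",
			Resources: corev1.ResourceRequirements{
				Requests: corev1.ResourceList{
					corev1.ResourceCPU:    resource.MustParse("500m"),
					corev1.ResourceMemory: resource.MustParse("256Mi"),
				},
			},
		}}},
	}
	for _, obj := range []interface{}{quota, pod} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}
```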
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command that always fails in a pod 
  should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:21:32.387: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[BeforeEach] when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81
[It] should have a terminated reason [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:21:40.508: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4559" for this suite.
Dec 17 23:21:46.597: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:21:46.766: INFO: namespace kubelet-test-4559 deletion completed in 6.249301912s

• [SLOW TEST:14.378 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when scheduling a busybox command that always fails in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:78
    should have a terminated reason [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
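The assertion behind this case is that a container whose command always fails ends up with a populated terminated state (a non-empty reason and a non-zero exit code). A hedged client-go sketch of reading that state; it assumes a recent client-go (context-taking methods, unlike the v1.16-era client this log was produced with), a reachable cluster, and placeholder pod/namespace names:

```go
// Sketch: read a container's terminated state the way the assertion
// above implies. Pod and namespace names are placeholders.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kubelet-test").Get(context.TODO(), "bin-false-pod", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, st := range pod.Status.ContainerStatuses {
		if t := st.State.Terminated; t != nil {
			// A container that always fails should report a non-empty
			// Reason (e.g. "Error") and a non-zero exit code.
			fmt.Printf("%s: reason=%q exitCode=%d\n", st.Name, t.Reason, t.ExitCode)
		}
	}
}
```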
SSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:21:46.766: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 17 23:21:46.890: INFO: Waiting up to 5m0s for pod "pod-a6c3f424-33f6-41e0-b27b-bc4625659c3d" in namespace "emptydir-2607" to be "success or failure"
Dec 17 23:21:46.913: INFO: Pod "pod-a6c3f424-33f6-41e0-b27b-bc4625659c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 22.00906ms
Dec 17 23:21:48.925: INFO: Pod "pod-a6c3f424-33f6-41e0-b27b-bc4625659c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034628945s
Dec 17 23:21:50.935: INFO: Pod "pod-a6c3f424-33f6-41e0-b27b-bc4625659c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.044440551s
Dec 17 23:21:52.947: INFO: Pod "pod-a6c3f424-33f6-41e0-b27b-bc4625659c3d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.056074775s
Dec 17 23:21:54.958: INFO: Pod "pod-a6c3f424-33f6-41e0-b27b-bc4625659c3d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.067624196s
STEP: Saw pod success
Dec 17 23:21:54.959: INFO: Pod "pod-a6c3f424-33f6-41e0-b27b-bc4625659c3d" satisfied condition "success or failure"
Dec 17 23:21:54.963: INFO: Trying to get logs from node jerma-node pod pod-a6c3f424-33f6-41e0-b27b-bc4625659c3d container test-container: 
STEP: delete the pod
Dec 17 23:21:55.042: INFO: Waiting for pod pod-a6c3f424-33f6-41e0-b27b-bc4625659c3d to disappear
Dec 17 23:21:55.050: INFO: Pod pod-a6c3f424-33f6-41e0-b27b-bc4625659c3d no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:21:55.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-2607" for this suite.
Dec 17 23:22:01.115: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:22:01.251: INFO: namespace emptydir-2607 deletion completed in 6.194780529s

• [SLOW TEST:14.485 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
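Spec-wise this differs from the (root,0777,tmpfs) case above only in the emptyDir medium; a minimal sketch of that one field:

```go
// Sketch: the one field that differs between the two emptyDir cases.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	tmpfs := corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory} // RAM-backed
	def := corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault}  // "" => node's default storage
	fmt.Printf("tmpfs case: %+v\ndefault case: %+v\n", tmpfs, def)
}
```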
SSSSSSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:22:01.251: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 23:22:01.443: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0333fbe2-34c0-4b5f-b794-53ecd35a47ca" in namespace "downward-api-9481" to be "success or failure"
Dec 17 23:22:01.524: INFO: Pod "downwardapi-volume-0333fbe2-34c0-4b5f-b794-53ecd35a47ca": Phase="Pending", Reason="", readiness=false. Elapsed: 80.130971ms
Dec 17 23:22:03.537: INFO: Pod "downwardapi-volume-0333fbe2-34c0-4b5f-b794-53ecd35a47ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.093581432s
Dec 17 23:22:05.556: INFO: Pod "downwardapi-volume-0333fbe2-34c0-4b5f-b794-53ecd35a47ca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.112258445s
Dec 17 23:22:07.571: INFO: Pod "downwardapi-volume-0333fbe2-34c0-4b5f-b794-53ecd35a47ca": Phase="Pending", Reason="", readiness=false. Elapsed: 6.128049608s
Dec 17 23:22:09.585: INFO: Pod "downwardapi-volume-0333fbe2-34c0-4b5f-b794-53ecd35a47ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.141974852s
STEP: Saw pod success
Dec 17 23:22:09.586: INFO: Pod "downwardapi-volume-0333fbe2-34c0-4b5f-b794-53ecd35a47ca" satisfied condition "success or failure"
Dec 17 23:22:09.589: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-0333fbe2-34c0-4b5f-b794-53ecd35a47ca container client-container: 
STEP: delete the pod
Dec 17 23:22:09.756: INFO: Waiting for pod downwardapi-volume-0333fbe2-34c0-4b5f-b794-53ecd35a47ca to disappear
Dec 17 23:22:09.777: INFO: Pod downwardapi-volume-0333fbe2-34c0-4b5f-b794-53ecd35a47ca no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:22:09.777: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9481" for this suite.
Dec 17 23:22:15.818: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:22:15.996: INFO: namespace downward-api-9481 deletion completed in 6.209272243s

• [SLOW TEST:14.744 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
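Unlike the podname case earlier, the memory request travels through a resourceFieldRef, which names a container and takes a divisor. A sketch of just that volume item (names illustrative):

```go
// Sketch: surface the container's own memory request as a file, the
// resourceFieldRef counterpart of the fieldRef example further up.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	item := corev1.DownwardAPIVolumeFile{
		Path: "memory_request",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container", // must name a container in the same pod
			Resource:      "requests.memory",
			Divisor:       resource.MustParse("1Mi"), // report the value in Mi
		},
	}
	b, _ := json.MarshalIndent(item, "", "  ")
	fmt.Println(string(b))
}
```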
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:22:15.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
Dec 17 23:22:16.064: INFO: PodSpec: initContainers in spec.initContainers
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:22:28.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-1203" for this suite.
Dec 17 23:22:36.478: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:22:36.785: INFO: namespace init-container-1203 deletion completed in 8.39195506s

• [SLOW TEST:20.788 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
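The contract being tested: with RestartPolicy Never, a failing init container is not retried, the app containers never start, and the pod goes Failed. A sketch of such a pod (names and image illustrative):

```go
// Sketch: a RestartPolicy=Never pod whose failing init container keeps
// the app container from ever starting and fails the pod outright.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-fails"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init1",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // exits 1 => pod goes Failed, no retry
			}},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"sleep", "3600"}, // never runs
			}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```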
SSSSSSSSS
------------------------------
[sig-apps] ReplicaSet 
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:22:36.789: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replicaset
STEP: Waiting for a default service account to be provisioned in namespace
[It] should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 23:22:36.905: INFO: Creating ReplicaSet my-hostname-basic-8eaece42-8c4d-4666-b2bf-1468a2fef96d
Dec 17 23:22:36.939: INFO: Pod name my-hostname-basic-8eaece42-8c4d-4666-b2bf-1468a2fef96d: Found 1 pods out of 1
Dec 17 23:22:36.939: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-8eaece42-8c4d-4666-b2bf-1468a2fef96d" is running
Dec 17 23:22:44.969: INFO: Pod "my-hostname-basic-8eaece42-8c4d-4666-b2bf-1468a2fef96d-bftwk" is running (conditions: [])
Dec 17 23:22:44.970: INFO: Trying to dial the pod
Dec 17 23:22:49.995: INFO: Controller my-hostname-basic-8eaece42-8c4d-4666-b2bf-1468a2fef96d: Got expected result from replica 1 [my-hostname-basic-8eaece42-8c4d-4666-b2bf-1468a2fef96d-bftwk]: "my-hostname-basic-8eaece42-8c4d-4666-b2bf-1468a2fef96d-bftwk", 1 of 1 required successes so far
[AfterEach] [sig-apps] ReplicaSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:22:49.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replicaset-6351" for this suite.
Dec 17 23:22:56.021: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:22:56.118: INFO: namespace replicaset-6351 deletion completed in 6.115722223s

• [SLOW TEST:19.329 seconds]
[sig-apps] ReplicaSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should serve a basic image on each replica with a public image  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
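A sketch of a ReplicaSet shaped like my-hostname-basic-*: one replica of a container that echoes its pod (host)name over HTTP, which is what the "Trying to dial the pod" step verifies. The image is a placeholder; the suite ships its own hostname-serving test image:

```go
// Sketch: a one-replica ReplicaSet whose pod serves its own hostname.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"name": "my-hostname-basic"}
	rs := appsv1.ReplicaSet{
		ObjectMeta: metav1.ObjectMeta{Name: "my-hostname-basic"},
		Spec: appsv1.ReplicaSetSpec{
			Replicas: int32Ptr(1),
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "my-hostname-basic",
					Image: "serve-hostname", // placeholder; the suite uses its own hostname-echo image
					Ports: []corev1.ContainerPort{{ContainerPort: 9376}},
				}}},
			},
		},
	}
	b, _ := json.MarshalIndent(rs, "", "  ")
	fmt.Println(string(b))
}
```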
[k8s.io] Pods 
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:22:56.119: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
[It] should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 23:23:06.374: INFO: Waiting up to 5m0s for pod "client-envvars-4282ccb6-f4ad-450a-a503-e0ffc2c3436c" in namespace "pods-6698" to be "success or failure"
Dec 17 23:23:06.383: INFO: Pod "client-envvars-4282ccb6-f4ad-450a-a503-e0ffc2c3436c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.683768ms
Dec 17 23:23:08.403: INFO: Pod "client-envvars-4282ccb6-f4ad-450a-a503-e0ffc2c3436c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028680261s
Dec 17 23:23:10.412: INFO: Pod "client-envvars-4282ccb6-f4ad-450a-a503-e0ffc2c3436c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.037183107s
Dec 17 23:23:12.421: INFO: Pod "client-envvars-4282ccb6-f4ad-450a-a503-e0ffc2c3436c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.046278768s
Dec 17 23:23:14.436: INFO: Pod "client-envvars-4282ccb6-f4ad-450a-a503-e0ffc2c3436c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061165435s
STEP: Saw pod success
Dec 17 23:23:14.436: INFO: Pod "client-envvars-4282ccb6-f4ad-450a-a503-e0ffc2c3436c" satisfied condition "success or failure"
Dec 17 23:23:14.444: INFO: Trying to get logs from node jerma-node pod client-envvars-4282ccb6-f4ad-450a-a503-e0ffc2c3436c container env3cont: 
STEP: delete the pod
Dec 17 23:23:14.580: INFO: Waiting for pod client-envvars-4282ccb6-f4ad-450a-a503-e0ffc2c3436c to disappear
Dec 17 23:23:14.609: INFO: Pod client-envvars-4282ccb6-f4ad-450a-a503-e0ffc2c3436c no longer exists
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:23:14.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-6698" for this suite.
Dec 17 23:23:48.668: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:23:48.829: INFO: namespace pods-6698 deletion completed in 34.208737077s

• [SLOW TEST:52.711 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should contain environment variables for services [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
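The gap between namespace setup at 23:22:56 and the wait starting at 23:23:06 is presumably a backing server pod and service being created first: the kubelet only injects service environment variables into containers that start after the service exists. A sketch of what the client container effectively does, for an illustrative service named fooservice:

```go
// Sketch: dump the kubelet-injected service variables. For a service
// named "fooservice" (illustrative) these look like
// FOOSERVICE_SERVICE_HOST and FOOSERVICE_SERVICE_PORT.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	for _, kv := range os.Environ() {
		if strings.HasPrefix(kv, "FOOSERVICE_") {
			fmt.Println(kv)
		}
	}
}
```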
SSSSSS
------------------------------
[k8s.io] Probing container 
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:23:48.830: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
[It] should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod busybox-2f1c663e-8263-4554-a05c-cc41e54f2a69 in namespace container-probe-1559
Dec 17 23:23:56.920: INFO: Started pod busybox-2f1c663e-8263-4554-a05c-cc41e54f2a69 in namespace container-probe-1559
STEP: checking the pod's current state and verifying that restartCount is present
Dec 17 23:23:56.924: INFO: Initial restart count of pod busybox-2f1c663e-8263-4554-a05c-cc41e54f2a69 is 0
Dec 17 23:24:51.937: INFO: Restart count of pod container-probe-1559/busybox-2f1c663e-8263-4554-a05c-cc41e54f2a69 is now 1 (55.013543386s elapsed)
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:24:52.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-1559" for this suite.
Dec 17 23:24:58.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:24:58.311: INFO: namespace container-probe-1559 deletion completed in 6.21099126s

• [SLOW TEST:69.482 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should be restarted with an exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
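The restart count going 0 to 1 after roughly 55 seconds is the probe doing its job: the container deletes its own health file partway through, the exec probe starts failing, and the kubelet restarts the container once the failure threshold is crossed. A sketch of that pattern; timings are illustrative, and note the embedded field is named ProbeHandler in recent k8s.io/api releases (it was Handler in the v1.16 era of this log):

```go
// Sketch: the "cat /tmp/health" liveness pattern.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "liveness-exec"},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "busybox",
			Image: "busybox",
			// Healthy for 10s, then the probe file disappears and the
			// kubelet restarts the container after enough failures.
			Command: []string{"sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
			LivenessProbe: &corev1.Probe{
				ProbeHandler: corev1.ProbeHandler{
					Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
				},
				InitialDelaySeconds: 5,
				PeriodSeconds:       5,
				FailureThreshold:    3,
			},
		}}},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```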
SSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:24:58.313: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77
STEP: Creating service test in namespace statefulset-3103
[It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Initializing watcher for selector baz=blah,foo=bar
STEP: Creating stateful set ss in namespace statefulset-3103
STEP: Waiting until all stateful set ss replicas are running in namespace statefulset-3103
Dec 17 23:24:58.485: INFO: Found 0 stateful pods, waiting for 1
Dec 17 23:25:08.503: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod
Dec 17 23:25:08.511: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3103 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 17 23:25:10.907: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 17 23:25:10.907: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 17 23:25:10.907: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 17 23:25:10.918: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true
Dec 17 23:25:20.933: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 17 23:25:20.933: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 23:25:20.968: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999998903s
Dec 17 23:25:21.989: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.985112626s
Dec 17 23:25:22.998: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.96457168s
Dec 17 23:25:24.015: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.955467647s
Dec 17 23:25:25.027: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.937920302s
Dec 17 23:25:26.036: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.926457694s
Dec 17 23:25:27.045: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.917485357s
Dec 17 23:25:28.054: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.907936533s
Dec 17 23:25:29.062: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.899105767s
Dec 17 23:25:30.071: INFO: Verifying statefulset ss doesn't scale past 1 for another 891.019386ms
STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-3103
Dec 17 23:25:31.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3103 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 17 23:25:31.548: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Dec 17 23:25:31.548: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 17 23:25:31.548: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 17 23:25:31.557: INFO: Found 1 stateful pods, waiting for 3
Dec 17 23:25:41.570: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 23:25:41.570: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 23:25:41.570: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Pending - Ready=false
Dec 17 23:25:51.567: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 23:25:51.567: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true
Dec 17 23:25:51.567: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
Dec 17 23:25:51.581: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3103 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 17 23:25:52.056: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 17 23:25:52.057: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 17 23:25:52.057: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 17 23:25:52.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3103 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 17 23:25:52.629: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 17 23:25:52.630: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 17 23:25:52.630: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 17 23:25:52.630: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3103 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
Dec 17 23:25:53.039: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n"
Dec 17 23:25:53.039: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
Dec 17 23:25:53.039: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'

Dec 17 23:25:53.040: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 23:25:53.048: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
Dec 17 23:26:03.093: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
Dec 17 23:26:03.093: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
Dec 17 23:26:03.093: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
Dec 17 23:26:03.140: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999642s
Dec 17 23:26:04.149: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.96154667s
Dec 17 23:26:05.164: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.951739868s
Dec 17 23:26:06.173: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.937328538s
Dec 17 23:26:07.183: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.927680326s
Dec 17 23:26:08.195: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.917648929s
Dec 17 23:26:09.208: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.906057559s
Dec 17 23:26:10.221: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.89331853s
Dec 17 23:26:11.232: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.880482049s
Dec 17 23:26:12.241: INFO: Verifying statefulset ss doesn't scale past 3 for another 868.514632ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods are running in namespace statefulset-3103
Dec 17 23:26:13.261: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3103 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 17 23:26:13.740: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Dec 17 23:26:13.741: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 17 23:26:13.741: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 17 23:26:13.742: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3103 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 17 23:26:14.133: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Dec 17 23:26:14.133: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 17 23:26:14.133: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 17 23:26:14.133: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-3103 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
Dec 17 23:26:14.496: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n"
Dec 17 23:26:14.497: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
Dec 17 23:26:14.497: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'

Dec 17 23:26:14.497: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
Dec 17 23:26:34.571: INFO: Deleting all statefulset in ns statefulset-3103
Dec 17 23:26:34.636: INFO: Scaling statefulset ss to 0
Dec 17 23:26:34.657: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 23:26:34.660: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:26:34.680: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-3103" for this suite.
Dec 17 23:26:40.716: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:26:40.897: INFO: namespace statefulset-3103 deletion completed in 6.211426036s

• [SLOW TEST:102.584 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
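All of the ordering above comes from the default OrderedReady pod management policy: scale-up creates ss-0, ss-1, ss-2 one at a time, each gated on the previous pod being Ready; scale-down proceeds in reverse; and an unready pod (here induced by moving index.html out of httpd's docroot so the readiness check fails) halts progress in either direction. A sketch of a StatefulSet in that shape, reusing the baz=blah,foo=bar selector and the "test" service name from the log; the httpd image tag and probe are illustrative:

```go
// Sketch: a StatefulSet with the (default) OrderedReady policy that
// gives the ordered, readiness-gated scaling verified above.
package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	labels := map[string]string{"baz": "blah", "foo": "bar"} // selector from the watcher above
	ss := appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:            int32Ptr(3),
			ServiceName:         "test", // headless governing service
			Selector:            &metav1.LabelSelector{MatchLabels: labels},
			PodManagementPolicy: appsv1.OrderedReadyPodManagement, // the default; Parallel would drop the ordering
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{Containers: []corev1.Container{{
					Name:  "webserver",
					Image: "httpd:2.4-alpine", // illustrative; the pods clearly run Apache httpd
					// Readiness rides on index.html being servable, which
					// is what the mv dance in the log toggles.
					ReadinessProbe: &corev1.Probe{
						ProbeHandler: corev1.ProbeHandler{
							HTTPGet: &corev1.HTTPGetAction{Path: "/index.html", Port: intstr.FromInt(80)},
						},
					},
				}}},
			},
		},
	}
	b, _ := json.MarshalIndent(ss, "", "  ")
	fmt.Println(string(b))
}
```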
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:26:40.899: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name cm-test-opt-del-6debecc3-f717-4440-9b5c-ae7ac6db181a
STEP: Creating configMap with name cm-test-opt-upd-bdc7e119-53c5-4bb4-aa76-94d7a70ff8a9
STEP: Creating the pod
STEP: Deleting configmap cm-test-opt-del-6debecc3-f717-4440-9b5c-ae7ac6db181a
STEP: Updating configmap cm-test-opt-upd-bdc7e119-53c5-4bb4-aa76-94d7a70ff8a9
STEP: Creating configMap with name cm-test-opt-create-ba79c00f-eff6-4f43-888b-b270de7d71c6
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:28:04.733: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-3073" for this suite.
Dec 17 23:28:16.769: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:28:16.932: INFO: namespace projected-3073 deletion completed in 12.191409746s

• [SLOW TEST:96.033 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
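The "optional" in this case lives on the ConfigMap projection: with Optional set, the pod starts even while the referenced ConfigMap is absent, and the kubelet later folds create/update/delete events into the mounted volume, which is what the "waiting to observe update in volume" step watches for. A sketch of one such projection source (the name is illustrative; the suite appends a UUID):

```go
// Sketch: an optional ConfigMap projection; a missing optional
// ConfigMap is tolerated instead of blocking the volume.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	src := corev1.VolumeProjection{
		ConfigMap: &corev1.ConfigMapProjection{
			LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-create"},
			Optional:             &optional, // pod still starts if the ConfigMap doesn't exist yet
		},
	}
	b, _ := json.MarshalIndent(src, "", "  ")
	fmt.Println(string(b))
}
```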
SSSSSSSSSS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] 
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:28:16.933: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:87
Dec 17 23:28:17.098: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
Dec 17 23:28:17.115: INFO: Waiting for terminating namespaces to be deleted...
Dec 17 23:28:17.120: INFO: 
Logging pods the kubelet thinks are on node jerma-node before test
Dec 17 23:28:17.159: INFO: weave-net-srfjj from kube-system started at 2019-12-17 21:23:16 +0000 UTC (2 container statuses recorded)
Dec 17 23:28:17.159: INFO: 	Container weave ready: true, restart count 0
Dec 17 23:28:17.159: INFO: 	Container weave-npc ready: true, restart count 0
Dec 17 23:28:17.159: INFO: kube-proxy-jcjl4 from kube-system started at 2019-10-12 13:47:49 +0000 UTC (1 container statuses recorded)
Dec 17 23:28:17.159: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 17 23:28:17.159: INFO: 
Logging pods the kubelet thinks are on node jerma-server-4b75xjbddvit before test
Dec 17 23:28:17.178: INFO: coredns-5644d7b6d9-xvlxj from kube-system started at 2019-12-14 16:49:52 +0000 UTC (1 container statuses recorded)
Dec 17 23:28:17.178: INFO: 	Container coredns ready: true, restart count 0
Dec 17 23:28:17.178: INFO: etcd-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:37 +0000 UTC (1 container statuses recorded)
Dec 17 23:28:17.178: INFO: 	Container etcd ready: true, restart count 1
Dec 17 23:28:17.178: INFO: kube-controller-manager-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:40 +0000 UTC (1 container statuses recorded)
Dec 17 23:28:17.178: INFO: 	Container kube-controller-manager ready: true, restart count 8
Dec 17 23:28:17.178: INFO: coredns-5644d7b6d9-n9kkw from kube-system started at 2019-11-10 16:39:08 +0000 UTC (0 container statuses recorded)
Dec 17 23:28:17.178: INFO: kube-apiserver-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:38 +0000 UTC (1 container statuses recorded)
Dec 17 23:28:17.178: INFO: 	Container kube-apiserver ready: true, restart count 1
Dec 17 23:28:17.178: INFO: weave-net-gsjjk from kube-system started at 2019-12-13 09:16:56 +0000 UTC (2 container statuses recorded)
Dec 17 23:28:17.178: INFO: 	Container weave ready: true, restart count 0
Dec 17 23:28:17.178: INFO: 	Container weave-npc ready: true, restart count 0
Dec 17 23:28:17.178: INFO: coredns-5644d7b6d9-rqwzj from kube-system started at 2019-11-10 18:03:38 +0000 UTC (0 container statuses recorded)
Dec 17 23:28:17.178: INFO: coredns-5644d7b6d9-9sj58 from kube-system started at 2019-12-14 15:12:12 +0000 UTC (1 container statuses recorded)
Dec 17 23:28:17.178: INFO: 	Container coredns ready: true, restart count 0
Dec 17 23:28:17.178: INFO: kube-proxy-bdcvr from kube-system started at 2019-12-13 09:08:20 +0000 UTC (1 container statuses recorded)
Dec 17 23:28:17.178: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 17 23:28:17.178: INFO: kube-scheduler-jerma-server-4b75xjbddvit from kube-system started at 2019-10-12 13:28:42 +0000 UTC (1 container statuses recorded)
Dec 17 23:28:17.178: INFO: 	Container kube-scheduler ready: true, restart count 11
[It] validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Trying to schedule Pod with nonempty NodeSelector.
STEP: Considering event: 
Type = [Warning], Name = [restricted-pod.15e14c5abf564ccd], Reason = [FailedScheduling], Message = [0/2 nodes are available: 2 node(s) didn't match node selector.]
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:28:18.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7441" for this suite.
Dec 17 23:28:24.238: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:28:24.362: INFO: namespace sched-pred-7441 deletion completed in 6.147402964s
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:78

• [SLOW TEST:7.429 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:23
  validates that NodeSelector is respected if not matching  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
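The negative case is simply a pod whose nodeSelector matches no node label, which yields exactly the FailedScheduling event recorded above. A sketch (label key and value illustrative):

```go
// Sketch: a pod whose NodeSelector matches no node, reproducing the
// "0/2 nodes are available" FailedScheduling event.
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			NodeSelector: map[string]string{"label": "nonexistent-value"},
			Containers:   []corev1.Container{{Name: "app", Image: "busybox"}},
		},
	}
	b, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(b))
}
```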
SSSSSSSSSS
------------------------------
[sig-apps] Daemon set [Serial] 
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:28:24.363: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:102
[It] should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
Dec 17 23:28:24.648: INFO: Number of nodes with available pods: 0
Dec 17 23:28:24.648: INFO: Node jerma-node is running more than one daemon pod
Dec 17 23:28:25.681: INFO: Number of nodes with available pods: 0
Dec 17 23:28:25.681: INFO: Node jerma-node is running more than one daemon pod
Dec 17 23:28:27.839: INFO: Number of nodes with available pods: 0
Dec 17 23:28:27.839: INFO: Node jerma-node is running more than one daemon pod
Dec 17 23:28:28.661: INFO: Number of nodes with available pods: 0
Dec 17 23:28:28.661: INFO: Node jerma-node is running more than one daemon pod
Dec 17 23:28:29.673: INFO: Number of nodes with available pods: 0
Dec 17 23:28:29.673: INFO: Node jerma-node is running more than one daemon pod
Dec 17 23:28:31.251: INFO: Number of nodes with available pods: 0
Dec 17 23:28:31.252: INFO: Node jerma-node is running more than one daemon pod
Dec 17 23:28:31.701: INFO: Number of nodes with available pods: 0
Dec 17 23:28:31.702: INFO: Node jerma-node is running more than one daemon pod
Dec 17 23:28:32.662: INFO: Number of nodes with available pods: 0
Dec 17 23:28:32.662: INFO: Node jerma-node is running more than one daemon pod
Dec 17 23:28:35.244: INFO: Number of nodes with available pods: 0
Dec 17 23:28:35.244: INFO: Node jerma-node is running more than one daemon pod
Dec 17 23:28:35.754: INFO: Number of nodes with available pods: 0
Dec 17 23:28:35.754: INFO: Node jerma-node is running more than one daemon pod
Dec 17 23:28:36.671: INFO: Number of nodes with available pods: 0
Dec 17 23:28:36.672: INFO: Node jerma-node is running more than one daemon pod
Dec 17 23:28:37.672: INFO: Number of nodes with available pods: 1
Dec 17 23:28:37.672: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:38.674: INFO: Number of nodes with available pods: 2
Dec 17 23:28:38.675: INFO: Number of running nodes: 2, number of available pods: 2
STEP: Stop a daemon pod, check that the daemon pod is revived.
Dec 17 23:28:38.703: INFO: Number of nodes with available pods: 1
Dec 17 23:28:38.703: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:39.737: INFO: Number of nodes with available pods: 1
Dec 17 23:28:39.737: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:40.732: INFO: Number of nodes with available pods: 1
Dec 17 23:28:40.732: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:41.722: INFO: Number of nodes with available pods: 1
Dec 17 23:28:41.723: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:42.722: INFO: Number of nodes with available pods: 1
Dec 17 23:28:42.723: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:43.729: INFO: Number of nodes with available pods: 1
Dec 17 23:28:43.729: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:44.720: INFO: Number of nodes with available pods: 1
Dec 17 23:28:44.720: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:45.721: INFO: Number of nodes with available pods: 1
Dec 17 23:28:45.721: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:46.725: INFO: Number of nodes with available pods: 1
Dec 17 23:28:46.726: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:47.724: INFO: Number of nodes with available pods: 1
Dec 17 23:28:47.724: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:48.723: INFO: Number of nodes with available pods: 1
Dec 17 23:28:48.724: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:49.722: INFO: Number of nodes with available pods: 1
Dec 17 23:28:49.723: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:50.721: INFO: Number of nodes with available pods: 1
Dec 17 23:28:50.722: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:51.717: INFO: Number of nodes with available pods: 1
Dec 17 23:28:51.717: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:52.717: INFO: Number of nodes with available pods: 1
Dec 17 23:28:52.717: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:53.722: INFO: Number of nodes with available pods: 1
Dec 17 23:28:53.722: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:54.728: INFO: Number of nodes with available pods: 1
Dec 17 23:28:54.728: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:55.731: INFO: Number of nodes with available pods: 1
Dec 17 23:28:55.731: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:56.828: INFO: Number of nodes with available pods: 1
Dec 17 23:28:56.828: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:58.218: INFO: Number of nodes with available pods: 1
Dec 17 23:28:58.218: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:58.718: INFO: Number of nodes with available pods: 1
Dec 17 23:28:58.718: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:28:59.796: INFO: Number of nodes with available pods: 1
Dec 17 23:28:59.796: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:29:00.716: INFO: Number of nodes with available pods: 1
Dec 17 23:29:00.717: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:29:01.782: INFO: Number of nodes with available pods: 1
Dec 17 23:29:01.782: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:29:02.725: INFO: Number of nodes with available pods: 1
Dec 17 23:29:02.725: INFO: Node jerma-server-4b75xjbddvit is running more than one daemon pod
Dec 17 23:29:03.733: INFO: Number of nodes with available pods: 2
Dec 17 23:29:03.734: INFO: Number of running nodes: 2, number of available pods: 2
[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:68
STEP: Deleting DaemonSet "daemon-set"
STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2811, will wait for the garbage collector to delete the pods
Dec 17 23:29:03.806: INFO: Deleting DaemonSet.extensions daemon-set took: 14.158922ms
Dec 17 23:29:04.108: INFO: Terminating DaemonSet.extensions daemon-set pods took: 301.207581ms
Dec 17 23:29:16.722: INFO: Number of nodes with available pods: 0
Dec 17 23:29:16.722: INFO: Number of running nodes: 0, number of available pods: 0
Dec 17 23:29:16.726: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2811/daemonsets","resourceVersion":"9159847"},"items":null}

Dec 17 23:29:16.730: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2811/pods","resourceVersion":"9159847"},"items":null}

[AfterEach] [sig-apps] Daemon set [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:29:16.745: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "daemonsets-2811" for this suite.
Dec 17 23:29:22.811: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:29:22.968: INFO: namespace daemonsets-2811 deletion completed in 6.203443871s

• [SLOW TEST:58.606 seconds]
[sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should run and stop simple daemon [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
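The behaviour this test exercises, one pod per eligible node plus replacement when a pod is deleted, comes from the DaemonSet controller itself. A minimal sketch of an equivalent DaemonSet built with client-go, using hypothetical names and image, might be:

    package main

    import (
    	"context"

    	appsv1 "k8s.io/api/apps/v1"
    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	labels := map[string]string{"name": "daemon-set"}
    	ds := &appsv1.DaemonSet{
    		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: "default"},
    		Spec: appsv1.DaemonSetSpec{
    			// The selector must match the pod template's labels.
    			Selector: &metav1.LabelSelector{MatchLabels: labels},
    			Template: corev1.PodTemplateSpec{
    				ObjectMeta: metav1.ObjectMeta{Labels: labels},
    				Spec: corev1.PodSpec{
    					Containers: []corev1.Container{{
    						Name:  "app",
    						Image: "k8s.gcr.io/pause:3.1",
    					}},
    				},
    			},
    		},
    	}
    	// The controller places one replica on every schedulable node and
    	// recreates any replica that is deleted, which is what the log above
    	// verifies by stopping a daemon pod and waiting for it to be revived.
    	if _, err := client.AppsV1().DaemonSets("default").Create(context.TODO(), ds, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }
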
SSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:29:22.970: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name secret-test-025f31ac-9802-464a-ac32-8d746a0f2b41
STEP: Creating a pod to test consume secrets
Dec 17 23:29:23.062: INFO: Waiting up to 5m0s for pod "pod-secrets-2af92f13-1c71-4921-8700-2861a38ec250" in namespace "secrets-3573" to be "success or failure"
Dec 17 23:29:23.071: INFO: Pod "pod-secrets-2af92f13-1c71-4921-8700-2861a38ec250": Phase="Pending", Reason="", readiness=false. Elapsed: 8.247244ms
Dec 17 23:29:25.084: INFO: Pod "pod-secrets-2af92f13-1c71-4921-8700-2861a38ec250": Phase="Pending", Reason="", readiness=false. Elapsed: 2.02155807s
Dec 17 23:29:27.096: INFO: Pod "pod-secrets-2af92f13-1c71-4921-8700-2861a38ec250": Phase="Pending", Reason="", readiness=false. Elapsed: 4.033362838s
Dec 17 23:29:29.108: INFO: Pod "pod-secrets-2af92f13-1c71-4921-8700-2861a38ec250": Phase="Pending", Reason="", readiness=false. Elapsed: 6.045670813s
Dec 17 23:29:31.118: INFO: Pod "pod-secrets-2af92f13-1c71-4921-8700-2861a38ec250": Phase="Pending", Reason="", readiness=false. Elapsed: 8.05557266s
Dec 17 23:29:33.130: INFO: Pod "pod-secrets-2af92f13-1c71-4921-8700-2861a38ec250": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.06751312s
STEP: Saw pod success
Dec 17 23:29:33.130: INFO: Pod "pod-secrets-2af92f13-1c71-4921-8700-2861a38ec250" satisfied condition "success or failure"
Dec 17 23:29:33.134: INFO: Trying to get logs from node jerma-node pod pod-secrets-2af92f13-1c71-4921-8700-2861a38ec250 container secret-env-test: 
STEP: delete the pod
Dec 17 23:29:33.203: INFO: Waiting for pod pod-secrets-2af92f13-1c71-4921-8700-2861a38ec250 to disappear
Dec 17 23:29:33.208: INFO: Pod pod-secrets-2af92f13-1c71-4921-8700-2861a38ec250 no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:29:33.209: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3573" for this suite.
Dec 17 23:29:39.247: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:29:39.371: INFO: namespace secrets-3573 deletion completed in 6.153015458s

• [SLOW TEST:16.402 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable from pods in env vars [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
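The "success or failure" condition above comes from a run-once pod that reads the secret from its environment and exits. A sketch of the same wiring with client-go, with hypothetical secret name, key, and image, might be:

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.TODO()

    	secret := &corev1.Secret{
    		ObjectMeta: metav1.ObjectMeta{Name: "secret-test", Namespace: "default"},
    		StringData: map[string]string{"data-1": "value-1"},
    	}
    	if _, err := client.CoreV1().Secrets("default").Create(ctx, secret, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}

    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets", Namespace: "default"},
    		Spec: corev1.PodSpec{
    			// Run once and exit, so the pod ends up Succeeded or Failed.
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:    "secret-env-test",
    				Image:   "busybox:1.29",
    				Command: []string{"sh", "-c", "echo $SECRET_DATA"},
    				Env: []corev1.EnvVar{{
    					Name: "SECRET_DATA",
    					ValueFrom: &corev1.EnvVarSource{
    						SecretKeyRef: &corev1.SecretKeySelector{
    							LocalObjectReference: corev1.LocalObjectReference{Name: "secret-test"},
    							Key:                  "data-1",
    						},
    					},
    				}},
    			}},
    		},
    	}
    	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }
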
[sig-storage] Projected downwardAPI 
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:29:39.372: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating the pod
Dec 17 23:29:48.966: INFO: Successfully updated pod "labelsupdateb0653710-534a-485b-868f-e6f89b783e49"
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:29:51.002: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-7179" for this suite.
Dec 17 23:30:03.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:30:03.176: INFO: namespace projected-7179 deletion completed in 12.167898059s

• [SLOW TEST:23.805 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should update labels on modification [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
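What this spec verifies is the kubelet rewriting a projected downwardAPI file after the pod's labels are modified. A sketch of the relevant volume wiring, with hypothetical names, label, and image, might be:

    package main

    import (
    	"context"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{
    			Name:      "labelsupdate",
    			Namespace: "default",
    			Labels:    map[string]string{"key": "value-1"},
    		},
    		Spec: corev1.PodSpec{
    			Volumes: []corev1.Volume{{
    				Name: "podinfo",
    				VolumeSource: corev1.VolumeSource{
    					Projected: &corev1.ProjectedVolumeSource{
    						Sources: []corev1.VolumeProjection{{
    							DownwardAPI: &corev1.DownwardAPIProjection{
    								Items: []corev1.DownwardAPIVolumeFile{{
    									// Exposes the pod's labels as a file.
    									Path:     "labels",
    									FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
    								}},
    							},
    						}},
    					},
    				},
    			}},
    			Containers: []corev1.Container{{
    				Name:  "client",
    				Image: "busybox:1.29",
    				// Keep printing the projected file; after the pod's labels
    				// are patched, the kubelet rewrites it and the new value
    				// shows up in the container's logs.
    				Command:      []string{"sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"},
    				VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
    			}},
    		},
    	}
    	if _, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}
    }
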
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:30:03.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
[It] should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 17 23:30:13.921: INFO: Successfully updated pod "pod-update-572083a9-e0e1-48cc-a287-d182e4c96893"
STEP: verifying the updated pod is in kubernetes
Dec 17 23:30:14.031: INFO: Pod update OK
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:30:14.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-3393" for this suite.
Dec 17 23:30:42.245: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:30:42.357: INFO: namespace pods-3393 deletion completed in 28.309414408s

• [SLOW TEST:39.176 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
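Most of a pod's spec is immutable after creation; metadata such as labels is among the few things that can be updated in place, which is the kind of update this test performs. A sketch of such an update as a strategic-merge patch, with a hypothetical pod name and label, might be:

    package main

    import (
    	"context"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/types"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}

    	// Strategic-merge patch that adds or overwrites a single label on an
    	// existing pod, leaving the rest of the object untouched.
    	patch := []byte(`{"metadata":{"labels":{"time":"updated"}}}`)
    	if _, err := client.CoreV1().Pods("default").Patch(
    		context.TODO(), "pod-update", types.StrategicMergePatchType, patch, metav1.PatchOptions{},
    	); err != nil {
    		panic(err)
    	}
    }
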
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox command in a pod 
  should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:30:42.359: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should print the output to logs [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:30:50.570: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-2459" for this suite.
Dec 17 23:31:34.639: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:31:34.799: INFO: namespace kubelet-test-2459 deletion completed in 44.217423818s

• [SLOW TEST:52.440 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when scheduling a busybox command in a pod
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:40
    should print the output to logs [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
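The assertion behind this spec is simply that whatever a container writes to stdout is retrievable through the kubelet's log endpoint. A sketch of that round trip, with hypothetical names and image, might be:

    package main

    import (
    	"context"
    	"fmt"
    	"io"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    	if err != nil {
    		panic(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	ctx := context.TODO()

    	pod := &corev1.Pod{
    		ObjectMeta: metav1.ObjectMeta{Name: "busybox-logs", Namespace: "default"},
    		Spec: corev1.PodSpec{
    			RestartPolicy: corev1.RestartPolicyNever,
    			Containers: []corev1.Container{{
    				Name:    "busybox",
    				Image:   "busybox:1.29",
    				Command: []string{"sh", "-c", "echo 'hello from busybox'"},
    			}},
    		},
    	}
    	if _, err := client.CoreV1().Pods("default").Create(ctx, pod, metav1.CreateOptions{}); err != nil {
    		panic(err)
    	}

    	// In practice you would poll until the pod reaches phase Succeeded
    	// before asking for its logs.
    	stream, err := client.CoreV1().Pods("default").GetLogs("busybox-logs", &corev1.PodLogOptions{}).Stream(ctx)
    	if err != nil {
    		panic(err)
    	}
    	defer stream.Close()
    	out, err := io.ReadAll(stream)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(out)) // expected: hello from busybox
    }
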
SSSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:31:34.800: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
[It] deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 23:31:34.859: INFO: Creating deployment "webserver-deployment"
Dec 17 23:31:34.868: INFO: Waiting for observed generation 1
Dec 17 23:31:37.922: INFO: Waiting for all required pods to come up
Dec 17 23:31:38.609: INFO: Pod name httpd: Found 10 pods out of 10
STEP: ensuring each pod is running
Dec 17 23:32:07.051: INFO: Waiting for deployment "webserver-deployment" to complete
Dec 17 23:32:07.060: INFO: Updating deployment "webserver-deployment" with a non-existent image
Dec 17 23:32:07.067: INFO: Updating deployment webserver-deployment
Dec 17 23:32:07.067: INFO: Waiting for observed generation 2
Dec 17 23:32:10.205: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8
Dec 17 23:32:10.722: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8
Dec 17 23:32:10.843: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Dec 17 23:32:10.897: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0
Dec 17 23:32:10.897: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5
Dec 17 23:32:10.912: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas
Dec 17 23:32:11.872: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas
Dec 17 23:32:11.873: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30
Dec 17 23:32:11.902: INFO: Updating deployment webserver-deployment
Dec 17 23:32:11.902: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas
Dec 17 23:32:12.367: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20
Dec 17 23:32:14.966: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13
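The 20 and 13 asserted above are the proportional-scaling arithmetic itself: with 30 replicas desired and MaxSurge: 3 the deployment may run at most 33 pods, and that budget is split between the two ReplicaSets in proportion to their current sizes of 8 and 5, giving 33 × 8/13 ≈ 20.3, rounded to 20, for the old ReplicaSet and the remaining 13 for the new one.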
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62
Dec 17 23:32:16.620: INFO: Deployment "webserver-deployment":
&Deployment{ObjectMeta:{webserver-deployment  deployment-8735 /apis/apps/v1/namespaces/deployment-8735/deployments/webserver-deployment 284c23ca-450e-4a5c-9e6b-a74fa3de0841 9160440 3 2019-12-17 23:31:34 +0000 UTC   map[name:httpd] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035d2a28  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:25,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2019-12-17 23:32:10 +0000 UTC,LastTransitionTime:2019-12-17 23:31:34 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2019-12-17 23:32:12 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},}

Dec 17 23:32:17.899: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment":
&ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8  deployment-8735 /apis/apps/v1/namespaces/deployment-8735/replicasets/webserver-deployment-c7997dcc8 76518ee6-0ea7-4744-b5d2-e4eaa8f12cf0 9160459 3 2019-12-17 23:32:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 284c23ca-450e-4a5c-9e6b-a74fa3de0841 0xc0077bb387 0xc0077bb388}] []  []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [] []  []} {[] [] [{httpd webserver:404 [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0077bb3f8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Dec 17 23:32:17.899: INFO: All old ReplicaSets of Deployment "webserver-deployment":
Dec 17 23:32:17.900: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587  deployment-8735 /apis/apps/v1/namespaces/deployment-8735/replicasets/webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 9160427 3 2019-12-17 23:31:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 284c23ca-450e-4a5c-9e6b-a74fa3de0841 0xc0077bb2c7 0xc0077bb2c8}] []  []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0077bb328  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},}
Dec 17 23:32:18.755: INFO: Pod "webserver-deployment-595b5b9587-5ffqh" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5ffqh webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-5ffqh 0b46324a-80ae-42c4-a3aa-d9ac87c2d448 9160414 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc0077bb8b7 0xc0077bb8b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.756: INFO: Pod "webserver-deployment-595b5b9587-5ww9b" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-5ww9b webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-5ww9b cadc296e-d150-4deb-89e0-34cad07a5e4a 9160428 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc0077bb9d7 0xc0077bb9d8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-17 23:32:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.757: INFO: Pod "webserver-deployment-595b5b9587-7jccn" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7jccn webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-7jccn 77a1f677-2e80-4417-acec-50ea676a4a94 9160280 0 2019-12-17 23:31:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc0077bbb37 0xc0077bbb38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:31:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:04 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:31:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:10.32.0.6,StartTime:2019-12-17 23:31:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-17 23:32:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://659af6306fc9c304320f66693a8b62a26262e80691ff9c5d1af62767bd581209,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.6,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.758: INFO: Pod "webserver-deployment-595b5b9587-7jhdd" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-7jhdd webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-7jhdd 5822b23c-5219-41bb-b305-7cceb2dd2630 9160297 0 2019-12-17 23:31:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc0077bbcb0 0xc0077bbcb1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:31:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2019-12-17 23:32:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:31:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.1,StartTime:2019-12-17 23:31:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-17 23:32:01 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://f751b196368327be22b7f2de93fcd2c0dec3030c8a294433053bf264bc0631f6,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.1,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.759: INFO: Pod "webserver-deployment-595b5b9587-c2g9r" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-c2g9r webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-c2g9r c8eae911-d45b-42ae-a6e0-f82e575b4861 9160409 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc0077bbe40 0xc0077bbe41}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.759: INFO: Pod "webserver-deployment-595b5b9587-ctvp4" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ctvp4 webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-ctvp4 fcb03eb5-4e29-4828-94d9-aacd38e5d558 9160398 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc0077bbf57 0xc0077bbf58}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.760: INFO: Pod "webserver-deployment-595b5b9587-h7b4p" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-h7b4p webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-h7b4p 2b24d762-ed7a-4c06-8906-5fa2f894f8e9 9160285 0 2019-12-17 23:31:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc005558077 0xc005558078}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:31:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:04 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:31:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:10.32.0.4,StartTime:2019-12-17 23:31:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-17 23:32:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://dbd445c9377562859e071c019e25eb89ac6f24819548119f40c4cac8948aa419,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.760: INFO: Pod "webserver-deployment-595b5b9587-kdqw2" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-kdqw2 webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-kdqw2 8c7e2564-dbff-4cf7-bb89-0ceafa5ded91 9160294 0 2019-12-17 23:31:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc0055581e0 0xc0055581e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:31:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:04 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:31:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:10.32.0.7,StartTime:2019-12-17 23:31:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-17 23:32:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://599eabbe24a26ba5d81a37a1660228a290c3494762940e2264636f34e8c6c3e9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.7,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
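[editor's note] The "is available" / "is not available" verdicts in these INFO lines hinge on the pod's Ready condition: kdqw2 above is Running with Ready=True, so it counts as available. A minimal sketch of that check, using only the k8s.io/api types visible in the dumps; the helper name isAvailable is ours, not the framework's, and the real framework check additionally honors minReadySeconds:

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isAvailable is an illustrative reimplementation of the availability test
// implied by the log: a pod counts as available once it is Running and its
// Ready condition reports True. (The e2e framework also waits out any
// minReadySeconds, which we skip here.)
func isAvailable(pod *corev1.Pod) bool {
	if pod.Status.Phase != corev1.PodRunning {
		return false
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Mirrors webserver-deployment-595b5b9587-kdqw2 above: Running, Ready=True.
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			Phase: corev1.PodRunning,
			Conditions: []corev1.PodCondition{
				{Type: corev1.PodReady, Status: corev1.ConditionTrue},
			},
		},
	}
	fmt.Println(isAvailable(pod)) // true
}
```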
Dec 17 23:32:18.761: INFO: Pod "webserver-deployment-595b5b9587-ncbjf" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ncbjf webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-ncbjf 953c2c0e-7179-49e8-b087-1c6641081c42 9160421 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc005558340 0xc005558341}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.761: INFO: Pod "webserver-deployment-595b5b9587-ns2ns" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-ns2ns webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-ns2ns 196c308e-c7eb-46af-9b84-f9c39ebbdd9c 9160400 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc005558447 0xc005558448}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.762: INFO: Pod "webserver-deployment-595b5b9587-q84dd" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-q84dd webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-q84dd fd8c58d8-30cd-4683-b26e-795f004f5ec2 9160302 0 2019-12-17 23:31:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc005558567 0xc005558568}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:31:34 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2019-12-17 23:32:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:31:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.5,StartTime:2019-12-17 23:31:34 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-17 23:32:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://6f763468db3b1920ed4e2b26f06d7308fe66d93e4c4905b7c3b739498c8a648f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
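[editor's note] Every dump in this run shows QOSClass:BestEffort, which follows from the empty Resources{Limits:{},Requests:{}} on the httpd container. A hedged sketch of that classification rule (illustrative only; the apiserver computes the real QoS class, and the full rule also inspects init containers):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isBestEffort reports whether no container sets any resource request or
// limit, which is why every pod in these dumps carries QOSClass:BestEffort.
func isBestEffort(pod *corev1.Pod) bool {
	for _, c := range pod.Spec.Containers {
		if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
			return false
		}
	}
	return true
}

func main() {
	// No Requests or Limits, matching the httpd container specs above.
	pod := &corev1.Pod{Spec: corev1.PodSpec{Containers: []corev1.Container{{Name: "httpd"}}}}
	fmt.Println(isBestEffort(pod)) // true
}
```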
Dec 17 23:32:18.762: INFO: Pod "webserver-deployment-595b5b9587-qhps6" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-qhps6 webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-qhps6 6319ad41-b891-49f9-980a-8c59deb90c1d 9160406 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc0055586e0 0xc0055586e1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.763: INFO: Pod "webserver-deployment-595b5b9587-rm2g7" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-rm2g7 webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-rm2g7 e0aea477-3622-41b7-83cc-ebbeaea84f89 9160451 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc0055587f7 0xc0055587f8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-17 23:32:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
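[editor's note] rm2g7 above shows the intermediate rollout state: scheduled onto a node (HostIP set) but Ready=False with Reason:ContainersNotReady, because the kubelet is still creating the container. That per-container detail lives in Status.ContainerStatuses; a small sketch of pulling it out (the function name waitingReasons is ours):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// waitingReasons collects the Waiting.Reason of every container that has not
// started yet, e.g. "ContainerCreating" for rm2g7 above.
func waitingReasons(pod *corev1.Pod) map[string]string {
	reasons := map[string]string{}
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Waiting != nil {
			reasons[cs.Name] = cs.State.Waiting.Reason
		}
	}
	return reasons
}

func main() {
	pod := &corev1.Pod{
		Status: corev1.PodStatus{
			ContainerStatuses: []corev1.ContainerStatus{{
				Name: "httpd",
				State: corev1.ContainerState{
					Waiting: &corev1.ContainerStateWaiting{Reason: "ContainerCreating"},
				},
			}},
		},
	}
	fmt.Println(waitingReasons(pod)) // map[httpd:ContainerCreating]
}
```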
Dec 17 23:32:18.763: INFO: Pod "webserver-deployment-595b5b9587-smtdb" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-smtdb webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-smtdb 6039d388-f716-4d04-b05e-fa559217ac6e 9160305 0 2019-12-17 23:31:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc005558957 0xc005558958}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:31:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 
+0000 UTC,LastTransitionTime:2019-12-17 23:32:05 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:31:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:10.44.0.3,StartTime:2019-12-17 23:31:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-17 23:32:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://38fc52933670913db245124d5b38a431841734903c4f091d2c499582a8b0aebd,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.44.0.3,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.764: INFO: Pod "webserver-deployment-595b5b9587-tbrss" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-tbrss webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-tbrss 24b18c35-eb6e-475d-b0e7-61d1a9047312 9160457 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc005558ad0 0xc005558ad1}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:13 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:,StartTime:2019-12-17 23:32:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.764: INFO: Pod "webserver-deployment-595b5b9587-tw9m9" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-tw9m9 webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-tw9m9 1ea5c7fb-e6ea-4169-af3e-771ae571bb18 9160283 0 2019-12-17 23:31:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc005558c17 0xc005558c18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:31:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:04 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:31:34 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:10.32.0.8,StartTime:2019-12-17 23:31:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-17 23:32:04 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://14ee3cd50797154a5b2564a0c0b0c45124a388b60d7559b6ed3ef5d863215b91,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.8,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.765: INFO: Pod "webserver-deployment-595b5b9587-vshb2" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-vshb2 webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-vshb2 6f05e7da-78d0-4305-ba62-9388d9de7010 9160419 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc005558d80 0xc005558d81}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.766: INFO: Pod "webserver-deployment-595b5b9587-wcks7" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-wcks7 webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-wcks7 ee8a3ddb-bb8b-488a-92d6-953bd2b08953 9160405 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc005558e97 0xc005558e98}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.767: INFO: Pod "webserver-deployment-595b5b9587-xgg9k" is available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xgg9k webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-xgg9k 093c90d6-58e9-41ae-bf1e-e15c1c6d670b 9160289 0 2019-12-17 23:31:34 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc005558fb7 0xc005558fb8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:31:35 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:04 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:04 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:31:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:10.32.0.5,StartTime:2019-12-17 23:31:35 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-17 23:32:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:httpd:2.4.38-alpine,ImageID:docker-pullable://httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:docker://e7c1195ec725fe98d81dddac5dd4a46c7f2193192361bfd135e4d8014ee7c821,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.5,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.768: INFO: Pod "webserver-deployment-595b5b9587-xjrhn" is not available:
&Pod{ObjectMeta:{webserver-deployment-595b5b9587-xjrhn webserver-deployment-595b5b9587- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-595b5b9587-xjrhn 32c09e0e-8b05-462b-8ee7-8fa8eb0b2759 9160458 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 b1e5c7ee-4b27-4ccb-a1e4-06c806f7cf4e 0xc005559120 0xc005559121}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:14 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: 
[httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:14 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-17 23:32:14 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
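[editor's note] From here the dump switches to the webserver-deployment-c7997dcc8-* pods, i.e. the new ReplicaSet created by the rollout; its template uses Image:webserver:404 instead of httpd:2.4.38-alpine. The two generations are distinguished by the pod-template-hash label the Deployment controller stamps on each ReplicaSet's pods. A small sketch of bucketing pods by that label (groupByTemplateHash is our name for it):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// groupByTemplateHash buckets pods by the pod-template-hash label, which is
// how the 595b5b9587 (httpd:2.4.38-alpine) and c7997dcc8 (webserver:404)
// pods in this log are told apart.
func groupByTemplateHash(pods []corev1.Pod) map[string][]string {
	groups := map[string][]string{}
	for _, p := range pods {
		hash := p.Labels["pod-template-hash"]
		groups[hash] = append(groups[hash], p.Name)
	}
	return groups
}

func main() {
	pods := []corev1.Pod{
		{ObjectMeta: metav1.ObjectMeta{
			Name:   "webserver-deployment-595b5b9587-kdqw2",
			Labels: map[string]string{"pod-template-hash": "595b5b9587"},
		}},
		{ObjectMeta: metav1.ObjectMeta{
			Name:   "webserver-deployment-c7997dcc8-5zsz5",
			Labels: map[string]string{"pod-template-hash": "c7997dcc8"},
		}},
	}
	fmt.Println(groupByTemplateHash(pods))
}
```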
Dec 17 23:32:18.768: INFO: Pod "webserver-deployment-c7997dcc8-5zsz5" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5zsz5 webserver-deployment-c7997dcc8- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-c7997dcc8-5zsz5 6d1add75-d249-4c40-a31f-3d5efabc63bf 9160434 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 76518ee6-0ea7-4744-b5d2-e4eaa8f12cf0 0xc005559277 0xc005559278}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.769: INFO: Pod "webserver-deployment-c7997dcc8-66nwd" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-66nwd webserver-deployment-c7997dcc8- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-c7997dcc8-66nwd d34f93b7-9314-4610-85cc-91bd6ba43577 9160408 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 76518ee6-0ea7-4744-b5d2-e4eaa8f12cf0 0xc005559397 0xc005559398}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.769: INFO: Pod "webserver-deployment-c7997dcc8-b8fjz" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-b8fjz webserver-deployment-c7997dcc8- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-c7997dcc8-b8fjz 6f18ecce-0d47-4db6-a3d8-b45b8cd0bdb2 9160441 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 76518ee6-0ea7-4744-b5d2-e4eaa8f12cf0 0xc0055594b7 0xc0055594b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.770: INFO: Pod "webserver-deployment-c7997dcc8-hz2cf" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-hz2cf webserver-deployment-c7997dcc8- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-c7997dcc8-hz2cf 6fe84fcb-14e5-4861-bd9c-07643e7221cf 9160367 0 2019-12-17 23:32:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 76518ee6-0ea7-4744-b5d2-e4eaa8f12cf0 0xc0055595e7 0xc0055595e8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:09 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:09 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:,StartTime:2019-12-17 23:32:09 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.771: INFO: Pod "webserver-deployment-c7997dcc8-mwnsk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mwnsk webserver-deployment-c7997dcc8- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-c7997dcc8-mwnsk 90be9afa-1f8f-4eb6-a5be-fbdb7c5f7f2a 9160433 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 76518ee6-0ea7-4744-b5d2-e4eaa8f12cf0 0xc005559757 0xc005559758}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.771: INFO: Pod "webserver-deployment-c7997dcc8-r4qtk" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-r4qtk webserver-deployment-c7997dcc8- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-c7997dcc8-r4qtk 2fe586f0-4c61-450b-be39-3d9058fceab6 9160430 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 76518ee6-0ea7-4744-b5d2-e4eaa8f12cf0 0xc005559887 0xc005559888}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.772: INFO: Pod "webserver-deployment-c7997dcc8-rzwmx" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-rzwmx webserver-deployment-c7997dcc8- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-c7997dcc8-rzwmx 4095a16f-a2fc-4fe2-b603-58d71e5a3ce1 9160354 0 2019-12-17 23:32:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 76518ee6-0ea7-4744-b5d2-e4eaa8f12cf0 0xc0055599b7 0xc0055599b8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-17 23:32:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.773: INFO: Pod "webserver-deployment-c7997dcc8-snjsq" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-snjsq webserver-deployment-c7997dcc8- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-c7997dcc8-snjsq ddff3044-f023-4887-8f7e-37988b36efa8 9160407 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 76518ee6-0ea7-4744-b5d2-e4eaa8f12cf0 0xc005559b37 0xc005559b38}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.774: INFO: Pod "webserver-deployment-c7997dcc8-v55wn" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v55wn webserver-deployment-c7997dcc8- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-c7997dcc8-v55wn 50ecfdf5-a899-4272-935b-8da9b6c16116 9160435 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 76518ee6-0ea7-4744-b5d2-e4eaa8f12cf0 0xc005559c77 0xc005559c78}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.774: INFO: Pod "webserver-deployment-c7997dcc8-vzb5n" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-vzb5n webserver-deployment-c7997dcc8- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-c7997dcc8-vzb5n 6942c48c-94a2-4dab-9960-d13b141707b3 9160448 0 2019-12-17 23:32:12 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 76518ee6-0ea7-4744-b5d2-e4eaa8f12cf0 0xc005559da7 0xc005559da8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:12 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:,StartTime:2019-12-17 23:32:12 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.775: INFO: Pod "webserver-deployment-c7997dcc8-w5p6f" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-w5p6f webserver-deployment-c7997dcc8- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-c7997dcc8-w5p6f 155eaceb-468a-40a6-9156-0e7e8cd4da01 9160371 0 2019-12-17 23:32:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 76518ee6-0ea7-4744-b5d2-e4eaa8f12cf0 0xc005559f17 0xc005559f18}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:10 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:10 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-17 23:32:10 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.776: INFO: Pod "webserver-deployment-c7997dcc8-x87rt" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-x87rt webserver-deployment-c7997dcc8- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-c7997dcc8-x87rt 55f58e1e-15cf-40dd-bad3-28799719fd49 9160343 0 2019-12-17 23:32:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 76518ee6-0ea7-4744-b5d2-e4eaa8f12cf0 0xc002c0c0a7 0xc002c0c0a8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-node,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.2.170,PodIP:,StartTime:2019-12-17 23:32:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
Dec 17 23:32:18.776: INFO: Pod "webserver-deployment-c7997dcc8-xpwvl" is not available:
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xpwvl webserver-deployment-c7997dcc8- deployment-8735 /api/v1/namespaces/deployment-8735/pods/webserver-deployment-c7997dcc8-xpwvl 96d707fa-0529-4dd8-8258-f28d50ac56d3 9160350 0 2019-12-17 23:32:07 +0000 UTC   map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 76518ee6-0ea7-4744-b5d2-e4eaa8f12cf0 0xc002c0c227 0xc002c0c228}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-72j7d,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-72j7d,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-72j7d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:07 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:07 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:32:07 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:,StartTime:2019-12-17 23:32:07 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:32:18.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-8735" for this suite.
Dec 17 23:33:13.679: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:33:13.801: INFO: namespace deployment-8735 deletion completed in 52.267247661s

• [SLOW TEST:99.001 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support proportional scaling [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
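The pod dumps above are the expected shape for this spec: every pod of the new ReplicaSet is stuck Pending with its httpd container in ContainerCreating, because the rollout was wedged on an image tag (webserver:404) that does not resolve to a pullable image, so the deployment controller has to split any scale change proportionally between the old and the new ReplicaSet. A minimal sketch of observing the same behaviour by hand, assuming a reachable cluster; the deployment name and replica counts below are illustrative, not taken from the suite:

  # Create a deployment, then roll it to a tag that cannot be pulled,
  # mirroring the webserver:404 image used by the test.
  # (--replicas on `kubectl create deployment` needs a reasonably recent
  # kubectl; on older clients, create first and scale afterwards.)
  kubectl create deployment webserver --image=httpd:2.4.38-alpine --replicas=10
  kubectl set image deployment/webserver httpd=webserver:404
  # With the rollout held up by maxSurge/maxUnavailable, scale up: the new
  # replicas are divided proportionally across the old and new ReplicaSets.
  kubectl scale deployment/webserver --replicas=30
  kubectl get rs -l app=webserver
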
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl version 
  should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:33:13.803: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should check is all data is printed  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 23:33:13.912: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version'
Dec 17 23:33:14.103: INFO: stderr: ""
Dec 17 23:33:14.104: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.1\", GitCommit:\"d647ddbd755faf07169599a625faf302ffc34458\", GitTreeState:\"clean\", BuildDate:\"2019-12-07T14:58:17Z\", GoVersion:\"go1.12.14\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"16\", GitVersion:\"v1.16.1\", GitCommit:\"d647ddbd755faf07169599a625faf302ffc34458\", GitTreeState:\"clean\", BuildDate:\"2019-10-02T16:51:36Z\", GoVersion:\"go1.12.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:33:14.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-3598" for this suite.
Dec 17 23:33:20.135: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:33:20.238: INFO: namespace kubectl-3598 deletion completed in 6.127144238s

• [SLOW TEST:6.435 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl version
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1380
    should check is all data is printed  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
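The assertion in the spec above is only that `kubectl version` prints both the client and the server stanza shown in its stdout. A hand-rolled equivalent of the same check, as a sketch (the kubeconfig path is the one from the log; treating the two grep hits as "all data" is an assumption about the test's intent):

  out=$(kubectl --kubeconfig=/root/.kube/config version)
  echo "$out" | grep -q 'Client Version' \
    && echo "$out" | grep -q 'Server Version' \
    && echo 'all version data is printed'
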
SSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo 
  should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:33:20.239: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:277
[It] should create and stop a replication controller  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a replication controller
Dec 17 23:33:20.387: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-8472'
Dec 17 23:33:21.157: INFO: stderr: ""
Dec 17 23:33:21.157: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
Dec 17 23:33:21.158: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8472'
Dec 17 23:33:21.354: INFO: stderr: ""
Dec 17 23:33:21.354: INFO: stdout: "update-demo-nautilus-cfz8f update-demo-nautilus-rzzzp "
Dec 17 23:33:21.354: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cfz8f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8472'
Dec 17 23:33:21.568: INFO: stderr: ""
Dec 17 23:33:21.568: INFO: stdout: ""
Dec 17 23:33:21.568: INFO: update-demo-nautilus-cfz8f is created but not running
Dec 17 23:33:26.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8472'
Dec 17 23:33:26.753: INFO: stderr: ""
Dec 17 23:33:26.754: INFO: stdout: "update-demo-nautilus-cfz8f update-demo-nautilus-rzzzp "
Dec 17 23:33:26.755: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cfz8f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8472'
Dec 17 23:33:27.005: INFO: stderr: ""
Dec 17 23:33:27.006: INFO: stdout: ""
Dec 17 23:33:27.006: INFO: update-demo-nautilus-cfz8f is created but not running
Dec 17 23:33:32.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8472'
Dec 17 23:33:32.694: INFO: stderr: ""
Dec 17 23:33:32.694: INFO: stdout: "update-demo-nautilus-cfz8f update-demo-nautilus-rzzzp "
Dec 17 23:33:32.694: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cfz8f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8472'
Dec 17 23:33:32.797: INFO: stderr: ""
Dec 17 23:33:32.797: INFO: stdout: ""
Dec 17 23:33:32.797: INFO: update-demo-nautilus-cfz8f is created but not running
Dec 17 23:33:37.798: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-8472'
Dec 17 23:33:37.974: INFO: stderr: ""
Dec 17 23:33:37.974: INFO: stdout: "update-demo-nautilus-cfz8f update-demo-nautilus-rzzzp "
Dec 17 23:33:37.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cfz8f -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8472'
Dec 17 23:33:38.160: INFO: stderr: ""
Dec 17 23:33:38.161: INFO: stdout: "true"
Dec 17 23:33:38.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-cfz8f -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8472'
Dec 17 23:33:38.373: INFO: stderr: ""
Dec 17 23:33:38.373: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 17 23:33:38.373: INFO: validating pod update-demo-nautilus-cfz8f
Dec 17 23:33:38.408: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 17 23:33:38.408: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 17 23:33:38.408: INFO: update-demo-nautilus-cfz8f is verified up and running
Dec 17 23:33:38.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rzzzp -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-8472'
Dec 17 23:33:38.582: INFO: stderr: ""
Dec 17 23:33:38.582: INFO: stdout: "true"
Dec 17 23:33:38.583: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-rzzzp -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-8472'
Dec 17 23:33:38.714: INFO: stderr: ""
Dec 17 23:33:38.714: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
Dec 17 23:33:38.715: INFO: validating pod update-demo-nautilus-rzzzp
Dec 17 23:33:38.773: INFO: got data: {
  "image": "nautilus.jpg"
}

Dec 17 23:33:38.774: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
Dec 17 23:33:38.774: INFO: update-demo-nautilus-rzzzp is verified up and running
STEP: using delete to clean up resources
Dec 17 23:33:38.775: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-8472'
Dec 17 23:33:38.924: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
Dec 17 23:33:38.925: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
Dec 17 23:33:38.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-8472'
Dec 17 23:33:39.052: INFO: stderr: "No resources found in kubectl-8472 namespace.\n"
Dec 17 23:33:39.053: INFO: stdout: ""
Dec 17 23:33:39.053: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-8472 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
Dec 17 23:33:39.168: INFO: stderr: ""
Dec 17 23:33:39.168: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:33:39.169: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-8472" for this suite.
Dec 17 23:34:07.267: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:34:07.432: INFO: namespace kubectl-8472 deletion completed in 28.226843152s

• [SLOW TEST:47.194 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:275
    should create and stop a replication controller  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:34:07.434: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating service multi-endpoint-test in namespace services-2435
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2435 to expose endpoints map[]
Dec 17 23:34:07.668: INFO: Get endpoints failed (28.98253ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found
Dec 17 23:34:08.676: INFO: successfully validated that service multi-endpoint-test in namespace services-2435 exposes endpoints map[] (1.036852361s elapsed)
STEP: Creating pod pod1 in namespace services-2435
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2435 to expose endpoints map[pod1:[100]]
Dec 17 23:34:12.884: INFO: Unexpected endpoints: found map[], expected map[pod1:[100]] (4.194941964s elapsed, will retry)
Dec 17 23:34:17.026: INFO: successfully validated that service multi-endpoint-test in namespace services-2435 exposes endpoints map[pod1:[100]] (8.336541834s elapsed)
STEP: Creating pod pod2 in namespace services-2435
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2435 to expose endpoints map[pod1:[100] pod2:[101]]
Dec 17 23:34:21.458: INFO: Unexpected endpoints: found map[b26ca07b-b5b0-41a7-94f5-cc832fdba7c2:[100]], expected map[pod1:[100] pod2:[101]] (4.408421309s elapsed, will retry)
Dec 17 23:34:24.695: INFO: successfully validated that service multi-endpoint-test in namespace services-2435 exposes endpoints map[pod1:[100] pod2:[101]] (7.645459146s elapsed)
STEP: Deleting pod pod1 in namespace services-2435
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2435 to expose endpoints map[pod2:[101]]
Dec 17 23:34:24.763: INFO: successfully validated that service multi-endpoint-test in namespace services-2435 exposes endpoints map[pod2:[101]] (57.826294ms elapsed)
STEP: Deleting pod pod2 in namespace services-2435
STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-2435 to expose endpoints map[]
Dec 17 23:34:25.861: INFO: successfully validated that service multi-endpoint-test in namespace services-2435 exposes endpoints map[] (1.070904704s elapsed)
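Note: a minimal sketch of the Service shape this test exercises. The selector label and port names are assumptions; target ports 100 and 101 match the endpoint maps logged above (pod1 serves 100, pod2 serves 101).
  kubectl apply --namespace=services-2435 -f - <<'EOF'
  apiVersion: v1
  kind: Service
  metadata:
    name: multi-endpoint-test
  spec:
    selector:
      app: multi-endpoint-test     # assumed label; the suite sets its own selector
    ports:
    - name: portname1              # assumed port names
      port: 80
      targetPort: 100              # served by pod1, per the endpoint map above
    - name: portname2
      port: 81
      targetPort: 101              # served by pod2
  EOF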
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:34:25.972: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-2435" for this suite.
Dec 17 23:34:54.013: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:34:54.121: INFO: namespace services-2435 deletion completed in 28.13800633s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95

• [SLOW TEST:46.688 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve multiport endpoints from pods  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:34:54.123: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0666 on node default medium
Dec 17 23:34:54.199: INFO: Waiting up to 5m0s for pod "pod-a20d5fa0-3f7d-41d9-84e7-0da4c5d9ce6c" in namespace "emptydir-1176" to be "success or failure"
Dec 17 23:34:54.206: INFO: Pod "pod-a20d5fa0-3f7d-41d9-84e7-0da4c5d9ce6c": Phase="Pending", Reason="", readiness=false. Elapsed: 7.14323ms
Dec 17 23:34:56.216: INFO: Pod "pod-a20d5fa0-3f7d-41d9-84e7-0da4c5d9ce6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.017284222s
Dec 17 23:34:58.225: INFO: Pod "pod-a20d5fa0-3f7d-41d9-84e7-0da4c5d9ce6c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.026272505s
Dec 17 23:35:00.242: INFO: Pod "pod-a20d5fa0-3f7d-41d9-84e7-0da4c5d9ce6c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.042734397s
Dec 17 23:35:02.250: INFO: Pod "pod-a20d5fa0-3f7d-41d9-84e7-0da4c5d9ce6c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.051009319s
Dec 17 23:35:04.258: INFO: Pod "pod-a20d5fa0-3f7d-41d9-84e7-0da4c5d9ce6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.058953046s
STEP: Saw pod success
Dec 17 23:35:04.258: INFO: Pod "pod-a20d5fa0-3f7d-41d9-84e7-0da4c5d9ce6c" satisfied condition "success or failure"
Dec 17 23:35:04.261: INFO: Trying to get logs from node jerma-node pod pod-a20d5fa0-3f7d-41d9-84e7-0da4c5d9ce6c container test-container: 
STEP: delete the pod
Dec 17 23:35:04.324: INFO: Waiting for pod pod-a20d5fa0-3f7d-41d9-84e7-0da4c5d9ce6c to disappear
Dec 17 23:35:04.327: INFO: Pod pod-a20d5fa0-3f7d-41d9-84e7-0da4c5d9ce6c no longer exists
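Note: a minimal sketch of the pod shape behind this test (image, user id, and command are assumptions): an emptyDir on the node's default medium, written with mode 0666 by a non-root user, with the pod expected to reach phase Succeeded as logged above.
  kubectl apply --namespace=emptydir-1176 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-emptydir-demo
  spec:
    restartPolicy: Never
    securityContext:
      runAsUser: 1001              # non-root, per the test name
    containers:
    - name: test-container
      image: busybox               # assumed; the suite ships its own test image
      command: ["sh", "-c", "touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}                 # default medium (node-local disk), per the test name
  EOF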
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:35:04.327: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-1176" for this suite.
Dec 17 23:35:10.361: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:35:10.568: INFO: namespace emptydir-1176 deletion completed in 6.236415866s

• [SLOW TEST:16.446 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:35:10.570: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 23:35:10.770: INFO: Waiting up to 5m0s for pod "downwardapi-volume-99bf1e21-a3fa-4263-8d5e-bc50f59aad30" in namespace "projected-5359" to be "success or failure"
Dec 17 23:35:10.813: INFO: Pod "downwardapi-volume-99bf1e21-a3fa-4263-8d5e-bc50f59aad30": Phase="Pending", Reason="", readiness=false. Elapsed: 43.317444ms
Dec 17 23:35:12.819: INFO: Pod "downwardapi-volume-99bf1e21-a3fa-4263-8d5e-bc50f59aad30": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048679208s
Dec 17 23:35:14.835: INFO: Pod "downwardapi-volume-99bf1e21-a3fa-4263-8d5e-bc50f59aad30": Phase="Pending", Reason="", readiness=false. Elapsed: 4.064762761s
Dec 17 23:35:16.847: INFO: Pod "downwardapi-volume-99bf1e21-a3fa-4263-8d5e-bc50f59aad30": Phase="Pending", Reason="", readiness=false. Elapsed: 6.076783938s
Dec 17 23:35:18.860: INFO: Pod "downwardapi-volume-99bf1e21-a3fa-4263-8d5e-bc50f59aad30": Phase="Pending", Reason="", readiness=false. Elapsed: 8.09002016s
Dec 17 23:35:20.870: INFO: Pod "downwardapi-volume-99bf1e21-a3fa-4263-8d5e-bc50f59aad30": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.099728237s
STEP: Saw pod success
Dec 17 23:35:20.870: INFO: Pod "downwardapi-volume-99bf1e21-a3fa-4263-8d5e-bc50f59aad30" satisfied condition "success or failure"
Dec 17 23:35:20.875: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-99bf1e21-a3fa-4263-8d5e-bc50f59aad30 container client-container: 
STEP: delete the pod
Dec 17 23:35:20.992: INFO: Waiting for pod downwardapi-volume-99bf1e21-a3fa-4263-8d5e-bc50f59aad30 to disappear
Dec 17 23:35:20.997: INFO: Pod downwardapi-volume-99bf1e21-a3fa-4263-8d5e-bc50f59aad30 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:35:20.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5359" for this suite.
Dec 17 23:35:27.039: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:35:27.189: INFO: namespace projected-5359 deletion completed in 6.183384284s

• [SLOW TEST:16.620 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSS
------------------------------
[k8s.io] Docker Containers 
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:35:27.190: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
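Note: no STEP lines are logged for this case; the whole assertion is that a container whose command and args are left empty runs the image's own ENTRYPOINT/CMD. A minimal sketch (the image choice is an assumption; any image with a default command works):
  kubectl apply --namespace=containers-1226 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: image-defaults-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/httpd:2.4.38-alpine   # assumed
      # command and args intentionally omitted: the image defaults apply
  EOF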
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:35:37.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-1226" for this suite.
Dec 17 23:36:07.454: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:36:07.660: INFO: namespace containers-1226 deletion completed in 30.243330132s

• [SLOW TEST:40.471 seconds]
[k8s.io] Docker Containers
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSS
------------------------------
[sig-apps] Deployment 
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:36:07.661: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename deployment
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:68
[It] deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 23:36:07.948: INFO: Pod name rollover-pod: Found 1 pods out of 1
STEP: ensuring each pod is running
Dec 17 23:36:16.019: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready
Dec 17 23:36:18.034: INFO: Creating deployment "test-rollover-deployment"
Dec 17 23:36:18.051: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations
Dec 17 23:36:20.064: INFO: Check revision of new replica set for deployment "test-rollover-deployment"
Dec 17 23:36:20.074: INFO: Ensure that both replica sets have 1 created replica
Dec 17 23:36:20.083: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update
Dec 17 23:36:20.097: INFO: Updating deployment test-rollover-deployment
Dec 17 23:36:20.097: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller
Dec 17 23:36:22.210: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2
Dec 17 23:36:22.223: INFO: Make sure deployment "test-rollover-deployment" is complete
Dec 17 23:36:22.238: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 23:36:22.238: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222580, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 23:36:24.263: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 23:36:24.263: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222580, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 23:36:26.260: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 23:36:26.261: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222580, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 23:36:28.263: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 23:36:28.263: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222588, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 23:36:30.256: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 23:36:30.256: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222588, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 23:36:32.461: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 23:36:32.461: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222588, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 23:36:34.253: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 23:36:34.253: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222588, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 23:36:36.258: INFO: all replica sets need to contain the pod-template-hash label
Dec 17 23:36:36.258: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222588, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 23:36:38.277: INFO: 
Dec 17 23:36:38.277: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:2, UnavailableReplicas:0, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222598, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222578, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-7d7dc6548c\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 23:36:40.255: INFO: 
Dec 17 23:36:40.255: INFO: Ensure that both old replica sets have no replicas
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:62
Dec 17 23:36:40.267: INFO: Deployment "test-rollover-deployment":
&Deployment{ObjectMeta:{test-rollover-deployment  deployment-2935 /apis/apps/v1/namespaces/deployment-2935/deployments/test-rollover-deployment 2c174a49-7457-4be8-91b5-27e3730ab278 9161256 2 2019-12-17 23:36:18 +0000 UTC   map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] []  []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod] map[] [] []  []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc006c9a0a8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2019-12-17 23:36:18 +0000 UTC,LastTransitionTime:2019-12-17 23:36:18 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-7d7dc6548c" has successfully progressed.,LastUpdateTime:2019-12-17 23:36:38 +0000 UTC,LastTransitionTime:2019-12-17 23:36:18 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},}

Dec 17 23:36:40.272: INFO: New ReplicaSet "test-rollover-deployment-7d7dc6548c" of Deployment "test-rollover-deployment":
&ReplicaSet{ObjectMeta:{test-rollover-deployment-7d7dc6548c  deployment-2935 /apis/apps/v1/namespaces/deployment-2935/replicasets/test-rollover-deployment-7d7dc6548c 08fc0c7b-dac0-42f2-99c5-615e21dc108c 9161245 2 2019-12-17 23:36:20 +0000 UTC   map[name:rollover-pod pod-template-hash:7d7dc6548c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 2c174a49-7457-4be8-91b5-27e3730ab278 0xc006c9a557 0xc006c9a558}] []  []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 7d7dc6548c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:7d7dc6548c] map[] [] []  []} {[] [] [{redis docker.io/library/redis:5.0.5-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc006c9a5b8  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},}
Dec 17 23:36:40.272: INFO: All old ReplicaSets of Deployment "test-rollover-deployment":
Dec 17 23:36:40.272: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller  deployment-2935 /apis/apps/v1/namespaces/deployment-2935/replicasets/test-rollover-controller 3bef2533-a8a0-454d-bd41-4819dd49528e 9161255 2 2019-12-17 23:36:07 +0000 UTC   map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 2c174a49-7457-4be8-91b5-27e3730ab278 0xc006c9a45f 0xc006c9a470}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod:httpd] map[] [] []  []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc006c9a4e8  ClusterFirst map[]     false false false  PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Dec 17 23:36:40.272: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c  deployment-2935 /apis/apps/v1/namespaces/deployment-2935/replicasets/test-rollover-deployment-f6c94f66c 620e486e-6215-497d-a74e-76ff647fedaa 9161211 2 2019-12-17 23:36:18 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 2c174a49-7457-4be8-91b5-27e3730ab278 0xc006c9a620 0xc006c9a621}] []  []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{      0 0001-01-01 00:00:00 +0000 UTC   map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] []  []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] []  [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc006c9a698  ClusterFirst map[]     false false false  &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} []   nil default-scheduler [] []   nil []    map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},}
Dec 17 23:36:40.276: INFO: Pod "test-rollover-deployment-7d7dc6548c-cbb7v" is available:
&Pod{ObjectMeta:{test-rollover-deployment-7d7dc6548c-cbb7v test-rollover-deployment-7d7dc6548c- deployment-2935 /api/v1/namespaces/deployment-2935/pods/test-rollover-deployment-7d7dc6548c-cbb7v 8d04c963-7621-4c14-ac56-cf2368394d7d 9161229 0 2019-12-17 23:36:20 +0000 UTC   map[name:rollover-pod pod-template-hash:7d7dc6548c] map[] [{apps/v1 ReplicaSet test-rollover-deployment-7d7dc6548c 08fc0c7b-dac0-42f2-99c5-615e21dc108c 0xc00083fdc7 0xc00083fdc8}] []  []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b85fz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b85fz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:redis,Image:docker.io/library/redis:5.0.5-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b85fz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-server-4b75xjbddvit,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:36:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:36:28 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:36:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2019-12-17 23:36:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:10.96.3.35,PodIP:10.32.0.4,StartTime:2019-12-17 23:36:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:redis,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2019-12-17 23:36:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:redis:5.0.5-alpine,ImageID:docker-pullable://redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858,ContainerID:docker://b400cf50d8692686a6acd187b49ea2239f9961fad9050ba5315244df36c53b36,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.32.0.4,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
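Note: the Deployment dumped above, reduced to the fields that drive the behavior seen in the status loop: minReadySeconds=10 delays availability of the new pod, and maxUnavailable=0 with maxSurge=1 forces a one-at-a-time rollover. All values below are taken from the dump.
  kubectl apply --namespace=deployment-2935 -f - <<'EOF'
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: test-rollover-deployment
  spec:
    replicas: 1
    minReadySeconds: 10
    selector:
      matchLabels:
        name: rollover-pod
    strategy:
      type: RollingUpdate
      rollingUpdate:
        maxUnavailable: 0          # never drop below one available replica
        maxSurge: 1                # allow one extra pod during the rollover
    template:
      metadata:
        labels:
          name: rollover-pod
      spec:
        containers:
        - name: redis
          image: docker.io/library/redis:5.0.5-alpine
  EOF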
[AfterEach] [sig-apps] Deployment
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:36:40.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-2935" for this suite.
Dec 17 23:36:48.339: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:36:48.716: INFO: namespace deployment-2935 deletion completed in 8.43284972s

• [SLOW TEST:41.055 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  deployment should support rollover [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI 
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:36:48.718: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 23:36:48.865: INFO: Waiting up to 5m0s for pod "downwardapi-volume-fb9ff8fb-8017-4a9e-ba49-85f62fac1ed0" in namespace "projected-5465" to be "success or failure"
Dec 17 23:36:48.900: INFO: Pod "downwardapi-volume-fb9ff8fb-8017-4a9e-ba49-85f62fac1ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 34.675046ms
Dec 17 23:36:50.911: INFO: Pod "downwardapi-volume-fb9ff8fb-8017-4a9e-ba49-85f62fac1ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045961145s
Dec 17 23:36:52.921: INFO: Pod "downwardapi-volume-fb9ff8fb-8017-4a9e-ba49-85f62fac1ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055729084s
Dec 17 23:36:54.929: INFO: Pod "downwardapi-volume-fb9ff8fb-8017-4a9e-ba49-85f62fac1ed0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.064029956s
Dec 17 23:36:56.939: INFO: Pod "downwardapi-volume-fb9ff8fb-8017-4a9e-ba49-85f62fac1ed0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.074458915s
STEP: Saw pod success
Dec 17 23:36:56.940: INFO: Pod "downwardapi-volume-fb9ff8fb-8017-4a9e-ba49-85f62fac1ed0" satisfied condition "success or failure"
Dec 17 23:36:56.944: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-fb9ff8fb-8017-4a9e-ba49-85f62fac1ed0 container client-container: 
STEP: delete the pod
Dec 17 23:36:56.986: INFO: Waiting for pod downwardapi-volume-fb9ff8fb-8017-4a9e-ba49-85f62fac1ed0 to disappear
Dec 17 23:36:56.995: INFO: Pod downwardapi-volume-fb9ff8fb-8017-4a9e-ba49-85f62fac1ed0 no longer exists
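Note: same volume plugin as the memory-request case above, but the projected item is a fieldRef instead of a resourceFieldRef. A minimal sketch (file path and image are assumptions):
  kubectl apply --namespace=projected-5465 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: downwardapi-podname-demo
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox               # assumed
      command: ["sh", "-c", "cat /etc/podinfo/podname"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: podname
              fieldRef:
                fieldPath: metadata.name   # the pod name, and nothing else
  EOF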
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:36:56.995: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-5465" for this suite.
Dec 17 23:37:03.190: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:37:03.533: INFO: namespace projected-5465 deletion completed in 6.533140494s

• [SLOW TEST:14.815 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:37:03.534: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name projected-configmap-test-volume-672725ce-dde6-4c57-a586-595371fefdef
STEP: Creating a pod to test consume configMaps
Dec 17 23:37:03.665: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e753c0c6-9fcc-4a2a-a094-05376965f80d" in namespace "projected-4772" to be "success or failure"
Dec 17 23:37:03.696: INFO: Pod "pod-projected-configmaps-e753c0c6-9fcc-4a2a-a094-05376965f80d": Phase="Pending", Reason="", readiness=false. Elapsed: 30.482657ms
Dec 17 23:37:05.707: INFO: Pod "pod-projected-configmaps-e753c0c6-9fcc-4a2a-a094-05376965f80d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.041908594s
Dec 17 23:37:07.719: INFO: Pod "pod-projected-configmaps-e753c0c6-9fcc-4a2a-a094-05376965f80d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.053292145s
Dec 17 23:37:09.737: INFO: Pod "pod-projected-configmaps-e753c0c6-9fcc-4a2a-a094-05376965f80d": Phase="Pending", Reason="", readiness=false. Elapsed: 6.071652398s
Dec 17 23:37:11.745: INFO: Pod "pod-projected-configmaps-e753c0c6-9fcc-4a2a-a094-05376965f80d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.0797833s
STEP: Saw pod success
Dec 17 23:37:11.746: INFO: Pod "pod-projected-configmaps-e753c0c6-9fcc-4a2a-a094-05376965f80d" satisfied condition "success or failure"
Dec 17 23:37:11.749: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-e753c0c6-9fcc-4a2a-a094-05376965f80d container projected-configmap-volume-test: 
STEP: delete the pod
Dec 17 23:37:11.829: INFO: Waiting for pod pod-projected-configmaps-e753c0c6-9fcc-4a2a-a094-05376965f80d to disappear
Dec 17 23:37:11.844: INFO: Pod pod-projected-configmaps-e753c0c6-9fcc-4a2a-a094-05376965f80d no longer exists
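Note: a sketch of the "multiple volumes" shape (names and data are assumptions): one ConfigMap projected into two separate volumes of the same pod, each mounted at its own path.
  kubectl apply --namespace=projected-4772 -f - <<'EOF'
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: projected-configmap-demo
  data:
    data-1: value-1
  ---
  apiVersion: v1
  kind: Pod
  metadata:
    name: pod-projected-configmaps-demo
  spec:
    restartPolicy: Never
    containers:
    - name: projected-configmap-volume-test
      image: busybox               # assumed
      command: ["sh", "-c", "cat /etc/projected-1/data-1 /etc/projected-2/data-1"]
      volumeMounts:
      - name: vol-1
        mountPath: /etc/projected-1
      - name: vol-2
        mountPath: /etc/projected-2
    volumes:
    - name: vol-1
      projected:
        sources:
        - configMap:
            name: projected-configmap-demo
    - name: vol-2
      projected:
        sources:
        - configMap:
            name: projected-configmap-demo
  EOF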
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:37:11.844: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4772" for this suite.
Dec 17 23:37:17.902: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:37:18.005: INFO: namespace projected-4772 deletion completed in 6.154404809s

• [SLOW TEST:14.472 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SS
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:37:18.006: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:88
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
Dec 17 23:37:19.460: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
Dec 17 23:37:21.566: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222639, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222639, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222639, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222639, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 23:37:23.584: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222639, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222639, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222639, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222639, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 23:37:25.658: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222639, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222639, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222639, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222639, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
Dec 17 23:37:27.578: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222639, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222639, loc:(*time.Location)(0x8492160)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222639, loc:(*time.Location)(0x8492160)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712222639, loc:(*time.Location)(0x8492160)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-86d95b659d\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
Dec 17 23:37:30.607: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Setting timeout (1s) shorter than webhook latency (5s)
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s)
STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is longer than webhook latency
STEP: Registering slow webhook via the AdmissionRegistration API
STEP: Having no error when timeout is empty (defaulted to 10s in v1)
STEP: Registering slow webhook via the AdmissionRegistration API
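Note: a sketch of the registration shape behind the four STEPs above (the webhook name, path, and rules are assumptions; the service name matches the log). With timeoutSeconds: 1 against a webhook that sleeps 5s the request fails, unless failurePolicy is Ignore; leaving timeoutSeconds unset defaults it to 10s in v1, as the last STEP notes.
  kubectl apply -f - <<'EOF'
  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: slow-webhook-demo
  webhooks:
  - name: slow.example.com         # assumed
    timeoutSeconds: 1              # shorter than the webhook's 5s latency
    failurePolicy: Fail            # switch to Ignore to swallow the timeout
    clientConfig:
      service:
        namespace: webhook-9982
        name: e2e-test-webhook
        path: /always-allow-delay-5s   # assumed path
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["configmaps"]    # assumed target resource
    sideEffects: None
    admissionReviewVersions: ["v1"]
  EOF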
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:37:42.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-9982" for this suite.
Dec 17 23:37:48.998: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:37:49.115: INFO: namespace webhook-9982 deletion completed in 6.17557947s
STEP: Destroying namespace "webhook-9982-markers" for this suite.
Dec 17 23:37:55.204: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:37:55.313: INFO: namespace webhook-9982-markers deletion completed in 6.197888683s
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:103

• [SLOW TEST:37.324 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should honor timeout [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] 
  Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:37:55.331: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename statefulset
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:62
[BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:77
STEP: Creating service test in namespace statefulset-5711
[It] Should recreate evicted statefulset [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Looking for a node to schedule stateful set and pod
STEP: Creating pod with conflicting port in namespace statefulset-5711
STEP: Creating statefulset with conflicting port in namespace statefulset-5711
STEP: Waiting until pod test-pod starts running in namespace statefulset-5711
STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-5711
Dec 17 23:38:05.538: INFO: Observed stateful pod in namespace: statefulset-5711, name: ss-0, uid: 0662e4a9-f59f-4381-950d-4a368ea25336, status phase: Pending. Waiting for statefulset controller to delete.
Dec 17 23:38:06.639: INFO: Observed stateful pod in namespace: statefulset-5711, name: ss-0, uid: 0662e4a9-f59f-4381-950d-4a368ea25336, status phase: Failed. Waiting for statefulset controller to delete.
Dec 17 23:38:06.689: INFO: Observed stateful pod in namespace: statefulset-5711, name: ss-0, uid: 0662e4a9-f59f-4381-950d-4a368ea25336, status phase: Failed. Waiting for statefulset controller to delete.
Dec 17 23:38:06.707: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-5711
STEP: Removing pod with conflicting port in namespace statefulset-5711
STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-5711 and reaches the running state
Dec 17 23:43:06.834: FAIL: Timed out after 300.001s.
Expected
    <*errors.errorString | 0xc00078f910>: {
        s: "pod ss-0 is not in running phase: Pending",
    }
to be nil
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:88
Dec 17 23:43:06.842: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe po ss-0 --namespace=statefulset-5711'
Dec 17 23:43:08.952: INFO: stderr: ""
Dec 17 23:43:08.953: INFO: stdout: "Name:           ss-0\nNamespace:      statefulset-5711\nPriority:       0\nNode:           jerma-node/\nLabels:         baz=blah\n                controller-revision-hash=ss-5c959bc8d4\n                foo=bar\n                statefulset.kubernetes.io/pod-name=ss-0\nAnnotations:    \nStatus:         Pending\nIP:             \nIPs:            \nControlled By:  StatefulSet/ss\nContainers:\n  webserver:\n    Image:        docker.io/library/httpd:2.4.38-alpine\n    Port:         21017/TCP\n    Host Port:    21017/TCP\n    Environment:  \n    Mounts:\n      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rzsvs (ro)\nVolumes:\n  default-token-rzsvs:\n    Type:        Secret (a volume populated by a Secret)\n    SecretName:  default-token-rzsvs\n    Optional:    false\nQoS Class:       BestEffort\nNode-Selectors:  \nTolerations:     node.kubernetes.io/not-ready:NoExecute for 300s\n                 node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n  Type     Reason            Age   From                 Message\n  ----     ------            ----  ----                 -------\n  Warning  PodFitsHostPorts  5m2s  kubelet, jerma-node  Predicate PodFitsHostPorts failed\n"
Dec 17 23:43:08.953: INFO: 
Output of kubectl describe ss-0:
Name:           ss-0
Namespace:      statefulset-5711
Priority:       0
Node:           jerma-node/
Labels:         baz=blah
                controller-revision-hash=ss-5c959bc8d4
                foo=bar
                statefulset.kubernetes.io/pod-name=ss-0
Annotations:    
Status:         Pending
IP:             
IPs:            
Controlled By:  StatefulSet/ss
Containers:
  webserver:
    Image:        docker.io/library/httpd:2.4.38-alpine
    Port:         21017/TCP
    Host Port:    21017/TCP
    Environment:  
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-rzsvs (ro)
Volumes:
  default-token-rzsvs:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-rzsvs
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:
  Type     Reason            Age   From                 Message
  ----     ------            ----  ----                 -------
  Warning  PodFitsHostPorts  5m2s  kubelet, jerma-node  Predicate PodFitsHostPorts failed
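Note: the PodFitsHostPorts warnings are a deliberate host-port collision: per the describe output, ss-0 pins hostPort 21017 on jerma-node, and the test's conflicting pod holds the same port on the same node, so the kubelet rejects ss-0. The FAIL above is the follow-up wait timing out: ss-0 stayed Pending for 300s even after the conflicting pod was deleted. A sketch of the conflicting pod, its shape assumed from the describe output and events:
  kubectl apply --namespace=statefulset-5711 -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: test-pod
  spec:
    nodeName: jerma-node           # pinned to the same node as ss-0
    containers:
    - name: webserver
      image: docker.io/library/httpd:2.4.38-alpine
      ports:
      - containerPort: 21017
        hostPort: 21017            # collides with the StatefulSet's hostPort
  EOF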

Dec 17 23:43:08.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs ss-0 --namespace=statefulset-5711 --tail=100'
Dec 17 23:43:09.141: INFO: rc: 1
Dec 17 23:43:09.143: INFO: 
Last 100 log lines of ss-0:

Dec 17 23:43:09.143: INFO: Deleting all statefulset in ns statefulset-5711
Dec 17 23:43:09.152: INFO: Scaling statefulset ss to 0
Dec 17 23:43:19.209: INFO: Waiting for statefulset status.replicas updated to 0
Dec 17 23:43:19.215: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
STEP: Collecting events from namespace "statefulset-5711".
STEP: Found 13 events.
Dec 17 23:43:19.252: INFO: At 2019-12-17 23:37:55 +0000 UTC - event for ss: {statefulset-controller } SuccessfulCreate: create Pod ss-0 in StatefulSet ss successful
Dec 17 23:43:19.252: INFO: At 2019-12-17 23:37:55 +0000 UTC - event for ss: {statefulset-controller } SuccessfulDelete: delete Pod ss-0 in StatefulSet ss successful
Dec 17 23:43:19.252: INFO: At 2019-12-17 23:37:55 +0000 UTC - event for ss: {statefulset-controller } RecreatingFailedPod: StatefulSet statefulset-5711/ss is recreating failed Pod ss-0
Dec 17 23:43:19.252: INFO: At 2019-12-17 23:37:55 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 17 23:43:19.252: INFO: At 2019-12-17 23:37:55 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 17 23:43:19.252: INFO: At 2019-12-17 23:37:56 +0000 UTC - event for ss: {statefulset-controller } FailedCreate: create Pod ss-0 in StatefulSet ss failed error: The POST operation against Pod could not be completed at this time, please try again.
Dec 17 23:43:19.252: INFO: At 2019-12-17 23:37:56 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 17 23:43:19.252: INFO: At 2019-12-17 23:37:57 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 17 23:43:19.252: INFO: At 2019-12-17 23:37:59 +0000 UTC - event for test-pod: {kubelet jerma-node} Pulled: Container image "docker.io/library/httpd:2.4.38-alpine" already present on machine
Dec 17 23:43:19.252: INFO: At 2019-12-17 23:38:02 +0000 UTC - event for test-pod: {kubelet jerma-node} Created: Created container webserver
Dec 17 23:43:19.252: INFO: At 2019-12-17 23:38:03 +0000 UTC - event for test-pod: {kubelet jerma-node} Started: Started container webserver
Dec 17 23:43:19.252: INFO: At 2019-12-17 23:38:06 +0000 UTC - event for ss-0: {kubelet jerma-node} PodFitsHostPorts: Predicate PodFitsHostPorts failed
Dec 17 23:43:19.252: INFO: At 2019-12-17 23:38:06 +0000 UTC - event for test-pod: {kubelet jerma-node} Killing: Stopping container webserver
Dec 17 23:43:19.256: INFO: POD  NODE  PHASE  GRACE  CONDITIONS
Dec 17 23:43:19.256: INFO: 
Dec 17 23:43:19.276: INFO: 
Logging node info for node jerma-node
Dec 17 23:43:19.282: INFO: Node Info: &Node{ObjectMeta:{jerma-node   /api/v1/nodes/jerma-node 77a1de86-fa0a-4097-aa1b-ddd3667d796b 9161979 0 2019-10-12 13:47:49 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-node kubernetes.io/os:linux] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136013824 0} {} 4039076Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031156224 0} {} 3936676Ki BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-12-17 21:23:22 +0000 UTC,LastTransitionTime:2019-12-17 21:23:22 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-12-17 23:42:55 +0000 UTC,LastTransitionTime:2019-10-12 13:47:49 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-12-17 23:42:55 +0000 UTC,LastTransitionTime:2019-10-12 13:47:49 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-12-17 23:42:55 +0000 UTC,LastTransitionTime:2019-10-12 13:47:49 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-12-17 23:42:55 +0000 UTC,LastTransitionTime:2019-10-12 13:48:29 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.2.170,},NodeAddress{Type:Hostname,Address:jerma-node,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:4eaf1504b38c4046a625a134490a5292,SystemUUID:4EAF1504-B38C-4046-A625-A134490A5292,BootID:be260572-5100-4207-9fbc-2294735ff8aa,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.16.1,KubeProxyVersion:v1.16.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:4afb99b4690b418ffc2ceb67e1a17376457e441c1f09ab55447f0aaf992fa646 k8s.gcr.io/etcd:3.4.3],SizeBytes:288426917,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15],SizeBytes:246640776,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/jessie-dnsutils@sha256:ad583e33cb284f7ef046673809b146ec4053cda19b54a85d2b180a86169715eb gcr.io/kubernetes-e2e-test-images/jessie-dnsutils:1.0],SizeBytes:195659796,},ContainerImage{Names:[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2],SizeBytes:148150868,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:adb4d547241d08bbb25a928b7356b9f122c4a2e81abfe47aebdd659097e79dbc k8s.gcr.io/kube-proxy:v1.16.1],SizeBytes:86061020,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/sample-apiserver@sha256:1bafcc6fb1aa990b487850adba9cadc020e42d7905aa8a30481182a477ba24b0 gcr.io/kubernetes-e2e-test-images/sample-apiserver:1.10],SizeBytes:61365829,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2],SizeBytes:49569458,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/google-samples/hello-go-gke@sha256:4ea9cd3d35f81fc91bdebca3fae50c180a1048be0613ad0f811595365040396e gcr.io/google-samples/hello-go-gke:1.0],SizeBytes:11443478,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/dnsutils@sha256:2abeee84efb79c14d731966e034af33bf324d3b26ca28497555511ff094b3ddd 
gcr.io/kubernetes-e2e-test-images/dnsutils:1.1],SizeBytes:9349974,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nonewprivs@sha256:10066e9039219449fe3c81f38fe01928f87914150768ab81b62a468e51fa7411 gcr.io/kubernetes-e2e-test-images/nonewprivs:1.0],SizeBytes:6757579,},ContainerImage{Names:[appropriate/curl@sha256:c8bf5bbec6397465a247c2bb3e589bb77e4f62ff88a027175ecb2d9e4f12c9d7 appropriate/curl:latest],SizeBytes:5496756,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/test-webserver@sha256:7f93d6e32798ff28bc6289254d0c2867fe2c849c8e46edc50f8624734309812e gcr.io/kubernetes-e2e-test-images/test-webserver:1.0],SizeBytes:4732240,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest-user@sha256:17319ca525ee003681fccf7e8c6b1b910ff4f49b653d939ac7f9b6e7c463933d gcr.io/kubernetes-e2e-test-images/mounttest-user:1.0],SizeBytes:1450451,},ContainerImage{Names:[busybox@sha256:1303dbf110c57f3edf68d9f5a16c082ec06c4cf7604831669faf2c712260b5a0 busybox@sha256:b91fb3b63e212bb0d3dd0461025b969705b1df565a8bd454bd5095aa7bea9221],SizeBytes:1219790,},ContainerImage{Names:[busybox@sha256:1828edd60c5efd34b2bf5dd3282ec0cc04d47b2ff9caa0b6d4f07a21d1c08084 busybox:latest],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:fe301db49df08c384001ed752dff6d52b4305a73a7f608f21528048e8a08b51e],SizeBytes:1219782,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Dec 17 23:43:19.285: INFO: 
Logging kubelet events for node jerma-node
Dec 17 23:43:19.292: INFO: 
Logging pods the kubelet thinks are on node jerma-node
Dec 17 23:43:19.303: INFO: weave-net-srfjj started at 2019-12-17 21:23:16 +0000 UTC (0+2 container statuses recorded)
Dec 17 23:43:19.303: INFO: 	Container weave ready: true, restart count 0
Dec 17 23:43:19.303: INFO: 	Container weave-npc ready: true, restart count 0
Dec 17 23:43:19.303: INFO: kube-proxy-jcjl4 started at 2019-10-12 13:47:49 +0000 UTC (0+1 container statuses recorded)
Dec 17 23:43:19.303: INFO: 	Container kube-proxy ready: true, restart count 0
W1217 23:43:19.309288       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 17 23:43:19.348: INFO: 
Latency metrics for node jerma-node
Dec 17 23:43:19.348: INFO: 
Logging node info for node jerma-server-4b75xjbddvit
Dec 17 23:43:19.380: INFO: Node Info: &Node{ObjectMeta:{jerma-server-4b75xjbddvit   /api/v1/nodes/jerma-server-4b75xjbddvit 65247a99-359d-4f89-a587-9b1e2846985b 9161993 0 2019-10-12 13:29:03 +0000 UTC   map[beta.kubernetes.io/arch:amd64 beta.kubernetes.io/os:linux kubernetes.io/arch:amd64 kubernetes.io/hostname:jerma-server-4b75xjbddvit kubernetes.io/os:linux node-role.kubernetes.io/master:] map[kubeadm.alpha.kubernetes.io/cri-socket:/var/run/dockershim.sock node.alpha.kubernetes.io/ttl:0 volumes.kubernetes.io/controller-managed-attach-detach:true] [] []  []},Spec:NodeSpec{PodCIDR:,DoNotUse_ExternalID:,ProviderID:,Unschedulable:false,Taints:[]Taint{},ConfigSource:nil,PodCIDRs:[],},Status:NodeStatus{Capacity:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{20629221376 0} {} 20145724Ki BinarySI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4136026112 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Allocatable:ResourceList{cpu: {{4 0} {} 4 DecimalSI},ephemeral-storage: {{18566299208 0} {} 18566299208 DecimalSI},hugepages-2Mi: {{0 0} {} 0 DecimalSI},memory: {{4031168512 0} {}  BinarySI},pods: {{110 0} {} 110 DecimalSI},},Phase:,Conditions:[]NodeCondition{NodeCondition{Type:NetworkUnavailable,Status:False,LastHeartbeatTime:2019-12-13 09:17:15 +0000 UTC,LastTransitionTime:2019-12-13 09:17:15 +0000 UTC,Reason:WeaveIsUp,Message:Weave pod has set this,},NodeCondition{Type:MemoryPressure,Status:False,LastHeartbeatTime:2019-12-17 23:43:06 +0000 UTC,LastTransitionTime:2019-10-12 13:29:03 +0000 UTC,Reason:KubeletHasSufficientMemory,Message:kubelet has sufficient memory available,},NodeCondition{Type:DiskPressure,Status:False,LastHeartbeatTime:2019-12-17 23:43:06 +0000 UTC,LastTransitionTime:2019-12-13 09:12:52 +0000 UTC,Reason:KubeletHasNoDiskPressure,Message:kubelet has no disk pressure,},NodeCondition{Type:PIDPressure,Status:False,LastHeartbeatTime:2019-12-17 23:43:06 +0000 UTC,LastTransitionTime:2019-10-12 13:29:03 +0000 UTC,Reason:KubeletHasSufficientPID,Message:kubelet has sufficient PID available,},NodeCondition{Type:Ready,Status:True,LastHeartbeatTime:2019-12-17 23:43:06 +0000 UTC,LastTransitionTime:2019-10-12 13:29:53 +0000 UTC,Reason:KubeletReady,Message:kubelet is posting ready status. 
AppArmor enabled,},},Addresses:[]NodeAddress{NodeAddress{Type:InternalIP,Address:10.96.3.35,},NodeAddress{Type:Hostname,Address:jerma-server-4b75xjbddvit,},},DaemonEndpoints:NodeDaemonEndpoints{KubeletEndpoint:DaemonEndpoint{Port:10250,},},NodeInfo:NodeSystemInfo{MachineID:c617e976dd6040539102788a191b2ea4,SystemUUID:C617E976-DD60-4053-9102-788A191B2EA4,BootID:b7792a6d-7352-4851-9822-f2fa8fe18763,KernelVersion:4.15.0-52-generic,OSImage:Ubuntu 18.04.2 LTS,ContainerRuntimeVersion:docker://18.9.7,KubeletVersion:v1.16.1,KubeProxyVersion:v1.16.1,OperatingSystem:linux,Architecture:amd64,},Images:[]ContainerImage{ContainerImage{Names:[gcr.io/google-samples/gb-frontend@sha256:35cb427341429fac3df10ff74600ea73e8ec0754d78f9ce89e0b4f3d70d53ba6 gcr.io/google-samples/gb-frontend:v6],SizeBytes:373099368,},ContainerImage{Names:[k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa k8s.gcr.io/etcd:3.3.15 k8s.gcr.io/etcd:3.3.15-0],SizeBytes:246640776,},ContainerImage{Names:[k8s.gcr.io/kube-apiserver@sha256:80feeaed6c6445ab0ea0c27153354c3cac19b8b028d9b14fc134f947e716e25e k8s.gcr.io/kube-apiserver:v1.16.1],SizeBytes:217083230,},ContainerImage{Names:[k8s.gcr.io/kube-controller-manager@sha256:36259393d3c7cb84a6420db94dccfc75faa8adc9841142467691b7123ab4e8b8 k8s.gcr.io/kube-controller-manager:v1.16.1],SizeBytes:163318238,},ContainerImage{Names:[weaveworks/weave-kube@sha256:8fea236b8e64192c454e459b40381bd48795bd54d791fa684d818afdc12bd100 weaveworks/weave-kube:2.5.2],SizeBytes:148150868,},ContainerImage{Names:[httpd@sha256:addd70e4ee83f3bc9a4c1c7c41e37927ba47faf639312fc936df3afad7926f5a httpd:2.4.39-alpine],SizeBytes:126894770,},ContainerImage{Names:[httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060 httpd:2.4.38-alpine],SizeBytes:123781643,},ContainerImage{Names:[k8s.gcr.io/kube-scheduler@sha256:c51d0cff4c90fd1ed1e0c62509c4bee2035f8815c68ed355d3643f0db3d084a9 k8s.gcr.io/kube-scheduler:v1.16.1],SizeBytes:87269918,},ContainerImage{Names:[k8s.gcr.io/kube-proxy@sha256:adb4d547241d08bbb25a928b7356b9f122c4a2e81abfe47aebdd659097e79dbc k8s.gcr.io/kube-proxy:v1.16.1],SizeBytes:86061020,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:4057a5580c7b59c4fe10d8ab2732c9dec35eea80fd41f7bafc7bd5acc7edf727 gcr.io/kubernetes-e2e-test-images/agnhost:2.6],SizeBytes:57345321,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5 gcr.io/kubernetes-e2e-test-images/agnhost:2.8],SizeBytes:52800335,},ContainerImage{Names:[weaveworks/weave-npc@sha256:56c93a359d54107558720a2859b83cb28a31c70c82a1aaa3dc4704e6c62e3b15 weaveworks/weave-npc:2.5.2],SizeBytes:49569458,},ContainerImage{Names:[k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5 k8s.gcr.io/coredns:1.6.2],SizeBytes:44100963,},ContainerImage{Names:[redis@sha256:50899ea1ceed33fa03232f3ac57578a424faa1742c1ac9c7a7bdb95cdf19b858 redis:5.0.5-alpine],SizeBytes:29331594,},ContainerImage{Names:[nginx@sha256:485b610fefec7ff6c463ced9623314a04ed67e3945b9c08d7e53a47f6d108dc7 nginx:1.14-alpine],SizeBytes:16032814,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/nautilus@sha256:33a732d4c42a266912a5091598a0f07653c9134db4b8d571690d8afd509e0bfc gcr.io/kubernetes-e2e-test-images/nautilus:1.0],SizeBytes:4753501,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/kitten@sha256:bcbc4875c982ab39aa7c4f6acf4a287f604e996d9f34a3fbda8c3d1a7457d1f6 
gcr.io/kubernetes-e2e-test-images/kitten:1.0],SizeBytes:4747037,},ContainerImage{Names:[gcr.io/kubernetes-e2e-test-images/mounttest@sha256:c0bd6f0755f42af09a68c9a47fb993136588a76b3200ec305796b60d629d85d2 gcr.io/kubernetes-e2e-test-images/mounttest:1.0],SizeBytes:1563521,},ContainerImage{Names:[busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796 busybox:1.29],SizeBytes:1154361,},ContainerImage{Names:[k8s.gcr.io/pause@sha256:f78411e19d84a252e53bff71a4407a5686c46983a2c2eeed83929b888179acea k8s.gcr.io/pause:3.1],SizeBytes:742472,},ContainerImage{Names:[kubernetes/pause@sha256:b31bfb4d0213f254d361e0079deaaebefa4f82ba7aa76ef82e90b4935ad5b105 kubernetes/pause:latest],SizeBytes:239840,},},VolumesInUse:[],VolumesAttached:[]AttachedVolume{},Config:nil,},}
Dec 17 23:43:19.382: INFO: 
Logging kubelet events for node jerma-server-4b75xjbddvit
Dec 17 23:43:19.389: INFO: 
Logging pods the kubelet thinks are on node jerma-server-4b75xjbddvit
Dec 17 23:43:19.432: INFO: coredns-5644d7b6d9-9sj58 started at 2019-12-14 15:12:12 +0000 UTC (0+1 container statuses recorded)
Dec 17 23:43:19.432: INFO: 	Container coredns ready: true, restart count 0
Dec 17 23:43:19.432: INFO: kube-scheduler-jerma-server-4b75xjbddvit started at 2019-10-12 13:28:42 +0000 UTC (0+1 container statuses recorded)
Dec 17 23:43:19.432: INFO: 	Container kube-scheduler ready: true, restart count 11
Dec 17 23:43:19.432: INFO: kube-proxy-bdcvr started at 2019-12-13 09:08:20 +0000 UTC (0+1 container statuses recorded)
Dec 17 23:43:19.432: INFO: 	Container kube-proxy ready: true, restart count 0
Dec 17 23:43:19.432: INFO: coredns-5644d7b6d9-xvlxj started at 2019-12-14 16:49:52 +0000 UTC (0+1 container statuses recorded)
Dec 17 23:43:19.432: INFO: 	Container coredns ready: true, restart count 0
Dec 17 23:43:19.432: INFO: etcd-jerma-server-4b75xjbddvit started at 2019-10-12 13:28:37 +0000 UTC (0+1 container statuses recorded)
Dec 17 23:43:19.432: INFO: 	Container etcd ready: true, restart count 1
Dec 17 23:43:19.432: INFO: kube-controller-manager-jerma-server-4b75xjbddvit started at 2019-10-12 13:28:40 +0000 UTC (0+1 container statuses recorded)
Dec 17 23:43:19.432: INFO: 	Container kube-controller-manager ready: true, restart count 8
Dec 17 23:43:19.432: INFO: kube-apiserver-jerma-server-4b75xjbddvit started at 2019-10-12 13:28:38 +0000 UTC (0+1 container statuses recorded)
Dec 17 23:43:19.432: INFO: 	Container kube-apiserver ready: true, restart count 1
Dec 17 23:43:19.432: INFO: coredns-5644d7b6d9-n9kkw started at 2019-11-10 16:39:08 +0000 UTC (0+0 container statuses recorded)
Dec 17 23:43:19.432: INFO: coredns-5644d7b6d9-rqwzj started at 2019-11-10 18:03:38 +0000 UTC (0+0 container statuses recorded)
Dec 17 23:43:19.432: INFO: weave-net-gsjjk started at 2019-12-13 09:16:56 +0000 UTC (0+2 container statuses recorded)
Dec 17 23:43:19.432: INFO: 	Container weave ready: true, restart count 0
Dec 17 23:43:19.432: INFO: 	Container weave-npc ready: true, restart count 0
W1217 23:43:19.439987       8 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled.
Dec 17 23:43:19.493: INFO: 
Latency metrics for node jerma-server-4b75xjbddvit
Dec 17 23:43:19.493: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-5711" for this suite.
Dec 17 23:43:25.560: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:43:25.700: INFO: namespace statefulset-5711 deletion completed in 6.174073269s

• Failure [330.369 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
    Should recreate evicted statefulset [Conformance] [It]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698

    Dec 17 23:43:06.834: Timed out after 300.001s.
    Expected
        <*errors.errorString | 0xc00078f910>: {
            s: "pod ss-0 is not in running phase: Pending",
        }
    to be nil

    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:760
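
A minimal repro sketch of the PodFitsHostPorts conflict behind this failure (pod names are illustrative, not the test's own fixtures): two pods pinned to the same node and requesting the same hostPort cannot coexist, so the second never reaches Running and the kubelet emits the predicate warning seen in the events above.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: port-holder
spec:
  nodeName: jerma-node            # pin to one node; the scheduler is bypassed
  containers:
  - name: webserver
    image: docker.io/library/httpd:2.4.38-alpine
    ports:
    - containerPort: 21017
      hostPort: 21017             # claims port 21017 on the node
---
apiVersion: v1
kind: Pod
metadata:
  name: port-contender
spec:
  nodeName: jerma-node
  containers:
  - name: webserver
    image: docker.io/library/httpd:2.4.38-alpine
    ports:
    - containerPort: 21017
      hostPort: 21017             # same hostPort: kubelet admission rejects it
EOF
kubectl get pod port-contender    # never shows Running while port-holder is up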
------------------------------
SSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] ConfigMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:43:25.702: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename configmap
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name configmap-test-volume-map-d05bd610-4a95-4e20-b759-8764b720be9e
STEP: Creating a pod to test consume configMaps
Dec 17 23:43:25.998: INFO: Waiting up to 5m0s for pod "pod-configmaps-55048d0d-73c8-4897-a873-3b0d5f61341b" in namespace "configmap-9048" to be "success or failure"
Dec 17 23:43:26.038: INFO: Pod "pod-configmaps-55048d0d-73c8-4897-a873-3b0d5f61341b": Phase="Pending", Reason="", readiness=false. Elapsed: 39.795127ms
Dec 17 23:43:28.049: INFO: Pod "pod-configmaps-55048d0d-73c8-4897-a873-3b0d5f61341b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050046066s
Dec 17 23:43:30.057: INFO: Pod "pod-configmaps-55048d0d-73c8-4897-a873-3b0d5f61341b": Phase="Pending", Reason="", readiness=false. Elapsed: 4.058372957s
Dec 17 23:43:32.064: INFO: Pod "pod-configmaps-55048d0d-73c8-4897-a873-3b0d5f61341b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.065733052s
Dec 17 23:43:34.081: INFO: Pod "pod-configmaps-55048d0d-73c8-4897-a873-3b0d5f61341b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.082146081s
Dec 17 23:43:36.092: INFO: Pod "pod-configmaps-55048d0d-73c8-4897-a873-3b0d5f61341b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.093175828s
STEP: Saw pod success
Dec 17 23:43:36.092: INFO: Pod "pod-configmaps-55048d0d-73c8-4897-a873-3b0d5f61341b" satisfied condition "success or failure"
Dec 17 23:43:36.098: INFO: Trying to get logs from node jerma-node pod pod-configmaps-55048d0d-73c8-4897-a873-3b0d5f61341b container configmap-volume-test: <nil>
STEP: delete the pod
Dec 17 23:43:36.356: INFO: Waiting for pod pod-configmaps-55048d0d-73c8-4897-a873-3b0d5f61341b to disappear
Dec 17 23:43:36.401: INFO: Pod pod-configmaps-55048d0d-73c8-4897-a873-3b0d5f61341b no longer exists
[AfterEach] [sig-storage] ConfigMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:43:36.402: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "configmap-9048" for this suite.
Dec 17 23:43:42.442: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:43:42.604: INFO: namespace configmap-9048 deletion completed in 6.188067623s

• [SLOW TEST:16.903 seconds]
[sig-storage] ConfigMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:34
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
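
A sketch of the shape this test exercises, with illustrative names: a ConfigMap key is remapped to a different relative path inside the volume via items, and the container reads it back.

kubectl create configmap cm-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: configmap-volume-test
    image: busybox:1.29
    command: ["cat", "/etc/cm/renamed/data-1"]
    volumeMounts:
    - name: cm-volume
      mountPath: /etc/cm
  volumes:
  - name: cm-volume
    configMap:
      name: cm-demo
      items:
      - key: data-1
        path: renamed/data-1      # the key appears at this path, not under its own name
EOF
kubectl logs cm-volume-demo       # prints value-1 once the pod succeeds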
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] DNS 
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:43:42.607: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a test externalName service
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7303.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7303.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7303.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7303.svc.cluster.local; sleep 1; done

STEP: creating a pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 17 23:43:54.804: INFO: File wheezy_udp@dns-test-service-3.dns-7303.svc.cluster.local from pod  dns-7303/dns-test-8acc9662-7474-4336-8e39-a8c298c8f153 contains '' instead of 'foo.example.com.'
Dec 17 23:43:54.813: INFO: File jessie_udp@dns-test-service-3.dns-7303.svc.cluster.local from pod  dns-7303/dns-test-8acc9662-7474-4336-8e39-a8c298c8f153 contains '' instead of 'foo.example.com.'
Dec 17 23:43:54.813: INFO: Lookups using dns-7303/dns-test-8acc9662-7474-4336-8e39-a8c298c8f153 failed for: [wheezy_udp@dns-test-service-3.dns-7303.svc.cluster.local jessie_udp@dns-test-service-3.dns-7303.svc.cluster.local]

Dec 17 23:43:59.946: INFO: DNS probes using dns-test-8acc9662-7474-4336-8e39-a8c298c8f153 succeeded

STEP: deleting the pod
STEP: changing the externalName to bar.example.com
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7303.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-7303.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7303.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-7303.svc.cluster.local; sleep 1; done

STEP: creating a second pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 17 23:44:18.236: INFO: File wheezy_udp@dns-test-service-3.dns-7303.svc.cluster.local from pod  dns-7303/dns-test-1d855cdf-4de7-400c-818f-9bf10869c72e contains '' instead of 'bar.example.com.'
Dec 17 23:44:18.246: INFO: File jessie_udp@dns-test-service-3.dns-7303.svc.cluster.local from pod  dns-7303/dns-test-1d855cdf-4de7-400c-818f-9bf10869c72e contains '' instead of 'bar.example.com.'
Dec 17 23:44:18.247: INFO: Lookups using dns-7303/dns-test-1d855cdf-4de7-400c-818f-9bf10869c72e failed for: [wheezy_udp@dns-test-service-3.dns-7303.svc.cluster.local jessie_udp@dns-test-service-3.dns-7303.svc.cluster.local]

Dec 17 23:44:23.259: INFO: File wheezy_udp@dns-test-service-3.dns-7303.svc.cluster.local from pod  dns-7303/dns-test-1d855cdf-4de7-400c-818f-9bf10869c72e contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 17 23:44:23.265: INFO: File jessie_udp@dns-test-service-3.dns-7303.svc.cluster.local from pod  dns-7303/dns-test-1d855cdf-4de7-400c-818f-9bf10869c72e contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 17 23:44:23.265: INFO: Lookups using dns-7303/dns-test-1d855cdf-4de7-400c-818f-9bf10869c72e failed for: [wheezy_udp@dns-test-service-3.dns-7303.svc.cluster.local jessie_udp@dns-test-service-3.dns-7303.svc.cluster.local]

Dec 17 23:44:28.260: INFO: File wheezy_udp@dns-test-service-3.dns-7303.svc.cluster.local from pod  dns-7303/dns-test-1d855cdf-4de7-400c-818f-9bf10869c72e contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 17 23:44:28.268: INFO: File jessie_udp@dns-test-service-3.dns-7303.svc.cluster.local from pod  dns-7303/dns-test-1d855cdf-4de7-400c-818f-9bf10869c72e contains 'foo.example.com.
' instead of 'bar.example.com.'
Dec 17 23:44:28.268: INFO: Lookups using dns-7303/dns-test-1d855cdf-4de7-400c-818f-9bf10869c72e failed for: [wheezy_udp@dns-test-service-3.dns-7303.svc.cluster.local jessie_udp@dns-test-service-3.dns-7303.svc.cluster.local]

Dec 17 23:44:33.274: INFO: DNS probes using dns-test-1d855cdf-4de7-400c-818f-9bf10869c72e succeeded

STEP: deleting the pod
STEP: changing the service to type=ClusterIP
STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7303.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-7303.svc.cluster.local; sleep 1; done

STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-7303.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-7303.svc.cluster.local; sleep 1; done

STEP: creating a third pod to probe DNS
STEP: submitting the pod to kubernetes
STEP: retrieving the pod
STEP: looking for the results for each expected name from probers
Dec 17 23:44:49.737: INFO: File wheezy_udp@dns-test-service-3.dns-7303.svc.cluster.local from pod  dns-7303/dns-test-97289574-21eb-4853-b5d6-619e3b408485 contains '' instead of '10.102.123.52'
Dec 17 23:44:49.743: INFO: File jessie_udp@dns-test-service-3.dns-7303.svc.cluster.local from pod  dns-7303/dns-test-97289574-21eb-4853-b5d6-619e3b408485 contains '' instead of '10.102.123.52'
Dec 17 23:44:49.743: INFO: Lookups using dns-7303/dns-test-97289574-21eb-4853-b5d6-619e3b408485 failed for: [wheezy_udp@dns-test-service-3.dns-7303.svc.cluster.local jessie_udp@dns-test-service-3.dns-7303.svc.cluster.local]

Dec 17 23:44:54.761: INFO: DNS probes using dns-test-97289574-21eb-4853-b5d6-619e3b408485 succeeded

STEP: deleting the pod
STEP: deleting the test externalName service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:44:54.920: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-7303" for this suite.
Dec 17 23:45:03.051: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:45:03.215: INFO: namespace dns-7303 deletion completed in 8.232830358s

• [SLOW TEST:80.608 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should provide DNS for ExternalName services [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
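
The service under test is an ExternalName service: the dig loops above follow its CNAME as spec.externalName changes, then switch to an A lookup once the service becomes type ClusterIP. A sketch with illustrative names:

kubectl create namespace dns-demo
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: dns-test-service-3
  namespace: dns-demo
spec:
  type: ExternalName
  externalName: foo.example.com   # returned to resolvers as a CNAME target
EOF
# From a pod that uses the cluster DNS:
dig +short dns-test-service-3.dns-demo.svc.cluster.local CNAME
# expect foo.example.com.; after patching externalName, bar.example.com.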
------------------------------
SSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:45:03.216: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Counting existing ResourceQuota
STEP: Creating a ResourceQuota
STEP: Ensuring resource quota status is calculated
STEP: Creating a ReplicaSet
STEP: Ensuring resource quota status captures replicaset creation
STEP: Deleting a ReplicaSet
STEP: Ensuring resource quota status released usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:45:14.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-3372" for this suite.
Dec 17 23:45:20.463: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:45:20.651: INFO: namespace resourcequota-3372 deletion completed in 6.246061309s

• [SLOW TEST:17.436 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should create a ResourceQuota and capture the life of a replica set. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
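
A sketch of the kind of object-count quota this test exercises (names are illustrative): a ResourceQuota counting ReplicaSets through the generic count/<resource>.<group> syntax, with usage rising on creation and released on deletion.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: ResourceQuota
metadata:
  name: quota-replicasets
spec:
  hard:
    count/replicasets.apps: "2"   # cap on ReplicaSets in this namespace
EOF
kubectl create deployment quota-demo --image=nginx:1.14-alpine   # owns one ReplicaSet
kubectl get resourcequota quota-replicasets -o jsonpath='{.status.used}'
# expect count/replicasets.apps to read "1"; deleting the deployment frees it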
------------------------------
SSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes 
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:45:20.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test emptydir 0777 on node default medium
Dec 17 23:45:20.822: INFO: Waiting up to 5m0s for pod "pod-36a6dea5-d4ba-4601-a898-04b68fe81dd0" in namespace "emptydir-6191" to be "success or failure"
Dec 17 23:45:20.970: INFO: Pod "pod-36a6dea5-d4ba-4601-a898-04b68fe81dd0": Phase="Pending", Reason="", readiness=false. Elapsed: 146.746253ms
Dec 17 23:45:22.977: INFO: Pod "pod-36a6dea5-d4ba-4601-a898-04b68fe81dd0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.154345131s
Dec 17 23:45:25.037: INFO: Pod "pod-36a6dea5-d4ba-4601-a898-04b68fe81dd0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.213805328s
Dec 17 23:45:27.057: INFO: Pod "pod-36a6dea5-d4ba-4601-a898-04b68fe81dd0": Phase="Pending", Reason="", readiness=false. Elapsed: 6.233579074s
Dec 17 23:45:29.063: INFO: Pod "pod-36a6dea5-d4ba-4601-a898-04b68fe81dd0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.240215225s
STEP: Saw pod success
Dec 17 23:45:29.063: INFO: Pod "pod-36a6dea5-d4ba-4601-a898-04b68fe81dd0" satisfied condition "success or failure"
Dec 17 23:45:29.066: INFO: Trying to get logs from node jerma-node pod pod-36a6dea5-d4ba-4601-a898-04b68fe81dd0 container test-container: <nil>
STEP: delete the pod
Dec 17 23:45:29.126: INFO: Waiting for pod pod-36a6dea5-d4ba-4601-a898-04b68fe81dd0 to disappear
Dec 17 23:45:29.159: INFO: Pod pod-36a6dea5-d4ba-4601-a898-04b68fe81dd0 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:45:29.160: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-6191" for this suite.
Dec 17 23:45:35.195: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:45:35.339: INFO: namespace emptydir-6191 deletion completed in 6.171238586s

• [SLOW TEST:14.686 seconds]
[sig-storage] EmptyDir volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:41
  should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
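
A sketch of the emptyDir case being verified, with illustrative names: a non-root container mounts an emptyDir on the node's default medium, the directory comes up world-writable (0777), and so the write succeeds.

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001               # run the container as a non-root user
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "ls -ld /test-volume && touch /test-volume/ok"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}                  # default medium: backed by node disk
EOF
kubectl logs emptydir-demo        # expect drwxrwxrwx on /test-volume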
------------------------------
SSS
------------------------------
[sig-storage] Downward API volume 
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:45:35.340: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 23:45:35.533: INFO: Waiting up to 5m0s for pod "downwardapi-volume-a03df190-13b7-44a1-bfa6-4666f66bf713" in namespace "downward-api-4735" to be "success or failure"
Dec 17 23:45:35.596: INFO: Pod "downwardapi-volume-a03df190-13b7-44a1-bfa6-4666f66bf713": Phase="Pending", Reason="", readiness=false. Elapsed: 62.955454ms
Dec 17 23:45:37.649: INFO: Pod "downwardapi-volume-a03df190-13b7-44a1-bfa6-4666f66bf713": Phase="Pending", Reason="", readiness=false. Elapsed: 2.115904681s
Dec 17 23:45:39.660: INFO: Pod "downwardapi-volume-a03df190-13b7-44a1-bfa6-4666f66bf713": Phase="Pending", Reason="", readiness=false. Elapsed: 4.126878723s
Dec 17 23:45:41.672: INFO: Pod "downwardapi-volume-a03df190-13b7-44a1-bfa6-4666f66bf713": Phase="Pending", Reason="", readiness=false. Elapsed: 6.138802523s
Dec 17 23:45:43.692: INFO: Pod "downwardapi-volume-a03df190-13b7-44a1-bfa6-4666f66bf713": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.158822854s
STEP: Saw pod success
Dec 17 23:45:43.692: INFO: Pod "downwardapi-volume-a03df190-13b7-44a1-bfa6-4666f66bf713" satisfied condition "success or failure"
Dec 17 23:45:43.700: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-a03df190-13b7-44a1-bfa6-4666f66bf713 container client-container: <nil>
STEP: delete the pod
Dec 17 23:45:43.805: INFO: Waiting for pod downwardapi-volume-a03df190-13b7-44a1-bfa6-4666f66bf713 to disappear
Dec 17 23:45:43.814: INFO: Pod downwardapi-volume-a03df190-13b7-44a1-bfa6-4666f66bf713 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:45:43.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-4735" for this suite.
Dec 17 23:45:49.884: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:45:49.988: INFO: namespace downward-api-4735 deletion completed in 6.157494435s

• [SLOW TEST:14.648 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
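
The point of this test: when a container declares no memory limit, the downward API falls back to reporting the node's allocatable memory. A sketch with illustrative names (the byte count in the final comment is this run's jerma-node allocatable, for orientation only):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-mem-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["cat", "/etc/podinfo/mem_limit"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: mem_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.memory   # unset on the container, so node allocatable is reported
EOF
kubectl logs downward-mem-demo      # e.g. 4031156224 on this run's jerma-node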
------------------------------
SSSS
------------------------------
[sig-storage] Downward API volume 
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:45:49.988: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 23:45:50.159: INFO: Waiting up to 5m0s for pod "downwardapi-volume-35306d52-7210-409d-956d-c6fe3efe363e" in namespace "downward-api-9220" to be "success or failure"
Dec 17 23:45:50.216: INFO: Pod "downwardapi-volume-35306d52-7210-409d-956d-c6fe3efe363e": Phase="Pending", Reason="", readiness=false. Elapsed: 57.05093ms
Dec 17 23:45:52.227: INFO: Pod "downwardapi-volume-35306d52-7210-409d-956d-c6fe3efe363e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067293362s
Dec 17 23:45:54.234: INFO: Pod "downwardapi-volume-35306d52-7210-409d-956d-c6fe3efe363e": Phase="Pending", Reason="", readiness=false. Elapsed: 4.074270737s
Dec 17 23:45:56.247: INFO: Pod "downwardapi-volume-35306d52-7210-409d-956d-c6fe3efe363e": Phase="Running", Reason="", readiness=true. Elapsed: 6.087173281s
Dec 17 23:45:58.251: INFO: Pod "downwardapi-volume-35306d52-7210-409d-956d-c6fe3efe363e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092100704s
STEP: Saw pod success
Dec 17 23:45:58.252: INFO: Pod "downwardapi-volume-35306d52-7210-409d-956d-c6fe3efe363e" satisfied condition "success or failure"
Dec 17 23:45:58.254: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-35306d52-7210-409d-956d-c6fe3efe363e container client-container: <nil>
STEP: delete the pod
Dec 17 23:45:58.489: INFO: Waiting for pod downwardapi-volume-35306d52-7210-409d-956d-c6fe3efe363e to disappear
Dec 17 23:45:58.497: INFO: Pod downwardapi-volume-35306d52-7210-409d-956d-c6fe3efe363e no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:45:58.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-9220" for this suite.
Dec 17 23:46:04.537: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:46:04.673: INFO: namespace downward-api-9220 deletion completed in 6.166860177s

• [SLOW TEST:14.685 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
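
defaultMode on a downwardAPI volume sets the permission bits of every file it projects; a sketch checking that (names illustrative; stat -L follows the symlink the kubelet creates for each projected file):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downward-mode-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "stat -L -c %a /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      defaultMode: 0400             # r-------- on each projected file
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
EOF
kubectl logs downward-mode-demo     # expect 400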
------------------------------
SSSSSSSS
------------------------------
[sig-api-machinery] Secrets 
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:46:04.674: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating secret secrets-3567/secret-test-2ffb0514-8061-4eec-af7d-b7733a133d1b
STEP: Creating a pod to test consume secrets
Dec 17 23:46:04.753: INFO: Waiting up to 5m0s for pod "pod-configmaps-10c095df-20d1-40be-8dfd-d38477d51c2c" in namespace "secrets-3567" to be "success or failure"
Dec 17 23:46:04.791: INFO: Pod "pod-configmaps-10c095df-20d1-40be-8dfd-d38477d51c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 37.335369ms
Dec 17 23:46:06.800: INFO: Pod "pod-configmaps-10c095df-20d1-40be-8dfd-d38477d51c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046071859s
Dec 17 23:46:08.834: INFO: Pod "pod-configmaps-10c095df-20d1-40be-8dfd-d38477d51c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.080427954s
Dec 17 23:46:10.841: INFO: Pod "pod-configmaps-10c095df-20d1-40be-8dfd-d38477d51c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 6.087469106s
Dec 17 23:46:12.853: INFO: Pod "pod-configmaps-10c095df-20d1-40be-8dfd-d38477d51c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 8.099229215s
Dec 17 23:46:14.864: INFO: Pod "pod-configmaps-10c095df-20d1-40be-8dfd-d38477d51c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 10.110460538s
Dec 17 23:46:16.931: INFO: Pod "pod-configmaps-10c095df-20d1-40be-8dfd-d38477d51c2c": Phase="Pending", Reason="", readiness=false. Elapsed: 12.177133931s
Dec 17 23:46:18.952: INFO: Pod "pod-configmaps-10c095df-20d1-40be-8dfd-d38477d51c2c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 14.198189109s
STEP: Saw pod success
Dec 17 23:46:18.952: INFO: Pod "pod-configmaps-10c095df-20d1-40be-8dfd-d38477d51c2c" satisfied condition "success or failure"
Dec 17 23:46:18.962: INFO: Trying to get logs from node jerma-node pod pod-configmaps-10c095df-20d1-40be-8dfd-d38477d51c2c container env-test: <nil>
STEP: delete the pod
Dec 17 23:46:19.132: INFO: Waiting for pod pod-configmaps-10c095df-20d1-40be-8dfd-d38477d51c2c to disappear
Dec 17 23:46:19.139: INFO: Pod pod-configmaps-10c095df-20d1-40be-8dfd-d38477d51c2c no longer exists
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:46:19.139: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-3567" for this suite.
Dec 17 23:46:25.241: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:46:25.394: INFO: namespace secrets-3567 deletion completed in 6.185656405s

• [SLOW TEST:20.721 seconds]
[sig-api-machinery] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:32
  should be consumable via the environment [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
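
A sketch of the env-consumption pattern this test covers (names are illustrative): a Secret key is injected as an environment variable via secretKeyRef and echoed back.

kubectl create secret generic secret-demo --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-demo
spec:
  restartPolicy: Never
  containers:
  - name: env-test
    image: busybox:1.29
    command: ["sh", "-c", "echo $SECRET_DATA"]
    env:
    - name: SECRET_DATA
      valueFrom:
        secretKeyRef:
          name: secret-demo
          key: data-1               # the Secret key to expose
EOF
kubectl logs secret-env-demo        # prints value-1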
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:46:25.395: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name projected-configmap-test-volume-map-d304a36a-1a86-4a48-b748-31bd6e2c97cd
STEP: Creating a pod to test consume configMaps
Dec 17 23:46:25.595: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-4326a2a1-d697-4db5-97d0-2c0cc7bc885f" in namespace "projected-2425" to be "success or failure"
Dec 17 23:46:25.616: INFO: Pod "pod-projected-configmaps-4326a2a1-d697-4db5-97d0-2c0cc7bc885f": Phase="Pending", Reason="", readiness=false. Elapsed: 21.212593ms
Dec 17 23:46:27.624: INFO: Pod "pod-projected-configmaps-4326a2a1-d697-4db5-97d0-2c0cc7bc885f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029663701s
Dec 17 23:46:29.638: INFO: Pod "pod-projected-configmaps-4326a2a1-d697-4db5-97d0-2c0cc7bc885f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.043051101s
Dec 17 23:46:31.647: INFO: Pod "pod-projected-configmaps-4326a2a1-d697-4db5-97d0-2c0cc7bc885f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.052559951s
Dec 17 23:46:33.656: INFO: Pod "pod-projected-configmaps-4326a2a1-d697-4db5-97d0-2c0cc7bc885f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.061542559s
STEP: Saw pod success
Dec 17 23:46:33.657: INFO: Pod "pod-projected-configmaps-4326a2a1-d697-4db5-97d0-2c0cc7bc885f" satisfied condition "success or failure"
Dec 17 23:46:33.661: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-4326a2a1-d697-4db5-97d0-2c0cc7bc885f container projected-configmap-volume-test: <nil>
STEP: delete the pod
Dec 17 23:46:33.734: INFO: Waiting for pod pod-projected-configmaps-4326a2a1-d697-4db5-97d0-2c0cc7bc885f to disappear
Dec 17 23:46:33.847: INFO: Pod pod-projected-configmaps-4326a2a1-d697-4db5-97d0-2c0cc7bc885f no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:46:33.848: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2425" for this suite.
Dec 17 23:46:39.890: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:46:40.017: INFO: namespace projected-2425 deletion completed in 6.156155903s

• [SLOW TEST:14.622 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
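
A sketch of a projected configMap with both a key-to-path mapping and a per-item mode, the two knobs this test sets (names are illustrative); the pod is kept running with sleep so the mounted file can be inspected:

kubectl create configmap projected-cm-src --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: projected-cm-demo
spec:
  containers:
  - name: projected-configmap-volume-test
    image: busybox:1.29
    command: ["sleep", "3600"]
    volumeMounts:
    - name: projected-cm
      mountPath: /etc/projected
  volumes:
  - name: projected-cm
    projected:
      sources:
      - configMap:
          name: projected-cm-src
          items:
          - key: data-1
            path: renamed/data-1
            mode: 0400              # per-item mode overrides the volume defaultMode
EOF
kubectl exec projected-cm-demo -- sh -c \
  'cat /etc/projected/renamed/data-1; echo; stat -L -c %a /etc/projected/renamed/data-1'
# expect value-1 then 400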
------------------------------
SSSSSSSSSS
------------------------------
[sig-storage] Projected configMap 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:46:40.019: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating configMap with name projected-configmap-test-volume-map-545ce8b1-9666-4e09-b451-13787d04df5b
STEP: Creating a pod to test consume configMaps
Dec 17 23:46:40.161: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-3724c72e-c5fb-492a-bde8-eca41c95c3ee" in namespace "projected-8129" to be "success or failure"
Dec 17 23:46:40.177: INFO: Pod "pod-projected-configmaps-3724c72e-c5fb-492a-bde8-eca41c95c3ee": Phase="Pending", Reason="", readiness=false. Elapsed: 15.698359ms
Dec 17 23:46:42.192: INFO: Pod "pod-projected-configmaps-3724c72e-c5fb-492a-bde8-eca41c95c3ee": Phase="Pending", Reason="", readiness=false. Elapsed: 2.030773109s
Dec 17 23:46:44.216: INFO: Pod "pod-projected-configmaps-3724c72e-c5fb-492a-bde8-eca41c95c3ee": Phase="Pending", Reason="", readiness=false. Elapsed: 4.055084931s
Dec 17 23:46:46.226: INFO: Pod "pod-projected-configmaps-3724c72e-c5fb-492a-bde8-eca41c95c3ee": Phase="Pending", Reason="", readiness=false. Elapsed: 6.06537922s
Dec 17 23:46:48.234: INFO: Pod "pod-projected-configmaps-3724c72e-c5fb-492a-bde8-eca41c95c3ee": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.072893547s
STEP: Saw pod success
Dec 17 23:46:48.234: INFO: Pod "pod-projected-configmaps-3724c72e-c5fb-492a-bde8-eca41c95c3ee" satisfied condition "success or failure"
Dec 17 23:46:48.241: INFO: Trying to get logs from node jerma-node pod pod-projected-configmaps-3724c72e-c5fb-492a-bde8-eca41c95c3ee container projected-configmap-volume-test: <nil>
STEP: delete the pod
Dec 17 23:46:48.288: INFO: Waiting for pod pod-projected-configmaps-3724c72e-c5fb-492a-bde8-eca41c95c3ee to disappear
Dec 17 23:46:48.349: INFO: Pod pod-projected-configmaps-3724c72e-c5fb-492a-bde8-eca41c95c3ee no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:46:48.349: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-8129" for this suite.
Dec 17 23:46:54.462: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:46:54.627: INFO: namespace projected-8129 deletion completed in 6.272685629s

• [SLOW TEST:14.608 seconds]
[sig-storage] Projected configMap
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
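
Same mapping as above but with no per-item mode: the file then falls back to the projected volume's defaultMode, 0644 unless overridden. Against a pod like the projected-cm-demo sketch after the previous test, dropping the mode line makes the same check report 644:

kubectl exec projected-cm-demo -- stat -L -c %a /etc/projected/renamed/data-1
# expect 644 once the per-item mode is removed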
------------------------------
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] ResourceQuota 
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:46:54.630: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:47:10.999: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-5301" for this suite.
Dec 17 23:47:17.036: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:47:17.203: INFO: namespace resourcequota-5301 deletion completed in 6.198689893s

• [SLOW TEST:22.573 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
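The two quotas this spec creates can be reproduced with kubectl create quota and its --scopes flag. A pod counts against the Terminating scope when activeDeadlineSeconds is set, and against NotTerminating otherwise; names and limits below are illustrative:

kubectl create quota quota-terminating --hard=pods=1 --scopes=Terminating
kubectl create quota quota-not-terminating --hard=pods=1 --scopes=NotTerminating
# A "terminating" pod: activeDeadlineSeconds is set, so only quota-terminating captures it.
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: terminating-pod
spec:
  activeDeadlineSeconds: 3600
  containers:
  - name: pause
    image: k8s.gcr.io/pause:3.1
EOF
kubectl describe quota quota-terminating       # Used should show pods: 1 once status is recalculated
kubectl describe quota quota-not-terminating   # Used should stay at pods: 0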
SSSS
------------------------------
[sig-storage] Downward API volume 
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:47:17.204: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 23:47:17.387: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1a72df50-61f8-4964-bfd2-fe2bd912f765" in namespace "downward-api-1786" to be "success or failure"
Dec 17 23:47:17.396: INFO: Pod "downwardapi-volume-1a72df50-61f8-4964-bfd2-fe2bd912f765": Phase="Pending", Reason="", readiness=false. Elapsed: 9.150551ms
Dec 17 23:47:19.407: INFO: Pod "downwardapi-volume-1a72df50-61f8-4964-bfd2-fe2bd912f765": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019431141s
Dec 17 23:47:21.415: INFO: Pod "downwardapi-volume-1a72df50-61f8-4964-bfd2-fe2bd912f765": Phase="Pending", Reason="", readiness=false. Elapsed: 4.02802069s
Dec 17 23:47:23.424: INFO: Pod "downwardapi-volume-1a72df50-61f8-4964-bfd2-fe2bd912f765": Phase="Pending", Reason="", readiness=false. Elapsed: 6.036446889s
Dec 17 23:47:25.432: INFO: Pod "downwardapi-volume-1a72df50-61f8-4964-bfd2-fe2bd912f765": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.045022149s
STEP: Saw pod success
Dec 17 23:47:25.432: INFO: Pod "downwardapi-volume-1a72df50-61f8-4964-bfd2-fe2bd912f765" satisfied condition "success or failure"
Dec 17 23:47:25.436: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-1a72df50-61f8-4964-bfd2-fe2bd912f765 container client-container: 
STEP: delete the pod
Dec 17 23:47:25.471: INFO: Waiting for pod downwardapi-volume-1a72df50-61f8-4964-bfd2-fe2bd912f765 to disappear
Dec 17 23:47:25.478: INFO: Pod downwardapi-volume-1a72df50-61f8-4964-bfd2-fe2bd912f765 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:47:25.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-1786" for this suite.
Dec 17 23:47:31.739: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:47:31.925: INFO: namespace downward-api-1786 deletion completed in 6.37751221s

• [SLOW TEST:14.721 seconds]
[sig-storage] Downward API volume
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35
  should provide container's cpu request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
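What "downward API volume plugin" means here: the container's own CPU request is projected into a file via a resourceFieldRef. A hedged minimal equivalent of the test pod (file name, divisor, request value, and image are assumptions):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podinfo/cpu_request"]
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m      # report the request in millicores, so the file contains 250
EOF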
SSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:47:31.926: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6931
STEP: Creating an active service to test reachability when its FQDN is referred to as the externalName of another service
STEP: creating service externalsvc in namespace services-6931
STEP: creating replication controller externalsvc in namespace services-6931
I1217 23:47:32.416421       8 runners.go:184] Created replication controller with name: externalsvc, namespace: services-6931, replica count: 2
I1217 23:47:35.468948       8 runners.go:184] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 23:47:38.470763       8 runners.go:184] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 23:47:41.472258       8 runners.go:184] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 23:47:44.473735       8 runners.go:184] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 23:47:47.474622       8 runners.go:184] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
STEP: changing the ClusterIP service to type=ExternalName
Dec 17 23:47:47.543: INFO: Creating new exec pod
Dec 17 23:47:55.619: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6931 execpodjb5n6 -- /bin/sh -x -c nslookup clusterip-service'
Dec 17 23:47:56.109: INFO: stderr: "+ nslookup clusterip-service\n"
Dec 17 23:47:56.109: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6931.svc.cluster.local\tcanonical name = externalsvc.services-6931.svc.cluster.local.\nName:\texternalsvc.services-6931.svc.cluster.local\nAddress: 10.106.197.28\n\n"
STEP: deleting ReplicationController externalsvc in namespace services-6931, will wait for the garbage collector to delete the pods
Dec 17 23:47:56.218: INFO: Deleting ReplicationController externalsvc took: 53.333389ms
Dec 17 23:47:56.521: INFO: Terminating ReplicationController externalsvc pods took: 302.841504ms
Dec 17 23:48:05.382: INFO: Cleaning up the ClusterIP to ExternalName test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:48:05.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-6931" for this suite.
Dec 17 23:48:11.447: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:48:11.532: INFO: namespace services-6931 deletion completed in 6.111138199s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95

• [SLOW TEST:39.606 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ClusterIP to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
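The type flip itself is a one-field spec update. A hedged by-hand equivalent, using the service names from the run above (with a JSON merge patch, "clusterIP": null clears the allocated IP, which the API requires when switching to ExternalName):

kubectl -n services-6931 patch service clusterip-service --type=merge \
  -p '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-6931.svc.cluster.local","clusterIP":null}}'
# After the patch, cluster DNS answers with a CNAME instead of an A record:
kubectl -n services-6931 exec execpodjb5n6 -- nslookup clusterip-service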
SSSSSSS
------------------------------
[sig-storage] Secrets 
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:48:11.532: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating secret with name secret-test-map-877df953-bab3-4b1b-a8a1-e5903777b216
STEP: Creating a pod to test consume secrets
Dec 17 23:48:11.683: INFO: Waiting up to 5m0s for pod "pod-secrets-b55e9233-b89a-47c7-b6a8-65cd42ea4ca8" in namespace "secrets-6156" to be "success or failure"
Dec 17 23:48:11.719: INFO: Pod "pod-secrets-b55e9233-b89a-47c7-b6a8-65cd42ea4ca8": Phase="Pending", Reason="", readiness=false. Elapsed: 35.825727ms
Dec 17 23:48:13.729: INFO: Pod "pod-secrets-b55e9233-b89a-47c7-b6a8-65cd42ea4ca8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.045164477s
Dec 17 23:48:15.734: INFO: Pod "pod-secrets-b55e9233-b89a-47c7-b6a8-65cd42ea4ca8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05062292s
Dec 17 23:48:17.742: INFO: Pod "pod-secrets-b55e9233-b89a-47c7-b6a8-65cd42ea4ca8": Phase="Pending", Reason="", readiness=false. Elapsed: 6.058134327s
Dec 17 23:48:19.748: INFO: Pod "pod-secrets-b55e9233-b89a-47c7-b6a8-65cd42ea4ca8": Phase="Pending", Reason="", readiness=false. Elapsed: 8.064773432s
Dec 17 23:48:21.757: INFO: Pod "pod-secrets-b55e9233-b89a-47c7-b6a8-65cd42ea4ca8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 10.073647609s
STEP: Saw pod success
Dec 17 23:48:21.757: INFO: Pod "pod-secrets-b55e9233-b89a-47c7-b6a8-65cd42ea4ca8" satisfied condition "success or failure"
Dec 17 23:48:21.762: INFO: Trying to get logs from node jerma-node pod pod-secrets-b55e9233-b89a-47c7-b6a8-65cd42ea4ca8 container secret-volume-test: 
STEP: delete the pod
Dec 17 23:48:22.055: INFO: Waiting for pod pod-secrets-b55e9233-b89a-47c7-b6a8-65cd42ea4ca8 to disappear
Dec 17 23:48:22.074: INFO: Pod pod-secrets-b55e9233-b89a-47c7-b6a8-65cd42ea4ca8 no longer exists
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:48:22.074: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-6156" for this suite.
Dec 17 23:48:28.151: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:48:28.255: INFO: namespace secrets-6156 deletion completed in 6.170803734s

• [SLOW TEST:16.723 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:35
  should be consumable from pods in volume with mappings [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
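Same pattern as the projected-ConfigMap spec earlier in this run, but with a Secret volume: items remap the secret key to a custom file path. Names below are illustrative:

kubectl create secret generic secret-test-map --from-literal=data-1=value-1
cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test-map
      items:
      - key: data-1
        path: new-path-data-1    # key data-1 appears as this file inside the mount
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/secret-volume/new-path-data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
EOF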
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-network] Services 
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:48:28.256: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:91
[It] should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating a service externalname-service with the type=ExternalName in namespace services-1436
STEP: changing the ExternalName service to type=NodePort
STEP: creating replication controller externalname-service in namespace services-1436
I1217 23:48:28.498343       8 runners.go:184] Created replication controller with name: externalname-service, namespace: services-1436, replica count: 2
I1217 23:48:31.550184       8 runners.go:184] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 23:48:34.551398       8 runners.go:184] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 23:48:37.551994       8 runners.go:184] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
I1217 23:48:40.553510       8 runners.go:184] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady 
Dec 17 23:48:40.554: INFO: Creating new exec pod
Dec 17 23:48:49.631: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1436 execpodwpckx -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80'
Dec 17 23:48:50.036: INFO: stderr: "+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\n"
Dec 17 23:48:50.036: INFO: stdout: ""
Dec 17 23:48:50.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1436 execpodwpckx -- /bin/sh -x -c nc -zv -t -w 2 10.100.97.132 80'
Dec 17 23:48:50.389: INFO: stderr: "+ nc -zv -t -w 2 10.100.97.132 80\nConnection to 10.100.97.132 80 port [tcp/http] succeeded!\n"
Dec 17 23:48:50.390: INFO: stdout: ""
Dec 17 23:48:50.390: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1436 execpodwpckx -- /bin/sh -x -c nc -zv -t -w 2 10.96.2.170 31176'
Dec 17 23:48:50.785: INFO: stderr: "+ nc -zv -t -w 2 10.96.2.170 31176\nConnection to 10.96.2.170 31176 port [tcp/31176] succeeded!\n"
Dec 17 23:48:50.786: INFO: stdout: ""
Dec 17 23:48:50.787: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1436 execpodwpckx -- /bin/sh -x -c nc -zv -t -w 2 10.96.3.35 31176'
Dec 17 23:48:51.145: INFO: stderr: "+ nc -zv -t -w 2 10.96.3.35 31176\nConnection to 10.96.3.35 31176 port [tcp/31176] succeeded!\n"
Dec 17 23:48:51.145: INFO: stdout: ""
Dec 17 23:48:51.145: INFO: Cleaning up the ExternalName to NodePort test service
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:48:51.198: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-1436" for this suite.
Dec 17 23:48:57.264: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:48:57.436: INFO: namespace services-1436 deletion completed in 6.229183407s
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:95

• [SLOW TEST:29.181 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from ExternalName to NodePort [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
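The reverse direction is again a one-field update; once the service is a NodePort, every node IP answers on the allocated port (the nc probes above hit both nodes on 31176). A hedged equivalent of the update and check:

kubectl -n services-1436 patch service externalname-service --type=merge \
  -p '{"spec":{"type":"NodePort","externalName":null}}'
NODE_PORT=$(kubectl -n services-1436 get service externalname-service \
  -o jsonpath='{.spec.ports[0].nodePort}')
kubectl -n services-1436 exec execpodwpckx -- nc -zv -t -w 2 10.96.2.170 "$NODE_PORT"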
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] 
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:48:57.438: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-publish-openapi
STEP: Waiting for a default service account to be provisioned in namespace
[It] works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 23:48:57.597: INFO: >>> kubeConfig: /root/.kube/config
STEP: client-side validation (kubectl create and apply) allows request with known and required properties
Dec 17 23:49:01.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6491 create -f -'
Dec 17 23:49:05.526: INFO: stderr: ""
Dec 17 23:49:05.527: INFO: stdout: "e2e-test-crd-publish-openapi-4019-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Dec 17 23:49:05.527: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6491 delete e2e-test-crd-publish-openapi-4019-crds test-foo'
Dec 17 23:49:05.766: INFO: stderr: ""
Dec 17 23:49:05.767: INFO: stdout: "e2e-test-crd-publish-openapi-4019-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
Dec 17 23:49:05.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6491 apply -f -'
Dec 17 23:49:06.311: INFO: stderr: ""
Dec 17 23:49:06.311: INFO: stdout: "e2e-test-crd-publish-openapi-4019-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n"
Dec 17 23:49:06.312: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6491 delete e2e-test-crd-publish-openapi-4019-crds test-foo'
Dec 17 23:49:06.484: INFO: stderr: ""
Dec 17 23:49:06.484: INFO: stdout: "e2e-test-crd-publish-openapi-4019-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n"
STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema
Dec 17 23:49:06.485: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6491 create -f -'
Dec 17 23:49:06.809: INFO: rc: 1
Dec 17 23:49:06.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6491 apply -f -'
Dec 17 23:49:07.107: INFO: rc: 1
STEP: client-side validation (kubectl create and apply) rejects request without required properties
Dec 17 23:49:07.108: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6491 create -f -'
Dec 17 23:49:07.545: INFO: rc: 1
Dec 17 23:49:07.547: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-6491 apply -f -'
Dec 17 23:49:07.928: INFO: rc: 1
STEP: kubectl explain works to explain CR properties
Dec 17 23:49:07.929: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4019-crds'
Dec 17 23:49:08.304: INFO: stderr: ""
Dec 17 23:49:08.305: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4019-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n     Foo CRD for Testing\n\nFIELDS:\n   apiVersion\t<string>\n     APIVersion defines the versioned schema of this representation of an\n     object. Servers should convert recognized schemas to the latest internal\n     value, and may reject unrecognized values. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n   kind\t<string>\n     Kind is a string value representing the REST resource this object\n     represents. Servers may infer this from the endpoint the client submits\n     requests to. Cannot be updated. In CamelCase. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n   metadata\t<Object>\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   spec\t<Object>\n     Specification of Foo\n\n   status\t<Object>\n     Status of Foo\n\n"
STEP: kubectl explain works to explain CR properties recursively
Dec 17 23:49:08.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4019-crds.metadata'
Dec 17 23:49:08.785: INFO: stderr: ""
Dec 17 23:49:08.785: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4019-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata <Object>\n\nDESCRIPTION:\n     Standard object's metadata. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n     ObjectMeta is metadata that all persisted resources must have, which\n     includes all objects users must create.\n\nFIELDS:\n   annotations\t<map[string]string>\n     Annotations is an unstructured key value map stored with a resource that\n     may be set by external tools to store and retrieve arbitrary metadata. They\n     are not queryable and should be preserved when modifying objects. More\n     info: http://kubernetes.io/docs/user-guide/annotations\n\n   clusterName\t<string>\n     The name of the cluster which the object belongs to. This is used to\n     distinguish resources with same name and namespace in different clusters.\n     This field is not set anywhere right now and apiserver is going to ignore\n     it if set in create or update request.\n\n   creationTimestamp\t<string>\n     CreationTimestamp is a timestamp representing the server time when this\n     object was created. It is not guaranteed to be set in happens-before order\n     across separate operations. Clients may not set this value. It is\n     represented in RFC3339 form and is in UTC. Populated by the system.\n     Read-only. Null for lists. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   deletionGracePeriodSeconds\t<integer>\n     Number of seconds allowed for this object to gracefully terminate before it\n     will be removed from the system. Only set when deletionTimestamp is also\n     set. May only be shortened. Read-only.\n\n   deletionTimestamp\t<string>\n     DeletionTimestamp is RFC 3339 date and time at which this resource will be\n     deleted. This field is set by the server when a graceful deletion is\n     requested by the user, and is not directly settable by a client. The\n     resource is expected to be deleted (no longer visible from resource lists,\n     and not reachable by name) after the time in this field, once the\n     finalizers list is empty. As long as the finalizers list contains items,\n     deletion is blocked. Once the deletionTimestamp is set, this value may not\n     be unset or be set further into the future, although it may be shortened or\n     the resource may be deleted prior to this time. For example, a user may\n     request that a pod is deleted in 30 seconds. The Kubelet will react by\n     sending a graceful termination signal to the containers in the pod. After\n     that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n     to the container and after cleanup, remove the pod from the API. In the\n     presence of network partitions, this object may still exist after this\n     timestamp, until an administrator or automated process can determine the\n     resource is fully terminated. If not set, graceful deletion of the object\n     has not been requested. Populated by the system when a graceful deletion is\n     requested. Read-only. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n   finalizers\t<[]string>\n     Must be empty before the object is deleted from the registry. Each entry is\n     an identifier for the responsible component that will remove the entry from\n     the list. If the deletionTimestamp of the object is non-nil, entries in\n     this list can only be removed.\n\n   generateName\t<string>\n     GenerateName is an optional prefix, used by the server, to generate a\n     unique name ONLY IF the Name field has not been provided. If this field is\n     used, the name returned to the client will be different than the name\n     passed. This value will also be combined with a unique suffix. The provided\n     value has the same validation rules as the Name field, and may be truncated\n     by the length of the suffix required to make the value unique on the\n     server. If this field is specified and the generated name exists, the\n     server will NOT return a 409 - instead, it will either return 201 Created\n     or 500 with Reason ServerTimeout indicating a unique name could not be\n     found in the time allotted, and the client should retry (optionally after\n     the time indicated in the Retry-After header). Applied only if Name is not\n     specified. More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n   generation\t<integer>\n     A sequence number representing a specific generation of the desired state.\n     Populated by the system. Read-only.\n\n   labels\t<map[string]string>\n     Map of string keys and values that can be used to organize and categorize\n     (scope and select) objects. May match selectors of replication controllers\n     and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n   managedFields\t<[]Object>\n     ManagedFields maps workflow-id and version to the set of fields that are\n     managed by that workflow. This is mostly for internal housekeeping, and\n     users typically shouldn't need to set or understand this field. A workflow\n     can be the user's name, a controller's name, or the name of a specific\n     apply path like \"ci-cd\". The set of fields is always in the version that\n     the workflow used when modifying the object.\n\n   name\t<string>\n     Name must be unique within a namespace. Is required when creating\n     resources, although some resources may allow a client to request the\n     generation of an appropriate name automatically. Name is primarily intended\n     for creation idempotence and configuration definition. Cannot be updated.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n   namespace\t<string>\n     Namespace defines the space within each name must be unique. An empty\n     namespace is equivalent to the \"default\" namespace, but \"default\" is the\n     canonical representation. Not all objects are required to be scoped to a\n     namespace - the value of this field for those objects will be empty. Must\n     be a DNS_LABEL. Cannot be updated. More info:\n     http://kubernetes.io/docs/user-guide/namespaces\n\n   ownerReferences\t<[]Object>\n     List of objects depended by this object. If ALL objects in the list have\n     been deleted, this object will be garbage collected. If this object is\n     managed by a controller, then an entry in this list will point to this\n     controller, with the controller field set to true. There cannot be more\n     than one managing controller.\n\n   resourceVersion\t<string>\n     An opaque value that represents the internal version of this object that\n     can be used by clients to determine when objects have changed. May be used\n     for optimistic concurrency, change detection, and the watch operation on a\n     resource or set of resources. Clients must treat these values as opaque and\n     passed unmodified back to the server. They may only be valid for a\n     particular resource or set of resources. Populated by the system.\n     Read-only. Value must be treated as opaque by clients and . More info:\n     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n   selfLink\t<string>\n     SelfLink is a URL representing this object. Populated by the system.\n     Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n     release and the field is planned to be removed in 1.21 release.\n\n   uid\t<string>\n     UID is the unique in time and space value for this object. It is typically\n     generated by the server on successful creation of a resource and is not\n     allowed to change on PUT operations. Populated by the system. Read-only.\n     More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
Dec 17 23:49:08.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4019-crds.spec'
Dec 17 23:49:09.305: INFO: stderr: ""
Dec 17 23:49:09.305: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4019-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec <Object>\n\nDESCRIPTION:\n     Specification of Foo\n\nFIELDS:\n   bars\t<[]Object>\n     List of Bars and their specs.\n\n"
Dec 17 23:49:09.306: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4019-crds.spec.bars'
Dec 17 23:49:09.651: INFO: stderr: ""
Dec 17 23:49:09.651: INFO: stdout: "KIND:     E2e-test-crd-publish-openapi-4019-crd\nVERSION:  crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n     List of Bars and their specs.\n\nFIELDS:\n   age\t<string>\n     Age of Bar.\n\n   bazs\t<[]string>\n     List of Bazs.\n\n   name\t<string> -required-\n     Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
Dec 17 23:49:09.652: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4019-crds.spec.bars2'
Dec 17 23:49:10.045: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:49:13.828: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-6491" for this suite.
Dec 17 23:49:19.864: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:49:20.004: INFO: namespace crd-publish-openapi-6491 deletion completed in 6.167977481s

• [SLOW TEST:22.566 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
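The CRD behind this spec publishes a structural OpenAPI v3 schema, which is what feeds both the client-side validation and the kubectl explain output above. A trimmed sketch of such a CRD -- group, names, and types here are illustrative; only the spec.bars field layout mirrors the explain output from this run:

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: foos.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: foos
    singular: foo
    kind: Foo
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        description: Foo CRD for Testing
        type: object
        properties:
          spec:
            description: Specification of Foo
            type: object
            properties:
              bars:
                description: List of Bars and their specs.
                type: array
                items:
                  type: object
                  required: ["name"]
                  properties:
                    name:
                      description: Name of Bar.
                      type: string
                    age:
                      description: Age of Bar.
                      type: string
                    bazs:
                      description: List of Bazs.
                      type: array
                      items:
                        type: string
EOF
# Give the apiserver a few seconds to republish its OpenAPI doc, then:
kubectl explain foos.spec.bars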
[k8s.io] Probing container 
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:49:20.005: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:52
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating pod test-webserver-659e7970-04b3-4b12-ad87-79caaa81de4b in namespace container-probe-9666
Dec 17 23:49:26.199: INFO: Started pod test-webserver-659e7970-04b3-4b12-ad87-79caaa81de4b in namespace container-probe-9666
STEP: checking the pod's current state and verifying that restartCount is present
Dec 17 23:49:26.208: INFO: Initial restart count of pod test-webserver-659e7970-04b3-4b12-ad87-79caaa81de4b is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:53:27.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-9666" for this suite.
Dec 17 23:53:33.870: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:53:33.992: INFO: namespace container-probe-9666 deletion completed in 6.206772522s

• [SLOW TEST:253.987 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
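The pod under test keeps answering its HTTP liveness probe, so restartCount must stay at 0 for the full four-minute observation window. A hedged sketch of such a pod -- the image is an assumption (any container that keeps serving HTTP 200 on the probed path works; the e2e pod serves /healthz, stock nginx serves /):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: test-webserver
    image: docker.io/library/nginx:1.17
    ports:
    - containerPort: 80
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 15
      timeoutSeconds: 1
      failureThreshold: 3
EOF
# restartCount should stay 0 for as long as you watch:
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'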
[sig-cli] Kubectl client Kubectl run --rm job 
  should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:53:33.993: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should create a job from an image, then delete the job  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: executing a command with run --rm and attach with stdin
Dec 17 23:53:34.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-6270 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
Dec 17 23:53:42.009: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\n"
Dec 17 23:53:42.010: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:53:44.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6270" for this suite.
Dec 17 23:53:50.072: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:53:50.180: INFO: namespace kubectl-6270 deletion completed in 6.141806601s

• [SLOW TEST:16.188 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl run --rm job
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1751
    should create a job from an image, then delete the job  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
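The deprecation warning in the stderr above is worth noting: --generator=job/v1 was removed from kubectl in later releases. A rough present-day equivalent of the same attach-then-clean-up flow (hedged: this creates a bare Pod rather than a Job, since kubectl create job has no --rm):

kubectl -n kubectl-6270 run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 \
  --rm --stdin --attach --restart=Never \
  -- sh -c 'cat && echo stdin closed'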
SSSSSSSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:53:50.181: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a service in the namespace
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there is no service in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:53:56.864: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-2237" for this suite.
Dec 17 23:54:02.898: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:54:03.003: INFO: namespace namespaces-2237 deletion completed in 6.133232096s
STEP: Destroying namespace "nsdeletetest-7588" for this suite.
Dec 17 23:54:03.006: INFO: Namespace nsdeletetest-7588 was already deleted
STEP: Destroying namespace "nsdeletetest-1400" for this suite.
Dec 17 23:54:11.044: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:54:11.159: INFO: namespace nsdeletetest-1400 deletion completed in 8.153362012s

• [SLOW TEST:20.978 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all services are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
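The same invariant can be checked by hand: a Service cannot outlive its namespace, and a recreated namespace of the same name starts empty. Names below are illustrative:

kubectl create namespace nsdeletetest-demo
kubectl -n nsdeletetest-demo create service clusterip test-service --tcp=80:80
kubectl delete namespace nsdeletetest-demo --wait=true
kubectl create namespace nsdeletetest-demo
kubectl -n nsdeletetest-demo get services    # expect: No resources found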
SSSS
------------------------------
[sig-network] Networking Granular Checks: Pods 
  should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:54:11.160: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pod-network-test
STEP: Waiting for a default service account to be provisioned in namespace
[It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Performing setup for networking test in namespace pod-network-test-5696
STEP: creating a selector
STEP: Creating the service pods in kubernetes
Dec 17 23:54:11.211: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable
STEP: Creating test pods
Dec 17 23:54:51.530: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'] Namespace:pod-network-test-5696 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 23:54:51.530: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 23:54:51.908: INFO: Waiting for endpoints: map[]
Dec 17 23:54:51.920: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.44.0.1&port=8081&tries=1'] Namespace:pod-network-test-5696 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
Dec 17 23:54:51.920: INFO: >>> kubeConfig: /root/.kube/config
Dec 17 23:54:52.205: INFO: Waiting for endpoints: map[]
[AfterEach] [sig-network] Networking
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:54:52.206: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pod-network-test-5696" for this suite.
Dec 17 23:55:06.244: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:55:06.354: INFO: namespace pod-network-test-5696 deletion completed in 14.139108238s

• [SLOW TEST:55.194 seconds]
[sig-network] Networking
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:25
  Granular Checks: Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:28
    should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
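The ExecWithOptions lines above are the heart of this spec: a host-network test pod asks an agnhost container's /dial endpoint to send a UDP probe to each endpoint pod and report which hostname answered. Run by hand it looks like the following (pod name and IPs are the ephemeral ones from this run):

kubectl -n pod-network-test-5696 exec host-test-container-pod -- \
  curl -g -q -s 'http://10.44.0.2:8080/dial?request=hostName&protocol=udp&host=10.32.0.4&port=8081&tries=1'
# A non-empty "responses" list in the JSON reply means the UDP round trip
# succeeded; the framework polls until its endpoint map drains to map[].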
SS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch 
  watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:55:06.354: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 23:55:06.487: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR 
Dec 17 23:55:07.399: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-17T23:55:07Z generation:1 name:name1 resourceVersion:9163743 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d4504273-834e-480d-a9d8-d6c831f3fdc4] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
Dec 17 23:55:17.409: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-17T23:55:17Z generation:1 name:name2 resourceVersion:9163759 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:011ffdf4-a8ca-4e0c-b0d7-fdd8ffdb2c23] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
Dec 17 23:55:27.432: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-17T23:55:07Z generation:2 name:name1 resourceVersion:9163773 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d4504273-834e-480d-a9d8-d6c831f3fdc4] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
Dec 17 23:55:37.444: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-17T23:55:17Z generation:2 name:name2 resourceVersion:9163786 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:011ffdf4-a8ca-4e0c-b0d7-fdd8ffdb2c23] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
Dec 17 23:55:47.473: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-17T23:55:07Z generation:2 name:name1 resourceVersion:9163800 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:d4504273-834e-480d-a9d8-d6c831f3fdc4] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
Dec 17 23:55:57.490: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2019-12-17T23:55:17Z generation:2 name:name2 resourceVersion:9163815 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:011ffdf4-a8ca-4e0c-b0d7-fdd8ffdb2c23] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:56:08.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-8765" for this suite.
Dec 17 23:56:14.060: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:56:14.155: INFO: namespace crd-watch-8765 deletion completed in 6.120133287s

• [SLOW TEST:67.801 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:42
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
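Every "Got : ADDED/MODIFIED/DELETED" line above is a watch event on the custom resource. A hedged client-side equivalent, assuming the noxus CRD from this spec is still installed (-o name prints one line per event; newer kubectl can add --output-watch-events to show the event type too):

kubectl get noxus.mygroup.example.com --watch -o name &
cat <<'EOF' | kubectl apply -f -
apiVersion: mygroup.example.com/v1beta1
kind: WishIHadChosenNoxu
metadata:
  name: name1
EOF
kubectl patch noxus.mygroup.example.com name1 --type=merge -p '{"dummy":"test"}'
kubectl delete noxus.mygroup.example.com name1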
SSSSSSSSSSSS
------------------------------
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases 
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:56:14.156: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:56:22.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-7115" for this suite.
Dec 17 23:57:08.476: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:57:08.651: INFO: namespace kubelet-test-7115 deletion completed in 46.237045297s

• [SLOW TEST:54.495 seconds]
[k8s.io] Kubelet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  when scheduling a busybox Pod with hostAliases
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:136
    should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
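hostAliases is plain pod spec: the kubelet renders each entry into the container's /etc/hosts. A minimal sketch (names and addresses illustrative):

cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases-demo
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "123.45.67.89"
    hostnames: ["foo.remote", "bar.remote"]
  containers:
  - name: busybox-host-aliases
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/hosts"]   # output should contain the 123.45.67.89 entries
EOF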
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition 
  getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:57:08.653: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] getting/updating/patching custom resource definition status sub-resource works  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 23:57:08.733: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:57:09.374: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-2602" for this suite.
Dec 17 23:57:15.448: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:57:15.585: INFO: namespace custom-resource-definition-2602 deletion completed in 6.187429124s

• [SLOW TEST:6.933 seconds]
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  Simple CustomResourceDefinition
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:42
    getting/updating/patching custom resource definition status sub-resource works  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
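The status sub-resource has to be switched on in the CRD itself; once it is, writes to .../status touch only .status. A trimmed, hedged sketch (names are illustrative, and the open-ended schema is just the shortest valid structural schema):

cat <<'EOF' | kubectl apply -f -
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: bars.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    plural: bars
    singular: bar
    kind: Bar
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}      # this is what the spec exercises: it enables .../bars/NAME/status
    schema:
      openAPIV3Schema:
        type: object
        x-kubernetes-preserve-unknown-fields: true
EOF
# Newer kubectl (v1.24+) can then target the sub-resource directly, e.g.:
# kubectl patch bar my-bar --subresource=status --type=merge -p '{"status":{"ready":true}}'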
SSSSS
------------------------------
[k8s.io] InitContainer [NodeConformance] 
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:57:15.587: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename init-container
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:44
[It] should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
Dec 17 23:57:15.696: INFO: PodSpec: initContainers in spec.initContainers
Dec 17 23:58:14.789: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-14dd209a-745b-4e26-a438-0e0b46ef7352", GenerateName:"", Namespace:"init-container-7171", SelfLink:"/api/v1/namespaces/init-container-7171/pods/pod-init-14dd209a-745b-4e26-a438-0e0b46ef7352", UID:"e2dbdb87-fd6b-47ce-b314-d3037afc4361", ResourceVersion:"9164082", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63712223835, loc:(*time.Location)(0x8492160)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"695995466"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-8r8cb", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0032721c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8r8cb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8r8cb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}, "memory":resource.Quantity{i:resource.int64Amount{value:52428800, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"52428800", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-8r8cb", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0050b4298), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-node", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc0020ce4e0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0050b4320)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0050b4340)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0050b4348), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0050b434c), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712223835, loc:(*time.Location)(0x8492160)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712223835, loc:(*time.Location)(0x8492160)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712223835, loc:(*time.Location)(0x8492160)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63712223835, loc:(*time.Location)(0x8492160)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"10.96.2.170", PodIP:"10.44.0.1", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.44.0.1"}}, StartTime:(*v1.Time)(0xc00191af60), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001722620)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc001722690)}, Ready:false, RestartCount:3, Image:"busybox:1.29", ImageID:"docker-pullable://busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"docker://e2b0a4d10105eacc1422ed07ec17a3120077074a7f07eeb2ba9b9ba08701d5ae", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00191b1a0), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc00191b060), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0050b43cf)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}}
[AfterEach] [k8s.io] InitContainer [NodeConformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:58:14.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "init-container-7171" for this suite.
Dec 17 23:58:42.894: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:58:43.001: INFO: namespace init-container-7171 deletion completed in 28.187745714s

• [SLOW TEST:87.414 seconds]
[k8s.io] InitContainer [NodeConformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should not start app containers if init containers fail on a RestartAlways pod [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
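For anyone reproducing this failure mode outside the e2e framework, a minimal
client-go sketch follows (recent client-go signatures, v0.18+, are an
assumption; the container names, images, and commands are taken from the
v1.Pod dump printed above). Because init1 exits non-zero every time and the
pod's RestartPolicy is Always, init2 and the app container run1 never start
and the pod stays Pending while init1's RestartCount climbs:

    // Sketch only: recreates the pod shape from the dump above.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "pod-init-", Labels: map[string]string{"name": "foo"}},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyAlways,
                InitContainers: []corev1.Container{
                    {Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
                    {Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
                },
                Containers: []corev1.Container{
                    {Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
                },
            },
        }
        created, err := client.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        // The pod is accepted but never initializes; its status converges on
        // Initialized=False with reason ContainersNotInitialized, as above.
        fmt.Println("created", created.Name)
    }
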
SSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl expose 
  should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:58:43.002: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:225
[It] should create services for rc  [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating Redis RC
Dec 17 23:58:43.052: INFO: namespace kubectl-6497
Dec 17 23:58:43.052: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6497'
Dec 17 23:58:43.545: INFO: stderr: ""
Dec 17 23:58:43.545: INFO: stdout: "replicationcontroller/redis-master created\n"
STEP: Waiting for Redis master to start.
Dec 17 23:58:44.562: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 23:58:44.562: INFO: Found 0 / 1
Dec 17 23:58:45.557: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 23:58:45.557: INFO: Found 0 / 1
Dec 17 23:58:46.565: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 23:58:46.565: INFO: Found 0 / 1
Dec 17 23:58:47.557: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 23:58:47.558: INFO: Found 0 / 1
Dec 17 23:58:48.559: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 23:58:48.559: INFO: Found 0 / 1
Dec 17 23:58:49.557: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 23:58:49.557: INFO: Found 0 / 1
Dec 17 23:58:50.562: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 23:58:50.563: INFO: Found 0 / 1
Dec 17 23:58:51.555: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 23:58:51.556: INFO: Found 1 / 1
Dec 17 23:58:51.556: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1
Dec 17 23:58:51.562: INFO: Selector matched 1 pod for map[app:redis]
Dec 17 23:58:51.567: INFO: ForEach: Found 1 pod from the filter. Now looping through them.
Dec 17 23:58:51.567: INFO: wait on redis-master startup in kubectl-6497 
Dec 17 23:58:51.567: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs redis-master-nb7jd redis-master --namespace=kubectl-6497'
Dec 17 23:58:51.747: INFO: stderr: ""
Dec 17 23:58:51.748: INFO: stdout: "1:C 17 Dec 2019 23:58:50.171 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo\n1:C 17 Dec 2019 23:58:50.171 # Redis version=5.0.5, bits=64, commit=00000000, modified=0, pid=1, just started\n1:C 17 Dec 2019 23:58:50.171 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf\n1:M 17 Dec 2019 23:58:50.174 * Running mode=standalone, port=6379.\n1:M 17 Dec 2019 23:58:50.174 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.\n1:M 17 Dec 2019 23:58:50.174 # Server initialized\n1:M 17 Dec 2019 23:58:50.174 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.\n1:M 17 Dec 2019 23:58:50.175 * Ready to accept connections\n"
STEP: exposing RC
Dec 17 23:58:51.748: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc redis-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-6497'
Dec 17 23:58:51.987: INFO: stderr: ""
Dec 17 23:58:51.988: INFO: stdout: "service/rm2 exposed\n"
Dec 17 23:58:51.995: INFO: Service rm2 in namespace kubectl-6497 found.
STEP: exposing service
Dec 17 23:58:54.012: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-6497'
Dec 17 23:58:54.317: INFO: stderr: ""
Dec 17 23:58:54.317: INFO: stdout: "service/rm3 exposed\n"
Dec 17 23:58:54.327: INFO: Service rm3 in namespace kubectl-6497 found.
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:58:56.351: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-6497" for this suite.
Dec 17 23:59:24.374: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:59:24.514: INFO: namespace kubectl-6497 deletion completed in 28.158778138s

• [SLOW TEST:41.512 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl expose
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1105
    should create services for rc  [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
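The Running '/usr/local/bin/kubectl ...' lines show that this test drives the
kubectl binary directly. A standalone Go equivalent of the two expose steps,
with names, ports, and namespace copied from the log (kubectl on PATH and a
default kubeconfig are assumptions of the sketch):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        for _, args := range [][]string{
            // Expose the rc's pods on service port 1234 -> target port 6379.
            {"expose", "rc", "redis-master", "--name=rm2", "--port=1234",
                "--target-port=6379", "--namespace=kubectl-6497"},
            // Re-expose the first service under a new name and port.
            {"expose", "service", "rm2", "--name=rm3", "--port=2345",
                "--target-port=6379", "--namespace=kubectl-6497"},
        } {
            out, err := exec.Command("kubectl", args...).CombinedOutput()
            if err != nil {
                panic(fmt.Sprintf("%v: %s", err, out))
            }
            fmt.Print(string(out)) // e.g. "service/rm2 exposed"
        }
    }
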
S
------------------------------
[sig-storage] Projected downwardAPI 
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:59:24.515: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a pod to test downward API volume plugin
Dec 17 23:59:24.662: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4319eecf-6e91-4202-b568-3147ed4bfc0f" in namespace "projected-2901" to be "success or failure"
Dec 17 23:59:24.675: INFO: Pod "downwardapi-volume-4319eecf-6e91-4202-b568-3147ed4bfc0f": Phase="Pending", Reason="", readiness=false. Elapsed: 12.971497ms
Dec 17 23:59:26.683: INFO: Pod "downwardapi-volume-4319eecf-6e91-4202-b568-3147ed4bfc0f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021449847s
Dec 17 23:59:28.761: INFO: Pod "downwardapi-volume-4319eecf-6e91-4202-b568-3147ed4bfc0f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09923069s
Dec 17 23:59:30.855: INFO: Pod "downwardapi-volume-4319eecf-6e91-4202-b568-3147ed4bfc0f": Phase="Pending", Reason="", readiness=false. Elapsed: 6.193532335s
Dec 17 23:59:32.867: INFO: Pod "downwardapi-volume-4319eecf-6e91-4202-b568-3147ed4bfc0f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.204867647s
STEP: Saw pod success
Dec 17 23:59:32.867: INFO: Pod "downwardapi-volume-4319eecf-6e91-4202-b568-3147ed4bfc0f" satisfied condition "success or failure"
Dec 17 23:59:32.883: INFO: Trying to get logs from node jerma-node pod downwardapi-volume-4319eecf-6e91-4202-b568-3147ed4bfc0f container client-container: 
STEP: delete the pod
Dec 17 23:59:32.936: INFO: Waiting for pod downwardapi-volume-4319eecf-6e91-4202-b568-3147ed4bfc0f to disappear
Dec 17 23:59:32.967: INFO: Pod downwardapi-volume-4319eecf-6e91-4202-b568-3147ed4bfc0f no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:59:32.968: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2901" for this suite.
Dec 17 23:59:39.012: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:59:39.112: INFO: namespace projected-2901 deletion completed in 6.131129753s

• [SLOW TEST:14.597 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide container's cpu limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
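A sketch of the kind of pod this spec builds: a projected downwardAPI volume
that surfaces the container's own cpu limit as a file the container can read.
Only the container name client-container comes from the log; the volume name,
mount path, file path, image, and limit value below are illustrative
assumptions, not the test's exact values:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // One file in the volume, populated from the container's cpu limit.
        cpuLimitFile := corev1.DownwardAPIVolumeFile{
            Path: "cpu_limit",
            ResourceFieldRef: &corev1.ResourceFieldSelector{
                ContainerName: "client-container",
                Resource:      "limits.cpu",
            },
        }
        podinfo := corev1.Volume{
            Name: "podinfo",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        DownwardAPI: &corev1.DownwardAPIProjection{
                            Items: []corev1.DownwardAPIVolumeFile{cpuLimitFile},
                        },
                    }},
                },
            },
        }
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "downwardapi-volume-"},
            Spec: corev1.PodSpec{
                RestartPolicy: corev1.RestartPolicyNever,
                Volumes:       []corev1.Volume{podinfo},
                Containers: []corev1.Container{{
                    Name:    "client-container",
                    Image:   "docker.io/library/busybox:1.29",
                    Command: []string{"sh", "-c", "cat /etc/podinfo/cpu_limit"},
                    Resources: corev1.ResourceRequirements{
                        Limits: corev1.ResourceList{corev1.ResourceCPU: resource.MustParse("1250m")},
                    },
                    VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
                }},
            },
        }
        out, _ := json.MarshalIndent(pod, "", "  ")
        fmt.Println(string(out)) // manifest for a pod that prints its own cpu limit
    }
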
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-apps] ReplicationController 
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:59:39.113: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename replication-controller
STEP: Waiting for a default service account to be provisioned in namespace
[It] should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
Dec 17 23:59:39.179: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace
STEP: Creating rc "condition-test" that asks for more than the allowed pod quota
STEP: Checking rc "condition-test" has the desired failure condition set
STEP: Scaling down rc "condition-test" to satisfy pod quota
Dec 17 23:59:41.649: INFO: Updating replication controller "condition-test"
STEP: Checking rc "condition-test" has no failure condition set
[AfterEach] [sig-apps] ReplicationController
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 17 23:59:41.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "replication-controller-9487" for this suite.
Dec 17 23:59:54.095: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 17 23:59:54.188: INFO: namespace replication-controller-9487 deletion completed in 12.211852203s

• [SLOW TEST:15.075 seconds]
[sig-apps] ReplicationController
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  should surface a failure condition on a common issue like exceeded quota [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
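The two objects in play here, sketched with the Kubernetes API types (the pod
image and labels are illustrative assumptions; the names and the two-pod
quota come from the log): a ResourceQuota capping the namespace at two pods,
and an rc asking for three replicas. The controller manager cannot create the
third pod, so it attaches a ReplicaFailure condition to the rc's status, and
the condition clears once replicas is lowered to fit the quota:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        quota := corev1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
            Spec: corev1.ResourceQuotaSpec{
                // Only two pods may exist in the namespace.
                Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
            },
        }
        replicas := int32(3) // one more than the quota allows
        labels := map[string]string{"name": "condition-test"}
        rc := corev1.ReplicationController{
            ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
            Spec: corev1.ReplicationControllerSpec{
                Replicas: &replicas,
                Selector: labels,
                Template: &corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}},
                    },
                },
            },
        }
        for _, obj := range []interface{}{quota, rc} {
            b, _ := json.MarshalIndent(obj, "", "  ")
            fmt.Println(string(b))
        }
    }
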
SSS
------------------------------
[sig-api-machinery] Namespaces [Serial] 
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 17 23:59:54.189: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename namespaces
STEP: Waiting for a default service account to be provisioned in namespace
[It] should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: Creating a test namespace
STEP: Waiting for a default service account to be provisioned in namespace
STEP: Creating a pod in the namespace
STEP: Waiting for the pod to have running status
STEP: Deleting the namespace
STEP: Waiting for the namespace to be removed.
STEP: Recreating the namespace
STEP: Verifying there are no pods in the namespace
[AfterEach] [sig-api-machinery] Namespaces [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 18 00:00:30.568: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "namespaces-8016" for this suite.
Dec 18 00:00:36.612: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 00:00:36.771: INFO: namespace namespaces-8016 deletion completed in 6.195921599s
STEP: Destroying namespace "nsdeletetest-7820" for this suite.
Dec 18 00:00:36.776: INFO: Namespace nsdeletetest-7820 was already deleted
STEP: Destroying namespace "nsdeletetest-2077" for this suite.
Dec 18 00:00:42.807: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 00:00:42.994: INFO: namespace nsdeletetest-2077 deletion completed in 6.218013934s

• [SLOW TEST:48.806 seconds]
[sig-api-machinery] Namespaces [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should ensure that all pods are removed when a namespace is deleted [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
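What the spec verifies, as a compact client-go sketch (recent client-go
signatures, v0.18+, and the helper name are assumptions): deleting a
namespace and waiting for the API server to report it gone guarantees that
every pod that lived in it has been removed as well, which is why the
recreated namespace is empty:

    package nssketch

    import (
        "context"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    // deleteAndWait deletes ns and polls until the namespace object is gone.
    func deleteAndWait(client kubernetes.Interface, ns string) error {
        if err := client.CoreV1().Namespaces().Delete(context.TODO(), ns, metav1.DeleteOptions{}); err != nil {
            return err
        }
        return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
            _, err := client.CoreV1().Namespaces().Get(context.TODO(), ns, metav1.GetOptions{})
            if apierrors.IsNotFound(err) {
                return true, nil // namespace, and everything in it, is gone
            }
            return false, err // err is nil while the namespace is still Terminating
        })
    }
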
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Pods 
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
Dec 18 00:00:42.997: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename pods
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:165
[It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
STEP: creating the pod
STEP: submitting the pod to kubernetes
STEP: verifying the pod is in kubernetes
STEP: updating the pod
Dec 18 00:00:53.727: INFO: Successfully updated pod "pod-update-activedeadlineseconds-cb89ead1-dce8-4b18-8979-729786ca36f1"
Dec 18 00:00:53.727: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-cb89ead1-dce8-4b18-8979-729786ca36f1" in namespace "pods-9250" to be "terminated due to deadline exceeded"
Dec 18 00:00:53.759: INFO: Pod "pod-update-activedeadlineseconds-cb89ead1-dce8-4b18-8979-729786ca36f1": Phase="Running", Reason="", readiness=true. Elapsed: 31.797647ms
Dec 18 00:00:55.856: INFO: Pod "pod-update-activedeadlineseconds-cb89ead1-dce8-4b18-8979-729786ca36f1": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.128538882s
Dec 18 00:00:55.856: INFO: Pod "pod-update-activedeadlineseconds-cb89ead1-dce8-4b18-8979-729786ca36f1" satisfied condition "terminated due to deadline exceeded"
[AfterEach] [k8s.io] Pods
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
Dec 18 00:00:55.856: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "pods-9250" for this suite.
Dec 18 00:01:01.928: INFO: Waiting up to 30s for server preferred namespaced resources to be successfully discovered
Dec 18 00:01:02.053: INFO: namespace pods-9250 deletion completed in 6.185953503s

• [SLOW TEST:19.056 seconds]
[k8s.io] Pods
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:693
  should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:698
------------------------------
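The update at 00:00:53 sets spec.activeDeadlineSeconds on the already-running
pod; once the deadline passes, the kubelet kills the pod and it ends Failed
with reason DeadlineExceeded, exactly as the phase transitions above show. A
minimal sketch of that update (recent client-go signatures, v0.18+, and the
helper name are assumptions):

    package podsketch

    import (
        "context"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // shortenDeadline leaves the named pod only a few seconds to live.
    func shortenDeadline(client kubernetes.Interface, ns, name string, seconds int64) error {
        pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return err
        }
        pod.Spec.ActiveDeadlineSeconds = &seconds // counted from the pod's StartTime
        _, err = client.CoreV1().Pods(ns).Update(context.TODO(), pod, metav1.UpdateOptions{})
        return err
    }
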
SSSSSSSSS
Dec 18 00:01:02.054: INFO: Running AfterSuite actions on all nodes
Dec 18 00:01:02.054: INFO: Running AfterSuite actions on node 1
Dec 18 00:01:02.054: INFO: Skipping dumping logs from cluster


Summarizing 1 Failure:

[Fail] [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] [It] Should recreate evicted statefulset [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:760

Ran 276 of 4897 Specs in 10271.100 seconds
FAIL! -- 275 Passed | 1 Failed | 0 Pending | 4621 Skipped
--- FAIL: TestE2E (10271.24s)
FAIL