I0528 21:09:54.743411 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0528 21:09:54.743759 6 e2e.go:109] Starting e2e run "6158ed5d-5c0a-4e3c-9d21-bdfbad1f01b2" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1590700193 - Will randomize all specs
Will run 278 of 4842 specs

May 28 21:09:54.817: INFO: >>> kubeConfig: /root/.kube/config
May 28 21:09:54.828: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 28 21:09:54.843: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 28 21:09:54.881: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 28 21:09:54.881: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 28 21:09:54.881: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 28 21:09:54.891: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 28 21:09:54.891: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 28 21:09:54.891: INFO: e2e test version: v1.17.4
May 28 21:09:54.892: INFO: kube-apiserver version: v1.17.2
May 28 21:09:54.892: INFO: >>> kubeConfig: /root/.kube/config
May 28 21:09:54.897: INFO: Cluster IP family: ipv4
[sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 28 21:09:54.897: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
May 28 21:09:54.962: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
May 28 21:09:54.964: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 28 21:09:54.973: INFO: Waiting for terminating namespaces to be deleted...
May 28 21:09:54.975: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
May 28 21:09:54.994: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 28 21:09:54.994: INFO: Container kindnet-cni ready: true, restart count 2
May 28 21:09:54.994: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 28 21:09:54.994: INFO: Container kube-proxy ready: true, restart count 0
May 28 21:09:54.994: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
May 28 21:09:55.031: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 28 21:09:55.031: INFO: Container kube-proxy ready: true, restart count 0
May 28 21:09:55.031: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded)
May 28 21:09:55.031: INFO: Container kube-hunter ready: false, restart count 0
May 28 21:09:55.031: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 28 21:09:55.031: INFO: Container kindnet-cni ready: true, restart count 2
May 28 21:09:55.032: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded)
May 28 21:09:55.032: INFO: Container kube-bench ready: false, restart count 0
[It] validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-c34aad9d-754c-4e04-8b92-384e6228cda4 42
STEP: Trying to relaunch the pod, now with labels.
STEP: removing the label kubernetes.io/e2e-c34aad9d-754c-4e04-8b92-384e6228cda4 off the node jerma-worker
STEP: verifying the node doesn't have the label kubernetes.io/e2e-c34aad9d-754c-4e04-8b92-384e6228cda4
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 28 21:10:03.197: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-78" for this suite.
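
The NodeSelector check above can be reproduced by hand. A minimal sketch, assuming a label key/value and pod name of our own choosing (the suite generates random ones such as kubernetes.io/e2e-c34aad9d-... with value 42):

  # Label the node, then create a pod whose nodeSelector requires that label
  kubectl label node jerma-worker example.com/e2e-demo=42
  cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: nodeselector-demo
  spec:
    nodeSelector:
      example.com/e2e-demo: "42"
    containers:
    - name: pause
      image: k8s.gcr.io/pause:3.1
  EOF
  kubectl get pod nodeselector-demo -o wide               # NODE column should show jerma-worker
  kubectl label node jerma-worker example.com/e2e-demo-   # remove the label afterwards, as the test does
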
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:8.325 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
  validates that NodeSelector is respected if matching [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":1,"skipped":0,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 28 21:10:03.223: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324
[It] should scale a replication controller [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating a replication controller
May 28 21:10:03.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-823'
May 28 21:10:05.983: INFO: stderr: ""
May 28 21:10:05.983: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 28 21:10:05.983: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-823'
May 28 21:10:06.111: INFO: stderr: ""
May 28 21:10:06.111: INFO: stdout: "update-demo-nautilus-96zdr update-demo-nautilus-t7hfn "
May 28 21:10:06.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-96zdr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-823'
May 28 21:10:06.212: INFO: stderr: ""
May 28 21:10:06.212: INFO: stdout: ""
May 28 21:10:06.212: INFO: update-demo-nautilus-96zdr is created but not running
May 28 21:10:11.213: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-823'
May 28 21:10:11.313: INFO: stderr: ""
May 28 21:10:11.313: INFO: stdout: "update-demo-nautilus-96zdr update-demo-nautilus-t7hfn "
May 28 21:10:11.313: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-96zdr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-823'
May 28 21:10:11.404: INFO: stderr: ""
May 28 21:10:11.404: INFO: stdout: "true"
May 28 21:10:11.404: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-96zdr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-823'
May 28 21:10:11.502: INFO: stderr: ""
May 28 21:10:11.502: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 28 21:10:11.502: INFO: validating pod update-demo-nautilus-96zdr
May 28 21:10:11.513: INFO: got data: { "image": "nautilus.jpg" }
May 28 21:10:11.513: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 28 21:10:11.513: INFO: update-demo-nautilus-96zdr is verified up and running
May 28 21:10:11.513: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t7hfn -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-823'
May 28 21:10:11.594: INFO: stderr: ""
May 28 21:10:11.594: INFO: stdout: "true"
May 28 21:10:11.594: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-t7hfn -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-823'
May 28 21:10:11.678: INFO: stderr: ""
May 28 21:10:11.678: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 28 21:10:11.678: INFO: validating pod update-demo-nautilus-t7hfn
May 28 21:10:11.699: INFO: got data: { "image": "nautilus.jpg" }
May 28 21:10:11.699: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 28 21:10:11.699: INFO: update-demo-nautilus-t7hfn is verified up and running
STEP: scaling down the replication controller
May 28 21:10:11.702: INFO: scanned /root for discovery docs:
May 28 21:10:11.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-823'
May 28 21:10:12.840: INFO: stderr: ""
May 28 21:10:12.840: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 28 21:10:12.840: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-823'
May 28 21:10:12.936: INFO: stderr: ""
May 28 21:10:12.937: INFO: stdout: "update-demo-nautilus-96zdr update-demo-nautilus-t7hfn "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 28 21:10:17.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-823'
May 28 21:10:18.038: INFO: stderr: ""
May 28 21:10:18.038: INFO: stdout: "update-demo-nautilus-96zdr update-demo-nautilus-t7hfn "
STEP: Replicas for name=update-demo: expected=1 actual=2
May 28 21:10:23.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-823'
May 28 21:10:23.136: INFO: stderr: ""
May 28 21:10:23.136: INFO: stdout: "update-demo-nautilus-96zdr "
May 28 21:10:23.136: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-96zdr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-823'
May 28 21:10:23.235: INFO: stderr: ""
May 28 21:10:23.235: INFO: stdout: "true"
May 28 21:10:23.235: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-96zdr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-823'
May 28 21:10:23.338: INFO: stderr: ""
May 28 21:10:23.339: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 28 21:10:23.339: INFO: validating pod update-demo-nautilus-96zdr
May 28 21:10:23.342: INFO: got data: { "image": "nautilus.jpg" }
May 28 21:10:23.342: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 28 21:10:23.342: INFO: update-demo-nautilus-96zdr is verified up and running
STEP: scaling up the replication controller
May 28 21:10:23.344: INFO: scanned /root for discovery docs:
May 28 21:10:23.344: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-823'
May 28 21:10:24.520: INFO: stderr: ""
May 28 21:10:24.520: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n"
STEP: waiting for all containers in name=update-demo pods to come up.
May 28 21:10:24.520: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-823'
May 28 21:10:24.624: INFO: stderr: ""
May 28 21:10:24.624: INFO: stdout: "update-demo-nautilus-96zdr update-demo-nautilus-lgtt7 "
May 28 21:10:24.624: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-96zdr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-823'
May 28 21:10:24.719: INFO: stderr: ""
May 28 21:10:24.719: INFO: stdout: "true"
May 28 21:10:24.719: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-96zdr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-823'
May 28 21:10:24.857: INFO: stderr: ""
May 28 21:10:24.857: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 28 21:10:24.857: INFO: validating pod update-demo-nautilus-96zdr
May 28 21:10:24.861: INFO: got data: { "image": "nautilus.jpg" }
May 28 21:10:24.861: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 28 21:10:24.861: INFO: update-demo-nautilus-96zdr is verified up and running
May 28 21:10:24.861: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lgtt7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-823'
May 28 21:10:24.959: INFO: stderr: ""
May 28 21:10:24.959: INFO: stdout: ""
May 28 21:10:24.959: INFO: update-demo-nautilus-lgtt7 is created but not running
May 28 21:10:29.960: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-823'
May 28 21:10:30.060: INFO: stderr: ""
May 28 21:10:30.060: INFO: stdout: "update-demo-nautilus-96zdr update-demo-nautilus-lgtt7 "
May 28 21:10:30.060: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-96zdr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-823'
May 28 21:10:30.163: INFO: stderr: ""
May 28 21:10:30.163: INFO: stdout: "true"
May 28 21:10:30.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-96zdr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-823'
May 28 21:10:30.255: INFO: stderr: ""
May 28 21:10:30.255: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 28 21:10:30.255: INFO: validating pod update-demo-nautilus-96zdr
May 28 21:10:30.268: INFO: got data: { "image": "nautilus.jpg" }
May 28 21:10:30.268: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 28 21:10:30.268: INFO: update-demo-nautilus-96zdr is verified up and running
May 28 21:10:30.268: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lgtt7 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-823'
May 28 21:10:30.360: INFO: stderr: ""
May 28 21:10:30.360: INFO: stdout: "true"
May 28 21:10:30.360: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-lgtt7 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-823'
May 28 21:10:30.454: INFO: stderr: ""
May 28 21:10:30.454: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0"
May 28 21:10:30.454: INFO: validating pod update-demo-nautilus-lgtt7
May 28 21:10:30.458: INFO: got data: { "image": "nautilus.jpg" }
May 28 21:10:30.458: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg .
May 28 21:10:30.458: INFO: update-demo-nautilus-lgtt7 is verified up and running
STEP: using delete to clean up resources
May 28 21:10:30.459: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-823'
May 28 21:10:30.560: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n"
May 28 21:10:30.560: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n"
May 28 21:10:30.560: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-823'
May 28 21:10:30.684: INFO: stderr: "No resources found in kubectl-823 namespace.\n"
May 28 21:10:30.684: INFO: stdout: ""
May 28 21:10:30.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-823 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 28 21:10:30.800: INFO: stderr: ""
May 28 21:10:30.800: INFO: stdout: "update-demo-nautilus-96zdr\nupdate-demo-nautilus-lgtt7\n"
May 28 21:10:31.300: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-823'
May 28 21:10:31.391: INFO: stderr: "No resources found in kubectl-823 namespace.\n"
May 28 21:10:31.391: INFO: stdout: ""
May 28 21:10:31.391: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-823 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}'
May 28 21:10:31.488: INFO: stderr: ""
May 28 21:10:31.488: INFO: stdout: ""
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 28 21:10:31.488: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-823" for this suite.
• [SLOW TEST:28.271 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Update Demo
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322
    should scale a replication controller [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":2,"skipped":59,"failed":0}
SSS
------------------------------
[sig-network] DNS should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 28 21:10:31.494: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename dns
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support configurable pod DNS nameservers [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod with dnsPolicy=None and customized dnsConfig...
May 28 21:10:31.649: INFO: Created pod &Pod{ObjectMeta:{dns-1778 dns-1778 /api/v1/namespaces/dns-1778/pods/dns-1778 cef70cd6-c5a2-4a45-a5aa-b08e25801ea6 19893552 0 2020-05-28 21:10:31 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-8tnd4,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-8tnd4,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-8tnd4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
STEP: Verifying customized DNS suffix list is configured on pod...
May 28 21:10:35.669: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-1778 PodName:dns-1778 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 28 21:10:35.669: INFO: >>> kubeConfig: /root/.kube/config
I0528 21:10:35.710358 6 log.go:172] (0xc001da4e70) (0xc0027d3c20) Create stream
I0528 21:10:35.710398 6 log.go:172] (0xc001da4e70) (0xc0027d3c20) Stream added, broadcasting: 1
I0528 21:10:35.712533 6 log.go:172] (0xc001da4e70) Reply frame received for 1
I0528 21:10:35.712581 6 log.go:172] (0xc001da4e70) (0xc0029b37c0) Create stream
I0528 21:10:35.712595 6 log.go:172] (0xc001da4e70) (0xc0029b37c0) Stream added, broadcasting: 3
I0528 21:10:35.713824 6 log.go:172] (0xc001da4e70) Reply frame received for 3
I0528 21:10:35.713857 6 log.go:172] (0xc001da4e70) (0xc0029b3860) Create stream
I0528 21:10:35.713869 6 log.go:172] (0xc001da4e70) (0xc0029b3860) Stream added, broadcasting: 5
I0528 21:10:35.714670 6 log.go:172] (0xc001da4e70) Reply frame received for 5
I0528 21:10:35.804428 6 log.go:172] (0xc001da4e70) Data frame received for 3
I0528 21:10:35.804458 6 log.go:172] (0xc0029b37c0) (3) Data frame handling
I0528 21:10:35.804477 6 log.go:172] (0xc0029b37c0) (3) Data frame sent
I0528 21:10:35.806257 6 log.go:172] (0xc001da4e70) Data frame received for 3
I0528 21:10:35.806280 6 log.go:172] (0xc0029b37c0) (3) Data frame handling
I0528 21:10:35.806479 6 log.go:172] (0xc001da4e70) Data frame received for 5
I0528 21:10:35.806510 6 log.go:172] (0xc0029b3860) (5) Data frame handling
I0528 21:10:35.808206 6 log.go:172] (0xc001da4e70) Data frame received for 1
I0528 21:10:35.808234 6 log.go:172] (0xc0027d3c20) (1) Data frame handling
I0528 21:10:35.808259 6 log.go:172] (0xc0027d3c20) (1) Data frame sent
I0528 21:10:35.808286 6 log.go:172] (0xc001da4e70) (0xc0027d3c20) Stream removed, broadcasting: 1
I0528 21:10:35.808309 6 log.go:172] (0xc001da4e70) Go away received
I0528 21:10:35.808754 6 log.go:172] (0xc001da4e70) (0xc0027d3c20) Stream removed, broadcasting: 1
I0528 21:10:35.808769 6 log.go:172] (0xc001da4e70) (0xc0029b37c0) Stream removed, broadcasting: 3
I0528 21:10:35.808777 6 log.go:172] (0xc001da4e70) (0xc0029b3860) Stream removed, broadcasting: 5
STEP: Verifying customized DNS server is configured on pod...
May 28 21:10:35.808: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-1778 PodName:dns-1778 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false}
May 28 21:10:35.808: INFO: >>> kubeConfig: /root/.kube/config
I0528 21:10:35.858463 6 log.go:172] (0xc001d42e70) (0xc0029b3b80) Create stream
I0528 21:10:35.858502 6 log.go:172] (0xc001d42e70) (0xc0029b3b80) Stream added, broadcasting: 1
I0528 21:10:35.860658 6 log.go:172] (0xc001d42e70) Reply frame received for 1
I0528 21:10:35.860702 6 log.go:172] (0xc001d42e70) (0xc002846460) Create stream
I0528 21:10:35.860716 6 log.go:172] (0xc001d42e70) (0xc002846460) Stream added, broadcasting: 3
I0528 21:10:35.861575 6 log.go:172] (0xc001d42e70) Reply frame received for 3
I0528 21:10:35.861608 6 log.go:172] (0xc001d42e70) (0xc00276b900) Create stream
I0528 21:10:35.861623 6 log.go:172] (0xc001d42e70) (0xc00276b900) Stream added, broadcasting: 5
I0528 21:10:35.862359 6 log.go:172] (0xc001d42e70) Reply frame received for 5
I0528 21:10:35.931552 6 log.go:172] (0xc001d42e70) Data frame received for 3
I0528 21:10:35.931590 6 log.go:172] (0xc002846460) (3) Data frame handling
I0528 21:10:35.931615 6 log.go:172] (0xc002846460) (3) Data frame sent
I0528 21:10:35.933404 6 log.go:172] (0xc001d42e70) Data frame received for 5
I0528 21:10:35.933433 6 log.go:172] (0xc00276b900) (5) Data frame handling
I0528 21:10:35.933467 6 log.go:172] (0xc001d42e70) Data frame received for 3
I0528 21:10:35.933505 6 log.go:172] (0xc002846460) (3) Data frame handling
I0528 21:10:35.934883 6 log.go:172] (0xc001d42e70) Data frame received for 1
I0528 21:10:35.934902 6 log.go:172] (0xc0029b3b80) (1) Data frame handling
I0528 21:10:35.934921 6 log.go:172] (0xc0029b3b80) (1) Data frame sent
I0528 21:10:35.934958 6 log.go:172] (0xc001d42e70) (0xc0029b3b80) Stream removed, broadcasting: 1
I0528 21:10:35.935019 6 log.go:172] (0xc001d42e70) Go away received
I0528 21:10:35.935052 6 log.go:172] (0xc001d42e70) (0xc0029b3b80) Stream removed, broadcasting: 1
I0528 21:10:35.935076 6 log.go:172] (0xc001d42e70) (0xc002846460) Stream removed, broadcasting: 3
I0528 21:10:35.935085 6 log.go:172] (0xc001d42e70) (0xc00276b900) Stream removed, broadcasting: 5
May 28 21:10:35.935: INFO: Deleting pod dns-1778...
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 28 21:10:35.971: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1778" for this suite.
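
The &Pod{...} dump above reduces to a short manifest. A sketch of the same dnsPolicy: None setup, using the nameserver and search path from the log (the pod name is illustrative):

  cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: dns-demo
  spec:
    dnsPolicy: None
    dnsConfig:
      nameservers: ["1.1.1.1"]
      searches: ["resolv.conf.local"]
    containers:
    - name: agnhost
      image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
      args: ["pause"]
  EOF
  # resolv.conf inside the pod should contain only the custom entries
  kubectl exec dns-demo -- cat /etc/resolv.conf
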
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":3,"skipped":62,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:10:36.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-6413/configmap-test-c218ae9e-fb66-49ce-b01b-83c1ae7d9a8d STEP: Creating a pod to test consume configMaps May 28 21:10:36.167: INFO: Waiting up to 5m0s for pod "pod-configmaps-5f88d932-2613-4c42-8a78-c26d33d462fb" in namespace "configmap-6413" to be "success or failure" May 28 21:10:36.175: INFO: Pod "pod-configmaps-5f88d932-2613-4c42-8a78-c26d33d462fb": Phase="Pending", Reason="", readiness=false. Elapsed: 8.079434ms May 28 21:10:38.179: INFO: Pod "pod-configmaps-5f88d932-2613-4c42-8a78-c26d33d462fb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01218814s May 28 21:10:40.183: INFO: Pod "pod-configmaps-5f88d932-2613-4c42-8a78-c26d33d462fb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.016166748s STEP: Saw pod success May 28 21:10:40.183: INFO: Pod "pod-configmaps-5f88d932-2613-4c42-8a78-c26d33d462fb" satisfied condition "success or failure" May 28 21:10:40.186: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-5f88d932-2613-4c42-8a78-c26d33d462fb container env-test: STEP: delete the pod May 28 21:10:40.223: INFO: Waiting for pod pod-configmaps-5f88d932-2613-4c42-8a78-c26d33d462fb to disappear May 28 21:10:40.230: INFO: Pod pod-configmaps-5f88d932-2613-4c42-8a78-c26d33d462fb no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:10:40.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-6413" for this suite. 
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":4,"skipped":74,"failed":0} S ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:10:40.237: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:10:40.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7145" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":5,"skipped":75,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:10:40.343: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-1786 STEP: creating replication controller nodeport-test in namespace services-1786 I0528 21:10:40.463844 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-1786, replica count: 2 I0528 21:10:43.514321 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0528 21:10:46.514607 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 28 21:10:46.514: INFO: Creating new exec pod May 28 21:10:51.541: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1786 execpoddb8b9 -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 28 21:10:51.836: INFO: stderr: "I0528 21:10:51.670427 633 log.go:172] (0xc00002ed10) (0xc00088c000) Create stream\nI0528 21:10:51.670482 633 log.go:172] (0xc00002ed10) (0xc00088c000) 
Stream added, broadcasting: 1\nI0528 21:10:51.673044 633 log.go:172] (0xc00002ed10) Reply frame received for 1\nI0528 21:10:51.673083 633 log.go:172] (0xc00002ed10) (0xc0008da000) Create stream\nI0528 21:10:51.673096 633 log.go:172] (0xc00002ed10) (0xc0008da000) Stream added, broadcasting: 3\nI0528 21:10:51.674157 633 log.go:172] (0xc00002ed10) Reply frame received for 3\nI0528 21:10:51.674205 633 log.go:172] (0xc00002ed10) (0xc00088c0a0) Create stream\nI0528 21:10:51.674222 633 log.go:172] (0xc00002ed10) (0xc00088c0a0) Stream added, broadcasting: 5\nI0528 21:10:51.675151 633 log.go:172] (0xc00002ed10) Reply frame received for 5\nI0528 21:10:51.813406 633 log.go:172] (0xc00002ed10) Data frame received for 5\nI0528 21:10:51.813442 633 log.go:172] (0xc00088c0a0) (5) Data frame handling\nI0528 21:10:51.813486 633 log.go:172] (0xc00088c0a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0528 21:10:51.826961 633 log.go:172] (0xc00002ed10) Data frame received for 5\nI0528 21:10:51.827007 633 log.go:172] (0xc00088c0a0) (5) Data frame handling\nI0528 21:10:51.827045 633 log.go:172] (0xc00088c0a0) (5) Data frame sent\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0528 21:10:51.827079 633 log.go:172] (0xc00002ed10) Data frame received for 5\nI0528 21:10:51.827140 633 log.go:172] (0xc00088c0a0) (5) Data frame handling\nI0528 21:10:51.827222 633 log.go:172] (0xc00002ed10) Data frame received for 3\nI0528 21:10:51.827264 633 log.go:172] (0xc0008da000) (3) Data frame handling\nI0528 21:10:51.829512 633 log.go:172] (0xc00002ed10) Data frame received for 1\nI0528 21:10:51.829558 633 log.go:172] (0xc00088c000) (1) Data frame handling\nI0528 21:10:51.829585 633 log.go:172] (0xc00088c000) (1) Data frame sent\nI0528 21:10:51.829614 633 log.go:172] (0xc00002ed10) (0xc00088c000) Stream removed, broadcasting: 1\nI0528 21:10:51.829715 633 log.go:172] (0xc00002ed10) Go away received\nI0528 21:10:51.830159 633 log.go:172] (0xc00002ed10) (0xc00088c000) Stream removed, broadcasting: 1\nI0528 21:10:51.830185 633 log.go:172] (0xc00002ed10) (0xc0008da000) Stream removed, broadcasting: 3\nI0528 21:10:51.830200 633 log.go:172] (0xc00002ed10) (0xc00088c0a0) Stream removed, broadcasting: 5\n" May 28 21:10:51.836: INFO: stdout: "" May 28 21:10:51.837: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1786 execpoddb8b9 -- /bin/sh -x -c nc -zv -t -w 2 10.109.115.183 80' May 28 21:10:52.057: INFO: stderr: "I0528 21:10:51.982313 655 log.go:172] (0xc00059aa50) (0xc0006fbe00) Create stream\nI0528 21:10:51.982389 655 log.go:172] (0xc00059aa50) (0xc0006fbe00) Stream added, broadcasting: 1\nI0528 21:10:51.985319 655 log.go:172] (0xc00059aa50) Reply frame received for 1\nI0528 21:10:51.985362 655 log.go:172] (0xc00059aa50) (0xc0006fbea0) Create stream\nI0528 21:10:51.985379 655 log.go:172] (0xc00059aa50) (0xc0006fbea0) Stream added, broadcasting: 3\nI0528 21:10:51.986479 655 log.go:172] (0xc00059aa50) Reply frame received for 3\nI0528 21:10:51.986528 655 log.go:172] (0xc00059aa50) (0xc0006106e0) Create stream\nI0528 21:10:51.986544 655 log.go:172] (0xc00059aa50) (0xc0006106e0) Stream added, broadcasting: 5\nI0528 21:10:51.987423 655 log.go:172] (0xc00059aa50) Reply frame received for 5\nI0528 21:10:52.047708 655 log.go:172] (0xc00059aa50) Data frame received for 3\nI0528 21:10:52.047752 655 log.go:172] (0xc0006fbea0) (3) Data frame handling\nI0528 21:10:52.047795 655 log.go:172] (0xc00059aa50) Data frame received for 5\nI0528 21:10:52.047808 655 log.go:172] 
(0xc0006106e0) (5) Data frame handling\nI0528 21:10:52.047821 655 log.go:172] (0xc0006106e0) (5) Data frame sent\nI0528 21:10:52.047832 655 log.go:172] (0xc00059aa50) Data frame received for 5\nI0528 21:10:52.047842 655 log.go:172] (0xc0006106e0) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.115.183 80\nConnection to 10.109.115.183 80 port [tcp/http] succeeded!\nI0528 21:10:52.049610 655 log.go:172] (0xc00059aa50) Data frame received for 1\nI0528 21:10:52.049700 655 log.go:172] (0xc0006fbe00) (1) Data frame handling\nI0528 21:10:52.049738 655 log.go:172] (0xc0006fbe00) (1) Data frame sent\nI0528 21:10:52.050131 655 log.go:172] (0xc00059aa50) (0xc0006fbe00) Stream removed, broadcasting: 1\nI0528 21:10:52.050180 655 log.go:172] (0xc00059aa50) Go away received\nI0528 21:10:52.050451 655 log.go:172] (0xc00059aa50) (0xc0006fbe00) Stream removed, broadcasting: 1\nI0528 21:10:52.050476 655 log.go:172] (0xc00059aa50) (0xc0006fbea0) Stream removed, broadcasting: 3\nI0528 21:10:52.050492 655 log.go:172] (0xc00059aa50) (0xc0006106e0) Stream removed, broadcasting: 5\n" May 28 21:10:52.057: INFO: stdout: "" May 28 21:10:52.057: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1786 execpoddb8b9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 31853' May 28 21:10:52.272: INFO: stderr: "I0528 21:10:52.193575 675 log.go:172] (0xc0003e8210) (0xc000948000) Create stream\nI0528 21:10:52.193616 675 log.go:172] (0xc0003e8210) (0xc000948000) Stream added, broadcasting: 1\nI0528 21:10:52.196550 675 log.go:172] (0xc0003e8210) Reply frame received for 1\nI0528 21:10:52.196602 675 log.go:172] (0xc0003e8210) (0xc0009481e0) Create stream\nI0528 21:10:52.196621 675 log.go:172] (0xc0003e8210) (0xc0009481e0) Stream added, broadcasting: 3\nI0528 21:10:52.198025 675 log.go:172] (0xc0003e8210) Reply frame received for 3\nI0528 21:10:52.198059 675 log.go:172] (0xc0003e8210) (0xc00063bc20) Create stream\nI0528 21:10:52.198070 675 log.go:172] (0xc0003e8210) (0xc00063bc20) Stream added, broadcasting: 5\nI0528 21:10:52.199038 675 log.go:172] (0xc0003e8210) Reply frame received for 5\nI0528 21:10:52.262825 675 log.go:172] (0xc0003e8210) Data frame received for 5\nI0528 21:10:52.262870 675 log.go:172] (0xc00063bc20) (5) Data frame handling\nI0528 21:10:52.262892 675 log.go:172] (0xc00063bc20) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 31853\nConnection to 172.17.0.10 31853 port [tcp/31853] succeeded!\nI0528 21:10:52.262930 675 log.go:172] (0xc0003e8210) Data frame received for 3\nI0528 21:10:52.262971 675 log.go:172] (0xc0009481e0) (3) Data frame handling\nI0528 21:10:52.263042 675 log.go:172] (0xc0003e8210) Data frame received for 5\nI0528 21:10:52.263067 675 log.go:172] (0xc00063bc20) (5) Data frame handling\nI0528 21:10:52.265051 675 log.go:172] (0xc0003e8210) Data frame received for 1\nI0528 21:10:52.265075 675 log.go:172] (0xc000948000) (1) Data frame handling\nI0528 21:10:52.265088 675 log.go:172] (0xc000948000) (1) Data frame sent\nI0528 21:10:52.265101 675 log.go:172] (0xc0003e8210) (0xc000948000) Stream removed, broadcasting: 1\nI0528 21:10:52.265291 675 log.go:172] (0xc0003e8210) Go away received\nI0528 21:10:52.265738 675 log.go:172] (0xc0003e8210) (0xc000948000) Stream removed, broadcasting: 1\nI0528 21:10:52.265770 675 log.go:172] (0xc0003e8210) (0xc0009481e0) Stream removed, broadcasting: 3\nI0528 21:10:52.265792 675 log.go:172] (0xc0003e8210) (0xc00063bc20) Stream removed, broadcasting: 5\n" May 28 21:10:52.272: INFO: stdout: "" May 28 21:10:52.272: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1786 execpoddb8b9 -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 31853' May 28 21:10:52.458: INFO: stderr: "I0528 21:10:52.393993 697 log.go:172] (0xc000a889a0) (0xc000699c20) Create stream\nI0528 21:10:52.394049 697 log.go:172] (0xc000a889a0) (0xc000699c20) Stream added, broadcasting: 1\nI0528 21:10:52.397446 697 log.go:172] (0xc000a889a0) Reply frame received for 1\nI0528 21:10:52.397480 697 log.go:172] (0xc000a889a0) (0xc000699cc0) Create stream\nI0528 21:10:52.397490 697 log.go:172] (0xc000a889a0) (0xc000699cc0) Stream added, broadcasting: 3\nI0528 21:10:52.398687 697 log.go:172] (0xc000a889a0) Reply frame received for 3\nI0528 21:10:52.398740 697 log.go:172] (0xc000a889a0) (0xc000a72000) Create stream\nI0528 21:10:52.398751 697 log.go:172] (0xc000a889a0) (0xc000a72000) Stream added, broadcasting: 5\nI0528 21:10:52.399741 697 log.go:172] (0xc000a889a0) Reply frame received for 5\nI0528 21:10:52.450223 697 log.go:172] (0xc000a889a0) Data frame received for 3\nI0528 21:10:52.450247 697 log.go:172] (0xc000699cc0) (3) Data frame handling\nI0528 21:10:52.450272 697 log.go:172] (0xc000a889a0) Data frame received for 5\nI0528 21:10:52.450291 697 log.go:172] (0xc000a72000) (5) Data frame handling\nI0528 21:10:52.450307 697 log.go:172] (0xc000a72000) (5) Data frame sent\nI0528 21:10:52.450314 697 log.go:172] (0xc000a889a0) Data frame received for 5\nI0528 21:10:52.450319 697 log.go:172] (0xc000a72000) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 31853\nConnection to 172.17.0.8 31853 port [tcp/31853] succeeded!\nI0528 21:10:52.452366 697 log.go:172] (0xc000a889a0) Data frame received for 1\nI0528 21:10:52.452389 697 log.go:172] (0xc000699c20) (1) Data frame handling\nI0528 21:10:52.452405 697 log.go:172] (0xc000699c20) (1) Data frame sent\nI0528 21:10:52.452432 697 log.go:172] (0xc000a889a0) (0xc000699c20) Stream removed, broadcasting: 1\nI0528 21:10:52.452461 697 log.go:172] (0xc000a889a0) Go away received\nI0528 21:10:52.452767 697 log.go:172] (0xc000a889a0) (0xc000699c20) Stream removed, broadcasting: 1\nI0528 21:10:52.452787 697 log.go:172] (0xc000a889a0) (0xc000699cc0) Stream removed, broadcasting: 3\nI0528 21:10:52.452796 697 log.go:172] (0xc000a889a0) (0xc000a72000) Stream removed, broadcasting: 5\n" May 28 21:10:52.458: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:10:52.458: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1786" for this suite. 
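
The connectivity matrix the NodePort test walks through (service DNS name, cluster IP, then each node IP on the allocated NodePort) can be replayed by hand with the values from this run; the NodePort and IPs will differ on another cluster:

  kubectl get svc nodeport-test --namespace=services-1786 -o jsonpath='{.spec.ports[0].nodePort}'
  # From an exec pod in the namespace:
  kubectl exec --namespace=services-1786 execpoddb8b9 -- /bin/sh -x -c 'nc -zv -t -w 2 nodeport-test 80'
  kubectl exec --namespace=services-1786 execpoddb8b9 -- /bin/sh -x -c 'nc -zv -t -w 2 10.109.115.183 80'
  kubectl exec --namespace=services-1786 execpoddb8b9 -- /bin/sh -x -c 'nc -zv -t -w 2 172.17.0.10 31853'
  kubectl exec --namespace=services-1786 execpoddb8b9 -- /bin/sh -x -c 'nc -zv -t -w 2 172.17.0.8 31853'
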
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:12.124 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to create a functioning NodePort service [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":6,"skipped":111,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 28 21:10:52.468: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-lifecycle-hook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64
STEP: create the container to handle the HTTPGet hook request.
[It] should execute prestop exec hook properly [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: create the pod with lifecycle hook
STEP: delete the pod with lifecycle hook
May 28 21:11:02.617: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 28 21:11:02.626: INFO: Pod pod-with-prestop-exec-hook still exists
May 28 21:11:04.626: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 28 21:11:04.631: INFO: Pod pod-with-prestop-exec-hook still exists
May 28 21:11:06.626: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 28 21:11:06.631: INFO: Pod pod-with-prestop-exec-hook still exists
May 28 21:11:08.626: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 28 21:11:08.631: INFO: Pod pod-with-prestop-exec-hook still exists
May 28 21:11:10.626: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear
May 28 21:11:10.631: INFO: Pod pod-with-prestop-exec-hook no longer exists
STEP: check prestop hook
[AfterEach] [k8s.io] Container Lifecycle Hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 28 21:11:10.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-lifecycle-hook-8630" for this suite.
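
A minimal sketch of the preStop exec hook shape being verified above; the pod name and hook command here are illustrative (the suite's hook instead calls back to the handler pod created in BeforeEach to prove the hook actually ran):

  cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: prestop-demo
  spec:
    terminationGracePeriodSeconds: 30
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      lifecycle:
        preStop:
          exec:
            command: ["sh", "-c", "echo goodbye"]
  EOF
  # On delete, the kubelet runs the preStop command before sending SIGTERM
  kubectl delete pod prestop-demo
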
• [SLOW TEST:18.179 seconds]
[k8s.io] Container Lifecycle Hook
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
  when create a pod with lifecycle hook
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42
    should execute prestop exec hook properly [NodeConformance] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":7,"skipped":142,"failed":0}
SSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 28 21:11:10.647: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename crd-watch
STEP: Waiting for a default service account to be provisioned in namespace
[It] watch on custom resource definition objects [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 28 21:11:10.694: INFO: >>> kubeConfig: /root/.kube/config
STEP: Creating first CR
May 28 21:11:10.839: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-28T21:11:10Z generation:1 name:name1 resourceVersion:19893860 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a1281d3c-f66b-4a98-8fe3-a019d8602132] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Creating second CR
May 28 21:11:20.844: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-28T21:11:20Z generation:1 name:name2 resourceVersion:19893905 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:012a6783-5ceb-49bd-aab0-800e4200b498] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying first CR
May 28 21:11:30.852: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-28T21:11:10Z generation:2 name:name1 resourceVersion:19893936 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a1281d3c-f66b-4a98-8fe3-a019d8602132] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Modifying second CR
May 28 21:11:40.858: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-28T21:11:20Z generation:2 name:name2 resourceVersion:19893967 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:012a6783-5ceb-49bd-aab0-800e4200b498] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting first CR
May 28 21:11:50.865: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-28T21:11:10Z generation:2 name:name1 resourceVersion:19893998 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:a1281d3c-f66b-4a98-8fe3-a019d8602132] num:map[num1:9223372036854775807 num2:1000000]]}
STEP: Deleting second CR
May 28 21:12:00.873: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-28T21:11:20Z generation:2 name:name2 resourceVersion:19894028 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:012a6783-5ceb-49bd-aab0-800e4200b498] num:map[num1:9223372036854775807 num2:1000000]]}
[AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 28 21:12:11.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-watch-6292" for this suite.
• [SLOW TEST:60.743 seconds]
[sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  CustomResourceDefinition Watch
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41
    watch on custom resource definition objects [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":8,"skipped":147,"failed":0}
SSSSSSS
------------------------------
[sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 28 21:12:11.391: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating secret with name s-test-opt-del-0378e203-6062-4917-a780-1d4f491a135d
STEP: Creating secret with name s-test-opt-upd-7d3a8e5b-ff82-4ea8-914c-3ed1e380c012
STEP: Creating the pod
STEP: Deleting secret s-test-opt-del-0378e203-6062-4917-a780-1d4f491a135d
STEP: Updating secret s-test-opt-upd-7d3a8e5b-ff82-4ea8-914c-3ed1e380c012
STEP: Creating secret with name s-test-opt-create-6a8eda4e-ff79-4aa5-8f19-a39aac9a97c7
STEP: waiting to observe update in volume
[AfterEach] [sig-storage] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 28 21:12:19.671: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-8231" for this suite.
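
The optional-secret behaviour above follows from optional: true on the volume source: the pod starts even while the referenced secret is missing, and the kubelet projects the keys in once the secret is created or updated. A sketch with illustrative names:

  cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: optional-secret-demo
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
      - name: maybe-secret
        mountPath: /etc/maybe-secret
    volumes:
    - name: maybe-secret
      secret:
        secretName: demo-secret      # may not exist yet
        optional: true
  EOF
  kubectl create secret generic demo-secret --from-literal=data-1=value-1
  # After the kubelet's next sync the key shows up in the running pod
  kubectl exec optional-secret-demo -- cat /etc/maybe-secret/data-1
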
• [SLOW TEST:8.289 seconds]
[sig-storage] Secrets
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34
  optional updates should be reflected in volume [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":154,"failed":0}
SSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 28 21:12:19.681: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename downward-api
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40
[It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 28 21:12:19.777: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c5d66314-368f-44c6-a443-2bc41149cee3" in namespace "downward-api-6792" to be "success or failure"
May 28 21:12:19.780: INFO: Pod "downwardapi-volume-c5d66314-368f-44c6-a443-2bc41149cee3": Phase="Pending", Reason="", readiness=false. Elapsed: 3.552165ms
May 28 21:12:21.784: INFO: Pod "downwardapi-volume-c5d66314-368f-44c6-a443-2bc41149cee3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00737124s
May 28 21:12:23.789: INFO: Pod "downwardapi-volume-c5d66314-368f-44c6-a443-2bc41149cee3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012357175s
STEP: Saw pod success
May 28 21:12:23.789: INFO: Pod "downwardapi-volume-c5d66314-368f-44c6-a443-2bc41149cee3" satisfied condition "success or failure"
May 28 21:12:23.792: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-c5d66314-368f-44c6-a443-2bc41149cee3 container client-container:
STEP: delete the pod
May 28 21:12:23.834: INFO: Waiting for pod downwardapi-volume-c5d66314-368f-44c6-a443-2bc41149cee3 to disappear
May 28 21:12:23.841: INFO: Pod downwardapi-volume-c5d66314-368f-44c6-a443-2bc41149cee3 no longer exists
[AfterEach] [sig-storage] Downward API volume
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 28 21:12:23.841: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "downward-api-6792" for this suite.
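
A sketch of the downward API item mode being tested above; names and the label are illustrative, and mode 0400 matches the kind of [LinuxOnly] file-permission check the test performs:

  cat <<EOF | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: downward-mode-demo
    labels:
      zone: us-east-1
  spec:
    restartPolicy: Never
    containers:
    - name: client-container
      image: busybox
      command: ["sh", "-c", "ls -l /etc/podinfo"]
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels
          mode: 0400
  EOF
  kubectl logs downward-mode-demo   # the labels file should be listed as -r-------- (0400)
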
•{"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":172,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:12:23.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-50542b87-ec6b-409b-a9d9-f7fed0621483 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:12:23.897: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-364" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":11,"skipped":188,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:12:23.920: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:12:23.996: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 28 21:12:25.910: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2883 create -f -' May 28 21:12:29.420: INFO: stderr: "" May 28 21:12:29.420: INFO: stdout: "e2e-test-crd-publish-openapi-5412-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 28 21:12:29.420: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2883 delete e2e-test-crd-publish-openapi-5412-crds test-cr' May 28 21:12:29.603: INFO: stderr: "" May 28 21:12:29.603: INFO: stdout: "e2e-test-crd-publish-openapi-5412-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 28 21:12:29.603: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2883 apply -f -' May 28 21:12:30.430: INFO: stderr: "" May 28 21:12:30.430: INFO: stdout: "e2e-test-crd-publish-openapi-5412-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 28 21:12:30.430: 
INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2883 delete e2e-test-crd-publish-openapi-5412-crds test-cr' May 28 21:12:30.525: INFO: stderr: "" May 28 21:12:30.525: INFO: stdout: "e2e-test-crd-publish-openapi-5412-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 28 21:12:30.525: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5412-crds' May 28 21:12:30.763: INFO: stderr: "" May 28 21:12:30.763: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5412-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:12:32.651: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-2883" for this suite. • [SLOW TEST:8.739 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":12,"skipped":205,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:12:32.658: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:12:36.934: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-2960" for this suite. 
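The emptydir-wrapper test above mounts two kubelet-materialized ("wrapped") volumes, a Secret and a ConfigMap, into the same pod and asserts the wrappers do not conflict, then tears down secret, configmap, and pod in that order. A rough sketch of such a pod, with placeholder object names:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-secrets-and-configmap"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "checker",
				Image:   "busybox",
				Command: []string{"sh", "-c", "ls /etc/secret-volume /etc/configmap-volume"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "secret-volume", MountPath: "/etc/secret-volume"},
					{Name: "configmap-volume", MountPath: "/etc/configmap-volume"},
				},
			}},
			// Both of these are materialized by the kubelet on top of an
			// emptyDir "wrapper"; the test asserts the wrappers don't collide.
			Volumes: []corev1.Volume{
				{Name: "secret-volume", VolumeSource: corev1.VolumeSource{Secret: &corev1.SecretVolumeSource{SecretName: "wrapped-volume-secret"}}},
				{Name: "configmap-volume", VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{LocalObjectReference: corev1.LocalObjectReference{Name: "wrapped-volume-configmap"}}}},
			},
		},
	}
	fmt.Println(len(pod.Spec.Volumes), "wrapped volumes share one pod")
}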
•{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":13,"skipped":223,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:12:36.972: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-37581b45-9bc9-4b56-864a-d637f11a5b40 STEP: Creating a pod to test consume configMaps May 28 21:12:37.041: INFO: Waiting up to 5m0s for pod "pod-configmaps-4073a1db-583e-4b59-8882-b697fd1b5cb5" in namespace "configmap-8304" to be "success or failure" May 28 21:12:37.045: INFO: Pod "pod-configmaps-4073a1db-583e-4b59-8882-b697fd1b5cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.439375ms May 28 21:12:39.049: INFO: Pod "pod-configmaps-4073a1db-583e-4b59-8882-b697fd1b5cb5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008773792s May 28 21:12:41.054: INFO: Pod "pod-configmaps-4073a1db-583e-4b59-8882-b697fd1b5cb5": Phase="Running", Reason="", readiness=true. Elapsed: 4.013201204s May 28 21:12:43.059: INFO: Pod "pod-configmaps-4073a1db-583e-4b59-8882-b697fd1b5cb5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.01798246s STEP: Saw pod success May 28 21:12:43.059: INFO: Pod "pod-configmaps-4073a1db-583e-4b59-8882-b697fd1b5cb5" satisfied condition "success or failure" May 28 21:12:43.062: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-4073a1db-583e-4b59-8882-b697fd1b5cb5 container configmap-volume-test: STEP: delete the pod May 28 21:12:43.140: INFO: Waiting for pod pod-configmaps-4073a1db-583e-4b59-8882-b697fd1b5cb5 to disappear May 28 21:12:43.165: INFO: Pod pod-configmaps-4073a1db-583e-4b59-8882-b697fd1b5cb5 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:12:43.165: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8304" for this suite. 
• [SLOW TEST:6.201 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":230,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:12:43.174: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 28 21:12:43.305: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f45b521-e1cc-404e-bdcd-60b043e6ca96" in namespace "downward-api-3436" to be "success or failure" May 28 21:12:43.314: INFO: Pod "downwardapi-volume-0f45b521-e1cc-404e-bdcd-60b043e6ca96": Phase="Pending", Reason="", readiness=false. Elapsed: 9.158138ms May 28 21:12:45.319: INFO: Pod "downwardapi-volume-0f45b521-e1cc-404e-bdcd-60b043e6ca96": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013673026s May 28 21:12:47.323: INFO: Pod "downwardapi-volume-0f45b521-e1cc-404e-bdcd-60b043e6ca96": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017966713s STEP: Saw pod success May 28 21:12:47.323: INFO: Pod "downwardapi-volume-0f45b521-e1cc-404e-bdcd-60b043e6ca96" satisfied condition "success or failure" May 28 21:12:47.326: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-0f45b521-e1cc-404e-bdcd-60b043e6ca96 container client-container: STEP: delete the pod May 28 21:12:47.387: INFO: Waiting for pod downwardapi-volume-0f45b521-e1cc-404e-bdcd-60b043e6ca96 to disappear May 28 21:12:47.395: INFO: Pod downwardapi-volume-0f45b521-e1cc-404e-bdcd-60b043e6ca96 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:12:47.395: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3436" for this suite. 
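When a container declares no CPU limit, a downward-API item for limits.cpu resolves to the node's allocatable CPU instead, which is exactly the fallback this test asserts. One way to express that item with core/v1 types (container name and divisor are illustrative):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	item := corev1.DownwardAPIVolumeFile{
		Path: "cpu_limit",
		ResourceFieldRef: &corev1.ResourceFieldSelector{
			ContainerName: "client-container",
			Resource:      "limits.cpu",
			// With no limit set on the container, the kubelet substitutes
			// node allocatable CPU, expressed in units of this divisor.
			Divisor: resource.MustParse("1m"),
		},
	}
	fmt.Printf("%+v\n", item)
}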
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":15,"skipped":281,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:12:47.436: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 28 21:12:47.870: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 28 21:12:49.882: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297167, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297167, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297167, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297167, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 21:12:52.926: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:12:53.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-344" for this suite. STEP: Destroying namespace "webhook-344-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.796 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":16,"skipped":296,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:12:53.233: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:13:06.500: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7867" for this suite. • [SLOW TEST:13.277 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. 
[Conformance]","total":278,"completed":17,"skipped":321,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:13:06.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller May 28 21:13:06.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4964' May 28 21:13:08.858: INFO: stderr: "" May 28 21:13:08.858: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 28 21:13:08.858: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4964' May 28 21:13:09.020: INFO: stderr: "" May 28 21:13:09.020: INFO: stdout: "update-demo-nautilus-s4h89 update-demo-nautilus-tkmh4 " May 28 21:13:09.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s4h89 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4964' May 28 21:13:09.130: INFO: stderr: "" May 28 21:13:09.130: INFO: stdout: "" May 28 21:13:09.130: INFO: update-demo-nautilus-s4h89 is created but not running May 28 21:13:14.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4964' May 28 21:13:14.242: INFO: stderr: "" May 28 21:13:14.242: INFO: stdout: "update-demo-nautilus-s4h89 update-demo-nautilus-tkmh4 " May 28 21:13:14.242: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s4h89 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4964' May 28 21:13:14.338: INFO: stderr: "" May 28 21:13:14.338: INFO: stdout: "true" May 28 21:13:14.338: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-s4h89 -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4964' May 28 21:13:14.432: INFO: stderr: "" May 28 21:13:14.432: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 28 21:13:14.432: INFO: validating pod update-demo-nautilus-s4h89 May 28 21:13:14.443: INFO: got data: { "image": "nautilus.jpg" } May 28 21:13:14.443: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 28 21:13:14.443: INFO: update-demo-nautilus-s4h89 is verified up and running May 28 21:13:14.443: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tkmh4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4964' May 28 21:13:14.549: INFO: stderr: "" May 28 21:13:14.549: INFO: stdout: "true" May 28 21:13:14.549: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-tkmh4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4964' May 28 21:13:14.644: INFO: stderr: "" May 28 21:13:14.644: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 28 21:13:14.644: INFO: validating pod update-demo-nautilus-tkmh4 May 28 21:13:14.658: INFO: got data: { "image": "nautilus.jpg" } May 28 21:13:14.658: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 28 21:13:14.658: INFO: update-demo-nautilus-tkmh4 is verified up and running STEP: rolling-update to new replication controller May 28 21:13:14.660: INFO: scanned /root for discovery docs: May 28 21:13:14.660: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-4964' May 28 21:13:37.354: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 28 21:13:37.354: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 28 21:13:37.355: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4964' May 28 21:13:37.455: INFO: stderr: "" May 28 21:13:37.455: INFO: stdout: "update-demo-kitten-p7b57 update-demo-kitten-wh7hr update-demo-nautilus-s4h89 " STEP: Replicas for name=update-demo: expected=2 actual=3 May 28 21:13:42.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4964' May 28 21:13:42.571: INFO: stderr: "" May 28 21:13:42.571: INFO: stdout: "update-demo-kitten-p7b57 update-demo-kitten-wh7hr " May 28 21:13:42.571: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-p7b57 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4964' May 28 21:13:42.658: INFO: stderr: "" May 28 21:13:42.658: INFO: stdout: "true" May 28 21:13:42.658: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-p7b57 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4964' May 28 21:13:42.749: INFO: stderr: "" May 28 21:13:42.749: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 28 21:13:42.749: INFO: validating pod update-demo-kitten-p7b57 May 28 21:13:42.760: INFO: got data: { "image": "kitten.jpg" } May 28 21:13:42.760: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 28 21:13:42.760: INFO: update-demo-kitten-p7b57 is verified up and running May 28 21:13:42.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wh7hr -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4964' May 28 21:13:42.860: INFO: stderr: "" May 28 21:13:42.860: INFO: stdout: "true" May 28 21:13:42.860: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-wh7hr -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4964' May 28 21:13:42.952: INFO: stderr: "" May 28 21:13:42.952: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 28 21:13:42.952: INFO: validating pod update-demo-kitten-wh7hr May 28 21:13:42.957: INFO: got data: { "image": "kitten.jpg" } May 28 21:13:42.957: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 28 21:13:42.957: INFO: update-demo-kitten-wh7hr is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:13:42.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4964" for this suite. 
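Every kubectl call in the rolling-update sequence above renders pod state through a go-template; the exists helper is kubectl-specific, but the name-listing template is plain text/template and can be exercised directly. The data below is a stand-in for the PodList JSON kubectl receives:

package main

import (
	"os"
	"text/template"
)

func main() {
	// The same go-template kubectl evaluated above to list pod names.
	t := template.Must(template.New("names").Parse(`{{range .items}}{{.metadata.name}} {{end}}`))
	// Hypothetical stand-in for the PodList the API server returns.
	data := map[string]interface{}{
		"items": []map[string]interface{}{
			{"metadata": map[string]interface{}{"name": "update-demo-kitten-p7b57"}},
			{"metadata": map[string]interface{}{"name": "update-demo-kitten-wh7hr"}},
		},
	}
	// Prints: update-demo-kitten-p7b57 update-demo-kitten-wh7hr
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}

The test keeps re-rendering until the pod set matches expected=2, which is why the intermediate three-pod state above is simply retried five seconds later.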
• [SLOW TEST:36.454 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":18,"skipped":322,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:13:42.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-4264a9fe-edf7-4c4f-8694-3ded1ed6abf1 STEP: Creating a pod to test consume configMaps May 28 21:13:43.088: INFO: Waiting up to 5m0s for pod "pod-configmaps-4af63533-93ee-4caa-940d-0d22f2b42a5d" in namespace "configmap-8269" to be "success or failure" May 28 21:13:43.144: INFO: Pod "pod-configmaps-4af63533-93ee-4caa-940d-0d22f2b42a5d": Phase="Pending", Reason="", readiness=false. Elapsed: 56.542111ms May 28 21:13:45.155: INFO: Pod "pod-configmaps-4af63533-93ee-4caa-940d-0d22f2b42a5d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066714629s May 28 21:13:47.193: INFO: Pod "pod-configmaps-4af63533-93ee-4caa-940d-0d22f2b42a5d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105548664s STEP: Saw pod success May 28 21:13:47.194: INFO: Pod "pod-configmaps-4af63533-93ee-4caa-940d-0d22f2b42a5d" satisfied condition "success or failure" May 28 21:13:47.197: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-4af63533-93ee-4caa-940d-0d22f2b42a5d container configmap-volume-test: STEP: delete the pod May 28 21:13:47.251: INFO: Waiting for pod pod-configmaps-4af63533-93ee-4caa-940d-0d22f2b42a5d to disappear May 28 21:13:47.419: INFO: Pod pod-configmaps-4af63533-93ee-4caa-940d-0d22f2b42a5d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:13:47.419: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8269" for this suite. 
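The "mappings and Item mode" flavor maps a single ConfigMap key to a chosen relative path and gives that one file its own mode. A sketch of the volume source (key, path, ConfigMap name, and mode are placeholders):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	itemMode := int32(0400)
	src := corev1.ConfigMapVolumeSource{
		LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
		// Project only this key, at a chosen relative path, with its own mode.
		Items: []corev1.KeyToPath{{Key: "data-1", Path: "path/to/data-1", Mode: &itemMode}},
	}
	fmt.Printf("%+v\n", src)
}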
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":19,"skipped":345,"failed":0} S ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:13:47.426: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] creating/deleting custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:13:47.607: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:13:48.691: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8338" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":20,"skipped":346,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:13:48.855: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 28 21:13:49.040: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1506' May 28 21:13:49.150: INFO: stderr: "" May 28 21:13:49.150: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 28 21:13:54.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1506 -o json' May 28 21:13:54.297: INFO: stderr: 
"" May 28 21:13:54.297: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-28T21:13:49Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1506\",\n \"resourceVersion\": \"19894802\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1506/pods/e2e-test-httpd-pod\",\n \"uid\": \"4ed0ba8e-0242-41c6-88c8-197f71c0fbec\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-qhvkj\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker2\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-qhvkj\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-qhvkj\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-28T21:13:49Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-28T21:13:52Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-28T21:13:52Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-28T21:13:49Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://92b3cf31420ca87e9c4c8737178fe8d7a6df365725430f2e2382b26c30d3b32f\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-28T21:13:51Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.8\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.2.15\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.2.15\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-28T21:13:49Z\"\n }\n}\n" STEP: replace the image in the pod May 28 21:13:54.298: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1506' May 28 21:13:54.571: INFO: stderr: "" May 28 21:13:54.571: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 28 
21:13:54.595: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1506' May 28 21:14:09.486: INFO: stderr: "" May 28 21:14:09.486: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:14:09.486: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1506" for this suite. • [SLOW TEST:20.639 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":21,"skipped":372,"failed":0} [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:14:09.494: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 28 21:14:14.155: INFO: Successfully updated pod "labelsupdate56dc41c0-791f-4dbc-b277-226a5e9eead2" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:14:18.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2100" for this suite. 
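The projected downward-API test above exposes the pod's labels as a file and then relabels the running pod (the "Successfully updated pod" entry); the kubelet rewrites the file in place, with no container restart. A sketch of the projected volume (volume and path names are placeholders):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "podinfo",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{{
					DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							// metadata.labels is one of the few fields the
							// kubelet keeps refreshing after pod start.
							Path:     "labels",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.labels"},
						}},
					},
				}},
			},
		},
	}
	fmt.Printf("%+v\n", vol)
}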
• [SLOW TEST:8.716 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":372,"failed":0} SSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:14:18.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on tmpfs May 28 21:14:18.338: INFO: Waiting up to 5m0s for pod "pod-27a43eee-81d2-4f8f-9ee4-a216460d69d4" in namespace "emptydir-9569" to be "success or failure" May 28 21:14:18.341: INFO: Pod "pod-27a43eee-81d2-4f8f-9ee4-a216460d69d4": Phase="Pending", Reason="", readiness=false. Elapsed: 3.025109ms May 28 21:14:20.435: INFO: Pod "pod-27a43eee-81d2-4f8f-9ee4-a216460d69d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.096824183s May 28 21:14:22.440: INFO: Pod "pod-27a43eee-81d2-4f8f-9ee4-a216460d69d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.101570439s STEP: Saw pod success May 28 21:14:22.440: INFO: Pod "pod-27a43eee-81d2-4f8f-9ee4-a216460d69d4" satisfied condition "success or failure" May 28 21:14:22.443: INFO: Trying to get logs from node jerma-worker2 pod pod-27a43eee-81d2-4f8f-9ee4-a216460d69d4 container test-container: STEP: delete the pod May 28 21:14:22.567: INFO: Waiting for pod pod-27a43eee-81d2-4f8f-9ee4-a216460d69d4 to disappear May 28 21:14:22.709: INFO: Pod pod-27a43eee-81d2-4f8f-9ee4-a216460d69d4 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:14:22.709: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9569" for this suite. 
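Setting the emptyDir medium to Memory backs the volume with tmpfs rather than node disk, and the test's container checks both the mount type and the default mode from inside. The volume itself is a single field:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	vol := corev1.Volume{
		Name: "test-volume",
		VolumeSource: corev1.VolumeSource{
			// Medium "Memory" mounts a tmpfs; the default "" uses node storage.
			EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
		},
	}
	fmt.Printf("%+v\n", vol)
}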
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":380,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:14:22.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-e3382b51-e4c3-4bac-93ed-747999af685d STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:14:28.859: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7853" for this suite. • [SLOW TEST:6.143 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":24,"skipped":390,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:14:28.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 28 21:14:28.922: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 28 21:14:28.971: INFO: Waiting for terminating namespaces to be deleted... 
May 28 21:14:28.974: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 28 21:14:28.980: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 28 21:14:28.980: INFO: Container kube-proxy ready: true, restart count 0 May 28 21:14:28.980: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 28 21:14:28.980: INFO: Container kindnet-cni ready: true, restart count 2 May 28 21:14:28.980: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 28 21:14:28.986: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 28 21:14:28.986: INFO: Container kube-hunter ready: false, restart count 0 May 28 21:14:28.986: INFO: pod-configmaps-875ce855-efe0-4c51-8287-07dc0e01b41a from configmap-7853 started at 2020-05-28 21:14:22 +0000 UTC (2 container statuses recorded) May 28 21:14:28.986: INFO: Container configmap-volume-binary-test ready: false, restart count 0 May 28 21:14:28.986: INFO: Container configmap-volume-data-test ready: true, restart count 0 May 28 21:14:28.986: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 28 21:14:28.986: INFO: Container kindnet-cni ready: true, restart count 2 May 28 21:14:28.986: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 28 21:14:28.986: INFO: Container kube-bench ready: false, restart count 0 May 28 21:14:28.986: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 28 21:14:28.986: INFO: Container kube-proxy ready: true, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.16134da60ee0c024], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.16134da60fc4461a], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:14:30.005: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-2269" for this suite.
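The scheduling failure above is produced by giving the pod a NodeSelector that no node satisfies; the test then only has to watch for the FailedScheduling events it logged. A sketch of such a pod (label key/value and image are placeholders; the pod is never scheduled, so the image is irrelevant):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "restricted-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "pause", Image: "k8s.gcr.io/pause:3.1"}},
			// No node carries this label, so the scheduler keeps reporting
			// "0/3 nodes are available: 3 node(s) didn't match node selector."
			NodeSelector: map[string]string{"label": "nonempty"},
		},
	}
	fmt.Printf("%+v\n", pod.Spec.NodeSelector)
}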
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":25,"skipped":412,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:14:30.015: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 28 21:14:30.102: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2539330f-82f5-4479-812c-e010722ee841" in namespace "projected-4787" to be "success or failure" May 28 21:14:30.106: INFO: Pod "downwardapi-volume-2539330f-82f5-4479-812c-e010722ee841": Phase="Pending", Reason="", readiness=false. Elapsed: 3.414314ms May 28 21:14:32.228: INFO: Pod "downwardapi-volume-2539330f-82f5-4479-812c-e010722ee841": Phase="Pending", Reason="", readiness=false. Elapsed: 2.125506468s May 28 21:14:34.248: INFO: Pod "downwardapi-volume-2539330f-82f5-4479-812c-e010722ee841": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.145398712s STEP: Saw pod success May 28 21:14:34.248: INFO: Pod "downwardapi-volume-2539330f-82f5-4479-812c-e010722ee841" satisfied condition "success or failure" May 28 21:14:34.250: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2539330f-82f5-4479-812c-e010722ee841 container client-container: STEP: delete the pod May 28 21:14:34.264: INFO: Waiting for pod downwardapi-volume-2539330f-82f5-4479-812c-e010722ee841 to disappear May 28 21:14:34.269: INFO: Pod downwardapi-volume-2539330f-82f5-4479-812c-e010722ee841 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:14:34.269: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4787" for this suite. 
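Nearly every test in this log waits with "Waiting up to 5m0s for pod ... to be 'success or failure'", polling the pod phase until it is terminal. A sketch of that loop using client-go's v1.17-era signatures (newer client-go adds a context.Context argument to Get; the namespace and pod name here are placeholders):

package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	// Poll the pod phase until it is terminal, mirroring the
	// "waiting for pod ... to be success or failure" loop in the log.
	err = wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := client.CoreV1().Pods("projected-4787").Get("downwardapi-volume-pod", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		switch pod.Status.Phase {
		case corev1.PodSucceeded:
			return true, nil // condition satisfied
		case corev1.PodFailed:
			return false, fmt.Errorf("pod failed")
		}
		return false, nil // still Pending/Running; keep polling
	})
	fmt.Println("done:", err)
}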
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":418,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:14:34.275: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 28 21:14:34.511: INFO: Waiting up to 5m0s for pod "var-expansion-1517bdbe-9b1e-4223-bb12-5341112aadb0" in namespace "var-expansion-739" to be "success or failure" May 28 21:14:34.603: INFO: Pod "var-expansion-1517bdbe-9b1e-4223-bb12-5341112aadb0": Phase="Pending", Reason="", readiness=false. Elapsed: 91.530561ms May 28 21:14:36.662: INFO: Pod "var-expansion-1517bdbe-9b1e-4223-bb12-5341112aadb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.15021133s May 28 21:14:38.666: INFO: Pod "var-expansion-1517bdbe-9b1e-4223-bb12-5341112aadb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.154634018s STEP: Saw pod success May 28 21:14:38.666: INFO: Pod "var-expansion-1517bdbe-9b1e-4223-bb12-5341112aadb0" satisfied condition "success or failure" May 28 21:14:38.670: INFO: Trying to get logs from node jerma-worker pod var-expansion-1517bdbe-9b1e-4223-bb12-5341112aadb0 container dapi-container: STEP: delete the pod May 28 21:14:38.690: INFO: Waiting for pod var-expansion-1517bdbe-9b1e-4223-bb12-5341112aadb0 to disappear May 28 21:14:38.701: INFO: Pod var-expansion-1517bdbe-9b1e-4223-bb12-5341112aadb0 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:14:38.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-739" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":27,"skipped":434,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:14:38.708: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 28 21:14:38.793: INFO: Waiting up to 5m0s for pod "client-containers-ce7c1480-2b77-445f-a12f-9a3822e12cb6" in namespace "containers-9791" to be "success or failure" May 28 21:14:38.814: INFO: Pod "client-containers-ce7c1480-2b77-445f-a12f-9a3822e12cb6": Phase="Pending", Reason="", readiness=false. Elapsed: 21.022198ms May 28 21:14:40.818: INFO: Pod "client-containers-ce7c1480-2b77-445f-a12f-9a3822e12cb6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025487856s May 28 21:14:42.822: INFO: Pod "client-containers-ce7c1480-2b77-445f-a12f-9a3822e12cb6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02898237s STEP: Saw pod success May 28 21:14:42.822: INFO: Pod "client-containers-ce7c1480-2b77-445f-a12f-9a3822e12cb6" satisfied condition "success or failure" May 28 21:14:42.825: INFO: Trying to get logs from node jerma-worker pod client-containers-ce7c1480-2b77-445f-a12f-9a3822e12cb6 container test-container: STEP: delete the pod May 28 21:14:42.870: INFO: Waiting for pod client-containers-ce7c1480-2b77-445f-a12f-9a3822e12cb6 to disappear May 28 21:14:42.939: INFO: Pod client-containers-ce7c1480-2b77-445f-a12f-9a3822e12cb6 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:14:42.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9791" for this suite. 
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":444,"failed":0} SSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:14:42.947: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:14:43.001: INFO: Creating deployment "test-recreate-deployment" May 28 21:14:43.010: INFO: Waiting deployment "test-recreate-deployment" to be updated to revision 1 May 28 21:14:43.063: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 28 21:14:45.071: INFO: Waiting deployment "test-recreate-deployment" to complete May 28 21:14:45.074: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297283, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297283, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297283, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297283, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 21:14:47.078: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 28 21:14:47.086: INFO: Updating deployment test-recreate-deployment May 28 21:14:47.086: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with olds pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 28 21:14:47.666: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-3444 /apis/apps/v1/namespaces/deployment-3444/deployments/test-recreate-deployment bafe43d8-6dfc-4637-a301-f18e99287175 19895217 2 2020-05-28 21:14:43 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil 
nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000cbe608 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-28 21:14:47 +0000 UTC,LastTransitionTime:2020-05-28 21:14:47 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-28 21:14:47 +0000 UTC,LastTransitionTime:2020-05-28 21:14:43 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 28 21:14:47.670: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-3444 /apis/apps/v1/namespaces/deployment-3444/replicasets/test-recreate-deployment-5f94c574ff 052b11cd-4d27-45fd-ba5f-075b2b7161f8 19895215 1 2020-05-28 21:14:47 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment bafe43d8-6dfc-4637-a301-f18e99287175 0xc000cbed07 0xc000cbed08}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000cbed78 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 28 21:14:47.670: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 28 21:14:47.670: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-3444 /apis/apps/v1/namespaces/deployment-3444/replicasets/test-recreate-deployment-799c574856 b423d945-5311-4472-9f5d-c255ed9e5f79 19895205 2 2020-05-28 21:14:43 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] 
map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment bafe43d8-6dfc-4637-a301-f18e99287175 0xc000cbede7 0xc000cbede8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000cbee58 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 28 21:14:47.684: INFO: Pod "test-recreate-deployment-5f94c574ff-p27cf" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-p27cf test-recreate-deployment-5f94c574ff- deployment-3444 /api/v1/namespaces/deployment-3444/pods/test-recreate-deployment-5f94c574ff-p27cf 960e7883-edaf-4b76-ab81-1b9e539f46cb 19895216 0 2020-05-28 21:14:47 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 052b11cd-4d27-45fd-ba5f-075b2b7161f8 0xc000e7d9b7 0xc000e7d9b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-97zjv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-97zjv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-97zjv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 21:14:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 21:14:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 21:14:47 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 21:14:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-28 21:14:47 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:14:47.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3444" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":29,"skipped":451,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:14:47.700: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-0991817f-8f84-4e82-8966-7a5ed39dc59c STEP: Creating a pod to test consume configMaps May 28 21:14:47.842: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7cf58d43-2168-4c1b-a0f1-a6128f74955e" in namespace "projected-4593" to be "success or failure" May 28 21:14:47.854: INFO: Pod "pod-projected-configmaps-7cf58d43-2168-4c1b-a0f1-a6128f74955e": Phase="Pending", Reason="", readiness=false. Elapsed: 12.475984ms May 28 21:14:49.858: INFO: Pod "pod-projected-configmaps-7cf58d43-2168-4c1b-a0f1-a6128f74955e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016722377s May 28 21:14:52.056: INFO: Pod "pod-projected-configmaps-7cf58d43-2168-4c1b-a0f1-a6128f74955e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.214788068s STEP: Saw pod success May 28 21:14:52.057: INFO: Pod "pod-projected-configmaps-7cf58d43-2168-4c1b-a0f1-a6128f74955e" satisfied condition "success or failure" May 28 21:14:52.060: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-7cf58d43-2168-4c1b-a0f1-a6128f74955e container projected-configmap-volume-test: STEP: delete the pod May 28 21:14:52.127: INFO: Waiting for pod pod-projected-configmaps-7cf58d43-2168-4c1b-a0f1-a6128f74955e to disappear May 28 21:14:52.145: INFO: Pod pod-projected-configmaps-7cf58d43-2168-4c1b-a0f1-a6128f74955e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:14:52.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4593" for this suite. 
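The Recreate behavior exercised by the deployment test above hinges on a single field in the spec. The following is a minimal Go sketch, not the e2e framework's own code: it assumes k8s.io/api and k8s.io/apimachinery are on the module path, and reuses the name, label, and image from the log.

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        labels := map[string]string{"name": "sample-pod-3"}
        d := appsv1.Deployment{
            ObjectMeta: metav1.ObjectMeta{Name: "test-recreate-deployment"},
            Spec: appsv1.DeploymentSpec{
                Replicas: int32Ptr(1),
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                // Recreate: the old ReplicaSet is scaled to zero before the
                // new one is scaled up, so old and new pods never overlap.
                Strategy: appsv1.DeploymentStrategy{Type: appsv1.RecreateDeploymentStrategyType},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{
                        Containers: []corev1.Container{{
                            Name:  "httpd",
                            Image: "docker.io/library/httpd:2.4.38-alpine",
                        }},
                    },
                },
            },
        }
        out, _ := json.MarshalIndent(d, "", "  ")
        fmt.Println(string(out))
    }

This matches the ReplicaSet dumps above: the old set test-recreate-deployment-799c574856 is already at Replicas:*0 before the new 5f94c574ff pod becomes available.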
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":455,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:14:52.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 28 21:14:52.281: INFO: Waiting up to 5m0s for pod "downwardapi-volume-777e9ae1-b924-41e2-bc03-4403563d3f4e" in namespace "projected-2624" to be "success or failure" May 28 21:14:52.283: INFO: Pod "downwardapi-volume-777e9ae1-b924-41e2-bc03-4403563d3f4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.501289ms May 28 21:14:54.399: INFO: Pod "downwardapi-volume-777e9ae1-b924-41e2-bc03-4403563d3f4e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118038905s May 28 21:14:56.403: INFO: Pod "downwardapi-volume-777e9ae1-b924-41e2-bc03-4403563d3f4e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.121980199s STEP: Saw pod success May 28 21:14:56.403: INFO: Pod "downwardapi-volume-777e9ae1-b924-41e2-bc03-4403563d3f4e" satisfied condition "success or failure" May 28 21:14:56.406: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-777e9ae1-b924-41e2-bc03-4403563d3f4e container client-container: STEP: delete the pod May 28 21:14:56.476: INFO: Waiting for pod downwardapi-volume-777e9ae1-b924-41e2-bc03-4403563d3f4e to disappear May 28 21:14:56.478: INFO: Pod downwardapi-volume-777e9ae1-b924-41e2-bc03-4403563d3f4e no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:14:56.478: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2624" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":31,"skipped":532,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:14:56.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 28 21:14:56.696: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:15:04.770: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-600" for this suite. • [SLOW TEST:8.316 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":32,"skipped":539,"failed":0} SS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:15:04.860: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:15:05.031: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-2861" for this suite. 
•{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":33,"skipped":541,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:15:05.038: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-23e2b7eb-861b-4823-b46c-5beaa34a4c1c in namespace container-probe-1842 May 28 21:15:11.145: INFO: Started pod liveness-23e2b7eb-861b-4823-b46c-5beaa34a4c1c in namespace container-probe-1842 STEP: checking the pod's current state and verifying that restartCount is present May 28 21:15:11.148: INFO: Initial restart count of pod liveness-23e2b7eb-861b-4823-b46c-5beaa34a4c1c is 0 May 28 21:15:27.409: INFO: Restart count of pod container-probe-1842/liveness-23e2b7eb-861b-4823-b46c-5beaa34a4c1c is now 1 (16.261487884s elapsed) May 28 21:15:45.445: INFO: Restart count of pod container-probe-1842/liveness-23e2b7eb-861b-4823-b46c-5beaa34a4c1c is now 2 (34.297190617s elapsed) May 28 21:16:05.500: INFO: Restart count of pod container-probe-1842/liveness-23e2b7eb-861b-4823-b46c-5beaa34a4c1c is now 3 (54.351995028s elapsed) May 28 21:16:27.546: INFO: Restart count of pod container-probe-1842/liveness-23e2b7eb-861b-4823-b46c-5beaa34a4c1c is now 4 (1m16.398294103s elapsed) May 28 21:17:25.720: INFO: Restart count of pod container-probe-1842/liveness-23e2b7eb-861b-4823-b46c-5beaa34a4c1c is now 5 (2m14.572558867s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:17:25.736: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1842" for this suite. 
• [SLOW TEST:140.710 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":556,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:17:25.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode May 28 21:17:25.850: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-5074" to be "success or failure" May 28 21:17:25.852: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.550005ms May 28 21:17:27.856: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006577559s May 28 21:17:29.897: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.047817125s May 28 21:17:31.901: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.051777327s STEP: Saw pod success May 28 21:17:31.901: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 28 21:17:31.904: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 28 21:17:31.938: INFO: Waiting for pod pod-host-path-test to disappear May 28 21:17:31.943: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:17:31.943: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-5074" for this suite. 
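The hostPath test above mounts a directory from the node and verifies the mode the container observes on the mount point. A minimal sketch of the volume and its mount; the host path is an illustrative assumption, the real test uses its own path:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // A hostPath volume exposes a node directory directly to the pod.
        vol := corev1.Volume{
            Name: "test-volume",
            VolumeSource: corev1.VolumeSource{
                HostPath: &corev1.HostPathVolumeSource{Path: "/tmp/test-host-path"},
            },
        }
        mount := corev1.VolumeMount{Name: "test-volume", MountPath: "/test-volume"}

        outVol, _ := json.MarshalIndent(vol, "", "  ")
        outMnt, _ := json.MarshalIndent(mount, "", "  ")
        fmt.Println(string(outVol))
        fmt.Println(string(outMnt))
    }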
• [SLOW TEST:6.200 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":35,"skipped":590,"failed":0} SSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:17:31.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 28 21:17:37.147: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:17:37.210: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-9739" for this suite. 
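Adoption and release in the ReplicaSet test above are purely selector-driven: a running pod that matches the selector and has no controller ownerReference is adopted, and editing its label out of the selector releases it again. A minimal sketch reusing the pod label from the log:

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func int32Ptr(i int32) *int32 { return &i }

    func main() {
        labels := map[string]string{"name": "pod-adoption-release"}
        rs := appsv1.ReplicaSet{
            ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption-release"},
            Spec: appsv1.ReplicaSetSpec{
                Replicas: int32Ptr(1),
                // Any orphan pod matching this selector is adopted; changing
                // the pod's label so it no longer matches orphans it again.
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name: "httpd", Image: "docker.io/library/httpd:2.4.38-alpine",
                    }}},
                },
            },
        }
        out, _ := json.MarshalIndent(rs, "", "  ")
        fmt.Println(string(out))
    }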
• [SLOW TEST:5.324 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":36,"skipped":594,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:17:37.274: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-1052, will wait for the garbage collector to delete the pods May 28 21:17:43.539: INFO: Deleting Job.batch foo took: 6.208022ms May 28 21:17:43.639: INFO: Terminating Job.batch foo pods took: 100.226558ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:18:19.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-1052" for this suite. 
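The "will wait for the garbage collector to delete the pods" step in the job test above corresponds to deleting with a propagation policy rather than orphaning the pods. A minimal sketch of the delete options; the client-go call shape in the trailing comment is an assumption, since the Delete signature varies across client-go releases:

    package main

    import (
        "encoding/json"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Foreground propagation: the Job object only disappears after the
        // garbage collector has removed its dependent pods.
        policy := metav1.DeletePropagationForeground
        opts := metav1.DeleteOptions{PropagationPolicy: &policy}

        out, _ := json.MarshalIndent(opts, "", "  ")
        fmt.Println(string(out))
        // With client-go these options would be passed to the delete call,
        // e.g. clientset.BatchV1().Jobs(ns).Delete(ctx, "foo", opts) on
        // recent releases (older releases take a pointer and no context).
    }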
• [SLOW TEST:42.277 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":37,"skipped":611,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:18:19.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:18:19.680: INFO: Create a RollingUpdate DaemonSet May 28 21:18:19.684: INFO: Check that daemon pods launch on every node of the cluster May 28 21:18:19.688: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:18:19.694: INFO: Number of nodes with available pods: 0 May 28 21:18:19.694: INFO: Node jerma-worker is running more than one daemon pod May 28 21:18:20.700: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:18:20.703: INFO: Number of nodes with available pods: 0 May 28 21:18:20.703: INFO: Node jerma-worker is running more than one daemon pod May 28 21:18:21.899: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:18:21.959: INFO: Number of nodes with available pods: 0 May 28 21:18:21.959: INFO: Node jerma-worker is running more than one daemon pod May 28 21:18:22.827: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:18:22.830: INFO: Number of nodes with available pods: 0 May 28 21:18:22.830: INFO: Node jerma-worker is running more than one daemon pod May 28 21:18:23.699: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:18:23.707: INFO: Number of nodes with available pods: 0 May 28 21:18:23.707: INFO: Node jerma-worker is running more than one daemon pod May 28 21:18:24.701: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:18:24.706: INFO: Number of nodes with available pods: 1 May 28 21:18:24.706: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:18:25.699: INFO: DaemonSet 
pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:18:25.704: INFO: Number of nodes with available pods: 2 May 28 21:18:25.704: INFO: Number of running nodes: 2, number of available pods: 2 May 28 21:18:25.704: INFO: Update the DaemonSet to trigger a rollout May 28 21:18:25.711: INFO: Updating DaemonSet daemon-set May 28 21:18:30.745: INFO: Roll back the DaemonSet before rollout is complete May 28 21:18:30.751: INFO: Updating DaemonSet daemon-set May 28 21:18:30.751: INFO: Make sure DaemonSet rollback is complete May 28 21:18:30.777: INFO: Wrong image for pod: daemon-set-znmvj. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 28 21:18:30.777: INFO: Pod daemon-set-znmvj is not available May 28 21:18:30.894: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:18:31.941: INFO: Wrong image for pod: daemon-set-znmvj. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 28 21:18:31.941: INFO: Pod daemon-set-znmvj is not available May 28 21:18:32.003: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:18:32.899: INFO: Pod daemon-set-fnwhp is not available May 28 21:18:32.903: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6978, will wait for the garbage collector to delete the pods May 28 21:18:32.970: INFO: Deleting DaemonSet.extensions daemon-set took: 7.092917ms May 28 21:18:33.270: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.267274ms May 28 21:18:39.273: INFO: Number of nodes with available pods: 0 May 28 21:18:39.273: INFO: Number of running nodes: 0, number of available pods: 0 May 28 21:18:39.278: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6978/daemonsets","resourceVersion":"19896277"},"items":null} May 28 21:18:39.280: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6978/pods","resourceVersion":"19896277"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:18:39.294: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6978" for this suite. 
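The daemonset test above rolls the template forward and then back mid-rollout; "without unnecessary restarts" means pods still running the old template are left untouched when that template is restored. A minimal RollingUpdate DaemonSet sketch; the label key is illustrative and the image is the rollback target from the log:

    package main

    import (
        "encoding/json"
        "fmt"

        appsv1 "k8s.io/api/apps/v1"
        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        labels := map[string]string{"daemonset-name": "daemon-set"}
        ds := appsv1.DaemonSet{
            ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
            Spec: appsv1.DaemonSetSpec{
                Selector: &metav1.LabelSelector{MatchLabels: labels},
                // RollingUpdate replaces pods node by node; rolling back to
                // the previous template mid-rollout only replaces pods that
                // already moved to the bad template (foo:non-existent above).
                UpdateStrategy: appsv1.DaemonSetUpdateStrategy{
                    Type: appsv1.RollingUpdateDaemonSetStrategyType,
                },
                Template: corev1.PodTemplateSpec{
                    ObjectMeta: metav1.ObjectMeta{Labels: labels},
                    Spec: corev1.PodSpec{Containers: []corev1.Container{{
                        Name: "app", Image: "docker.io/library/httpd:2.4.38-alpine",
                    }}},
                },
            },
        }
        out, _ := json.MarshalIndent(ds, "", "  ")
        fmt.Println(string(out))
    }

Note the taint handling in the log: without a matching toleration, DaemonSet pods skip the control-plane node, so only the two workers count toward availability.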
• [SLOW TEST:19.806 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":38,"skipped":624,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:18:39.358: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 28 21:18:39.418: INFO: Waiting up to 5m0s for pod "pod-a036b298-bb6f-4de2-b32f-871d5faf8f64" in namespace "emptydir-1717" to be "success or failure" May 28 21:18:39.421: INFO: Pod "pod-a036b298-bb6f-4de2-b32f-871d5faf8f64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.472002ms May 28 21:18:41.425: INFO: Pod "pod-a036b298-bb6f-4de2-b32f-871d5faf8f64": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006881194s May 28 21:18:43.456: INFO: Pod "pod-a036b298-bb6f-4de2-b32f-871d5faf8f64": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037727292s STEP: Saw pod success May 28 21:18:43.456: INFO: Pod "pod-a036b298-bb6f-4de2-b32f-871d5faf8f64" satisfied condition "success or failure" May 28 21:18:43.459: INFO: Trying to get logs from node jerma-worker pod pod-a036b298-bb6f-4de2-b32f-871d5faf8f64 container test-container: STEP: delete the pod May 28 21:18:43.672: INFO: Waiting for pod pod-a036b298-bb6f-4de2-b32f-871d5faf8f64 to disappear May 28 21:18:43.707: INFO: Pod pod-a036b298-bb6f-4de2-b32f-871d5faf8f64 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:18:43.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1717" for this suite. 
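The emptydir test above is the (non-root,0644,tmpfs) variant: a memory-backed emptyDir written as a non-root UID with file mode 0644. A minimal PodSpec sketch; the UID, image, and mount path are illustrative assumptions:

    package main

    import (
        "encoding/json"
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func int64Ptr(i int64) *int64 { return &i }

    func main() {
        spec := corev1.PodSpec{
            // Run as a non-root UID; the test container then writes a file
            // with mode 0644 into the tmpfs-backed volume and reads it back.
            SecurityContext: &corev1.PodSecurityContext{RunAsUser: int64Ptr(1000)},
            Volumes: []corev1.Volume{{
                Name: "test-volume",
                VolumeSource: corev1.VolumeSource{
                    // Medium "Memory" makes the emptyDir a tmpfs mount.
                    EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumMemory},
                },
            }},
            Containers: []corev1.Container{{
                Name:         "test-container",
                Image:        "busybox",
                VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
            }},
        }
        out, _ := json.MarshalIndent(spec, "", "  ")
        fmt.Println(string(out))
    }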
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":39,"skipped":648,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:18:43.714: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:18:43.794: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-889b8f5f-838f-43fc-a0fa-3aca05a7b9c2" in namespace "security-context-test-3950" to be "success or failure" May 28 21:18:43.797: INFO: Pod "busybox-readonly-false-889b8f5f-838f-43fc-a0fa-3aca05a7b9c2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.045535ms May 28 21:18:45.800: INFO: Pod "busybox-readonly-false-889b8f5f-838f-43fc-a0fa-3aca05a7b9c2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006583397s May 28 21:18:47.805: INFO: Pod "busybox-readonly-false-889b8f5f-838f-43fc-a0fa-3aca05a7b9c2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011618175s May 28 21:18:47.805: INFO: Pod "busybox-readonly-false-889b8f5f-838f-43fc-a0fa-3aca05a7b9c2" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:18:47.805: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3950" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":658,"failed":0} SSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:18:47.815: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4895 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 28 21:18:47.914: INFO: Found 0 stateful pods, waiting for 3 May 28 21:18:57.919: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 28 21:18:57.919: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 28 21:18:57.919: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 28 21:19:07.919: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 28 21:19:07.919: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 28 21:19:07.919: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 28 21:19:07.947: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 28 21:19:18.003: INFO: Updating stateful set ss2 May 28 21:19:18.050: INFO: Waiting for Pod statefulset-4895/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 28 21:19:28.057: INFO: Waiting for Pod statefulset-4895/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 28 21:19:38.606: INFO: Found 2 stateful pods, waiting for 3 May 28 21:19:48.612: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 28 21:19:48.612: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 28 21:19:48.612: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 28 21:19:48.633: INFO: Updating stateful set ss2 May 28 21:19:48.661: INFO: Waiting for Pod statefulset-4895/ss2-1 to have revision 
ss2-84f9d6bf57 update revision ss2-65c7964b94 May 28 21:19:58.670: INFO: Waiting for Pod statefulset-4895/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 28 21:20:08.688: INFO: Updating stateful set ss2 May 28 21:20:08.747: INFO: Waiting for StatefulSet statefulset-4895/ss2 to complete update May 28 21:20:08.747: INFO: Waiting for Pod statefulset-4895/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 28 21:20:18.754: INFO: Waiting for StatefulSet statefulset-4895/ss2 to complete update May 28 21:20:18.754: INFO: Waiting for Pod statefulset-4895/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 28 21:20:28.755: INFO: Deleting all statefulset in ns statefulset-4895 May 28 21:20:28.758: INFO: Scaling statefulset ss2 to 0 May 28 21:20:48.790: INFO: Waiting for statefulset status.replicas updated to 0 May 28 21:20:48.792: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:20:48.812: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4895" for this suite. • [SLOW TEST:121.004 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":41,"skipped":667,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:20:48.820: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-z7klg in namespace proxy-8442 I0528 21:20:48.968561 6 runners.go:189] Created replication controller with name: proxy-service-z7klg, namespace: proxy-8442, replica count: 1 I0528 21:20:50.019009 6 runners.go:189] proxy-service-z7klg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0528 21:20:51.019211 6 runners.go:189] proxy-service-z7klg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0528 21:20:52.019499 6 
runners.go:189] proxy-service-z7klg Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0528 21:20:53.019710 6 runners.go:189] proxy-service-z7klg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0528 21:20:54.019912 6 runners.go:189] proxy-service-z7klg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0528 21:20:55.020122 6 runners.go:189] proxy-service-z7klg Pods: 1 out of 1 created, 0 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 1 runningButNotReady I0528 21:20:56.020347 6 runners.go:189] proxy-service-z7klg Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 28 21:20:56.024: INFO: setup took 7.147840391s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 28 21:20:56.031: INFO: (0) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 6.409639ms) May 28 21:20:56.031: INFO: (0) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:1080/proxy/: ... (200; 6.677393ms) May 28 21:20:56.031: INFO: (0) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:1080/proxy/: test<... (200; 6.863612ms) May 28 21:20:56.032: INFO: (0) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 7.313968ms) May 28 21:20:56.032: INFO: (0) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc/proxy/: test (200; 7.361463ms) May 28 21:20:56.032: INFO: (0) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname1/proxy/: foo (200; 8.136828ms) May 28 21:20:56.033: INFO: (0) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname2/proxy/: bar (200; 8.713725ms) May 28 21:20:56.037: INFO: (0) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname1/proxy/: foo (200; 13.086564ms) May 28 21:20:56.038: INFO: (0) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname2/proxy/: bar (200; 13.804469ms) May 28 21:20:56.040: INFO: (0) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 16.013771ms) May 28 21:20:56.040: INFO: (0) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 16.114266ms) May 28 21:20:56.044: INFO: (0) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 19.48643ms) May 28 21:20:56.044: INFO: (0) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname1/proxy/: tls baz (200; 19.467684ms) May 28 21:20:56.044: INFO: (0) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:462/proxy/: tls qux (200; 19.463967ms) May 28 21:20:56.044: INFO: (0) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname2/proxy/: tls qux (200; 19.636557ms) May 28 21:20:56.044: INFO: (0) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: ... 
(200; 6.96822ms) May 28 21:20:56.052: INFO: (1) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 7.557734ms) May 28 21:20:56.052: INFO: (1) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 7.611595ms) May 28 21:20:56.052: INFO: (1) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 7.631491ms) May 28 21:20:56.052: INFO: (1) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname2/proxy/: tls qux (200; 7.583447ms) May 28 21:20:56.052: INFO: (1) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc/proxy/: test (200; 7.675685ms) May 28 21:20:56.052: INFO: (1) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:462/proxy/: tls qux (200; 7.616841ms) May 28 21:20:56.052: INFO: (1) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:1080/proxy/: test<... (200; 7.587874ms) May 28 21:20:56.052: INFO: (1) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname1/proxy/: tls baz (200; 7.597948ms) May 28 21:20:56.054: INFO: (2) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 2.338641ms) May 28 21:20:56.055: INFO: (2) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:462/proxy/: tls qux (200; 3.198021ms) May 28 21:20:56.056: INFO: (2) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:1080/proxy/: ... (200; 3.57381ms) May 28 21:20:56.057: INFO: (2) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname2/proxy/: bar (200; 4.977235ms) May 28 21:20:56.057: INFO: (2) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 5.231487ms) May 28 21:20:56.058: INFO: (2) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname2/proxy/: bar (200; 5.6382ms) May 28 21:20:56.058: INFO: (2) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc/proxy/: test (200; 5.58681ms) May 28 21:20:56.058: INFO: (2) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:1080/proxy/: test<... (200; 5.62646ms) May 28 21:20:56.058: INFO: (2) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname1/proxy/: foo (200; 5.716751ms) May 28 21:20:56.058: INFO: (2) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname1/proxy/: foo (200; 5.67337ms) May 28 21:20:56.058: INFO: (2) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 5.700898ms) May 28 21:20:56.058: INFO: (2) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname1/proxy/: tls baz (200; 5.820338ms) May 28 21:20:56.058: INFO: (2) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 5.800569ms) May 28 21:20:56.058: INFO: (2) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: test<... (200; 3.831383ms) May 28 21:20:56.064: INFO: (3) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 5.617999ms) May 28 21:20:56.064: INFO: (3) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc/proxy/: test (200; 5.643183ms) May 28 21:20:56.064: INFO: (3) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:1080/proxy/: ... 
(200; 5.827259ms) May 28 21:20:56.064: INFO: (3) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:462/proxy/: tls qux (200; 5.847348ms) May 28 21:20:56.064: INFO: (3) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 6.129324ms) May 28 21:20:56.064: INFO: (3) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 6.146554ms) May 28 21:20:56.065: INFO: (3) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: test (200; 3.905979ms) May 28 21:20:56.071: INFO: (4) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 3.345291ms) May 28 21:20:56.071: INFO: (4) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:1080/proxy/: test<... (200; 2.903494ms) May 28 21:20:56.071: INFO: (4) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 4.421279ms) May 28 21:20:56.071: INFO: (4) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 3.973853ms) May 28 21:20:56.071: INFO: (4) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: ... (200; 4.086236ms) May 28 21:20:56.071: INFO: (4) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:462/proxy/: tls qux (200; 3.877763ms) May 28 21:20:56.071: INFO: (4) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 3.361253ms) May 28 21:20:56.071: INFO: (4) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname2/proxy/: bar (200; 4.276542ms) May 28 21:20:56.072: INFO: (4) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname1/proxy/: foo (200; 5.301281ms) May 28 21:20:56.072: INFO: (4) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname1/proxy/: tls baz (200; 5.253054ms) May 28 21:20:56.072: INFO: (4) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname2/proxy/: bar (200; 4.801698ms) May 28 21:20:56.072: INFO: (4) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname1/proxy/: foo (200; 4.661036ms) May 28 21:20:56.072: INFO: (4) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname2/proxy/: tls qux (200; 4.195772ms) May 28 21:20:56.076: INFO: (5) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 4.11749ms) May 28 21:20:56.076: INFO: (5) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname1/proxy/: foo (200; 3.994697ms) May 28 21:20:56.077: INFO: (5) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:1080/proxy/: test<... 
(200; 4.495189ms) May 28 21:20:56.077: INFO: (5) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname2/proxy/: tls qux (200; 4.544138ms) May 28 21:20:56.077: INFO: (5) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname2/proxy/: bar (200; 4.537093ms) May 28 21:20:56.077: INFO: (5) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname1/proxy/: foo (200; 4.84014ms) May 28 21:20:56.077: INFO: (5) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname1/proxy/: tls baz (200; 5.134003ms) May 28 21:20:56.078: INFO: (5) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 5.57484ms) May 28 21:20:56.078: INFO: (5) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:462/proxy/: tls qux (200; 5.584086ms) May 28 21:20:56.078: INFO: (5) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname2/proxy/: bar (200; 5.682785ms) May 28 21:20:56.078: INFO: (5) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 5.809407ms) May 28 21:20:56.078: INFO: (5) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:1080/proxy/: ... (200; 5.764928ms) May 28 21:20:56.078: INFO: (5) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc/proxy/: test (200; 5.80645ms) May 28 21:20:56.078: INFO: (5) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 5.843821ms) May 28 21:20:56.078: INFO: (5) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: ... (200; 5.230871ms) May 28 21:20:56.084: INFO: (6) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc/proxy/: test (200; 5.220636ms) May 28 21:20:56.084: INFO: (6) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: test<... 
(200; 5.250257ms) May 28 21:20:56.084: INFO: (6) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname1/proxy/: foo (200; 5.279965ms) May 28 21:20:56.084: INFO: (6) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 5.338026ms) May 28 21:20:56.084: INFO: (6) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname2/proxy/: bar (200; 5.348919ms) May 28 21:20:56.093: INFO: (7) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname2/proxy/: bar (200; 8.951029ms) May 28 21:20:56.093: INFO: (7) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:462/proxy/: tls qux (200; 9.082059ms) May 28 21:20:56.093: INFO: (7) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname2/proxy/: tls qux (200; 9.189503ms) May 28 21:20:56.093: INFO: (7) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname2/proxy/: bar (200; 9.236146ms) May 28 21:20:56.093: INFO: (7) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 9.463303ms) May 28 21:20:56.094: INFO: (7) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname1/proxy/: foo (200; 9.577815ms) May 28 21:20:56.094: INFO: (7) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 10.07506ms) May 28 21:20:56.094: INFO: (7) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 10.01174ms) May 28 21:20:56.094: INFO: (7) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 10.073125ms) May 28 21:20:56.094: INFO: (7) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc/proxy/: test (200; 10.0697ms) May 28 21:20:56.094: INFO: (7) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 10.141086ms) May 28 21:20:56.094: INFO: (7) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:1080/proxy/: test<... (200; 10.160193ms) May 28 21:20:56.094: INFO: (7) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: ... (200; 10.119254ms) May 28 21:20:56.098: INFO: (8) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:1080/proxy/: test<... (200; 3.980956ms) May 28 21:20:56.098: INFO: (8) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:1080/proxy/: ... (200; 3.988473ms) May 28 21:20:56.099: INFO: (8) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc/proxy/: test (200; 4.430569ms) May 28 21:20:56.099: INFO: (8) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 4.46837ms) May 28 21:20:56.099: INFO: (8) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 4.489333ms) May 28 21:20:56.099: INFO: (8) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 4.505337ms) May 28 21:20:56.099: INFO: (8) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: test<... (200; 6.431548ms) May 28 21:20:56.106: INFO: (9) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:1080/proxy/: ... 
(200; 6.479907ms) May 28 21:20:56.106: INFO: (9) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 6.457604ms) May 28 21:20:56.106: INFO: (9) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 6.543652ms) May 28 21:20:56.106: INFO: (9) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc/proxy/: test (200; 6.530769ms) May 28 21:20:56.106: INFO: (9) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 6.455084ms) May 28 21:20:56.107: INFO: (9) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname1/proxy/: tls baz (200; 7.505943ms) May 28 21:20:56.107: INFO: (9) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname2/proxy/: bar (200; 7.529672ms) May 28 21:20:56.107: INFO: (9) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname2/proxy/: bar (200; 7.660683ms) May 28 21:20:56.107: INFO: (9) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname1/proxy/: foo (200; 7.612104ms) May 28 21:20:56.107: INFO: (9) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname1/proxy/: foo (200; 7.643716ms) May 28 21:20:56.107: INFO: (9) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname2/proxy/: tls qux (200; 7.7761ms) May 28 21:20:56.110: INFO: (10) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 2.659987ms) May 28 21:20:56.111: INFO: (10) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:1080/proxy/: test<... (200; 3.046093ms) May 28 21:20:56.111: INFO: (10) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc/proxy/: test (200; 3.067246ms) May 28 21:20:56.111: INFO: (10) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 3.104606ms) May 28 21:20:56.111: INFO: (10) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 3.662667ms) May 28 21:20:56.111: INFO: (10) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 3.836022ms) May 28 21:20:56.111: INFO: (10) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname1/proxy/: foo (200; 3.869804ms) May 28 21:20:56.112: INFO: (10) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:1080/proxy/: ... 
(200; 3.978508ms) May 28 21:20:56.112: INFO: (10) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname1/proxy/: tls baz (200; 4.315882ms) May 28 21:20:56.112: INFO: (10) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname2/proxy/: bar (200; 4.291ms) May 28 21:20:56.112: INFO: (10) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname1/proxy/: foo (200; 4.281813ms) May 28 21:20:56.112: INFO: (10) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 4.284479ms) May 28 21:20:56.112: INFO: (10) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname2/proxy/: tls qux (200; 4.278007ms) May 28 21:20:56.112: INFO: (10) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname2/proxy/: bar (200; 4.283488ms) May 28 21:20:56.112: INFO: (10) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:462/proxy/: tls qux (200; 4.400813ms) May 28 21:20:56.112: INFO: (10) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: test (200; 4.706914ms) May 28 21:20:56.117: INFO: (11) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname2/proxy/: bar (200; 4.807709ms) May 28 21:20:56.117: INFO: (11) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 4.813876ms) May 28 21:20:56.117: INFO: (11) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname1/proxy/: foo (200; 4.860153ms) May 28 21:20:56.117: INFO: (11) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 5.019703ms) May 28 21:20:56.117: INFO: (11) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 5.094257ms) May 28 21:20:56.117: INFO: (11) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: test<... (200; 5.489652ms) May 28 21:20:56.118: INFO: (11) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:1080/proxy/: ... (200; 5.538314ms) May 28 21:20:56.118: INFO: (11) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 5.54266ms) May 28 21:20:56.121: INFO: (12) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 3.613015ms) May 28 21:20:56.121: INFO: (12) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc/proxy/: test (200; 3.520081ms) May 28 21:20:56.121: INFO: (12) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 3.631219ms) May 28 21:20:56.121: INFO: (12) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 3.569785ms) May 28 21:20:56.121: INFO: (12) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 3.632149ms) May 28 21:20:56.121: INFO: (12) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:1080/proxy/: ... (200; 3.60972ms) May 28 21:20:56.121: INFO: (12) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: test<... 
(200; 3.726098ms) May 28 21:20:56.121: INFO: (12) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:462/proxy/: tls qux (200; 3.727004ms) May 28 21:20:56.122: INFO: (12) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname2/proxy/: bar (200; 4.108464ms) May 28 21:20:56.122: INFO: (12) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname2/proxy/: bar (200; 4.552453ms) May 28 21:20:56.122: INFO: (12) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname1/proxy/: foo (200; 4.696771ms) May 28 21:20:56.123: INFO: (12) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname1/proxy/: foo (200; 4.829395ms) May 28 21:20:56.123: INFO: (12) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname1/proxy/: tls baz (200; 4.832273ms) May 28 21:20:56.123: INFO: (12) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname2/proxy/: tls qux (200; 4.888416ms) May 28 21:20:56.125: INFO: (13) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 2.077423ms) May 28 21:20:56.125: INFO: (13) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: ... (200; 3.597665ms) May 28 21:20:56.126: INFO: (13) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 3.686443ms) May 28 21:20:56.126: INFO: (13) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:1080/proxy/: test<... (200; 3.64404ms) May 28 21:20:56.127: INFO: (13) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 4.000578ms) May 28 21:20:56.127: INFO: (13) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc/proxy/: test (200; 3.964449ms) May 28 21:20:56.127: INFO: (13) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:462/proxy/: tls qux (200; 4.412158ms) May 28 21:20:56.127: INFO: (13) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname1/proxy/: foo (200; 4.445814ms) May 28 21:20:56.128: INFO: (13) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname1/proxy/: foo (200; 5.194332ms) May 28 21:20:56.128: INFO: (13) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname2/proxy/: bar (200; 5.196095ms) May 28 21:20:56.128: INFO: (13) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname2/proxy/: bar (200; 5.313702ms) May 28 21:20:56.128: INFO: (13) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname1/proxy/: tls baz (200; 5.328261ms) May 28 21:20:56.128: INFO: (13) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname2/proxy/: tls qux (200; 5.427193ms) May 28 21:20:56.131: INFO: (14) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 2.842954ms) May 28 21:20:56.131: INFO: (14) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:462/proxy/: tls qux (200; 2.698443ms) May 28 21:20:56.131: INFO: (14) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 2.852986ms) May 28 21:20:56.133: INFO: (14) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:1080/proxy/: test<... 
(200; 3.892421ms) May 28 21:20:56.133: INFO: (14) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 4.103487ms) May 28 21:20:56.133: INFO: (14) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: test (200; 5.408564ms) May 28 21:20:56.134: INFO: (14) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname2/proxy/: bar (200; 5.329786ms) May 28 21:20:56.134: INFO: (14) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:1080/proxy/: ... (200; 5.80919ms) May 28 21:20:56.134: INFO: (14) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 5.287712ms) May 28 21:20:56.135: INFO: (14) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 5.701442ms) May 28 21:20:56.135: INFO: (14) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname2/proxy/: bar (200; 6.057497ms) May 28 21:20:56.135: INFO: (14) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname1/proxy/: foo (200; 5.954485ms) May 28 21:20:56.135: INFO: (14) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname1/proxy/: foo (200; 6.513774ms) May 28 21:20:56.135: INFO: (14) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname1/proxy/: tls baz (200; 6.711297ms) May 28 21:20:56.136: INFO: (14) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname2/proxy/: tls qux (200; 7.033488ms) May 28 21:20:56.140: INFO: (15) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 3.384114ms) May 28 21:20:56.140: INFO: (15) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:1080/proxy/: ... (200; 3.631034ms) May 28 21:20:56.140: INFO: (15) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: test<... 
(200; 4.393281ms) May 28 21:20:56.141: INFO: (15) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:462/proxy/: tls qux (200; 4.312119ms) May 28 21:20:56.141: INFO: (15) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 4.333966ms) May 28 21:20:56.141: INFO: (15) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 4.357701ms) May 28 21:20:56.141: INFO: (15) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname1/proxy/: foo (200; 4.406085ms) May 28 21:20:56.141: INFO: (15) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc/proxy/: test (200; 4.358816ms) May 28 21:20:56.141: INFO: (15) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 4.368265ms) May 28 21:20:56.141: INFO: (15) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname2/proxy/: bar (200; 4.578595ms) May 28 21:20:56.141: INFO: (15) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname1/proxy/: foo (200; 4.587625ms) May 28 21:20:56.141: INFO: (15) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname2/proxy/: tls qux (200; 4.755505ms) May 28 21:20:56.144: INFO: (15) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname2/proxy/: bar (200; 7.588125ms) May 28 21:20:56.148: INFO: (16) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 3.356146ms) May 28 21:20:56.148: INFO: (16) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 3.94005ms) May 28 21:20:56.148: INFO: (16) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 4.248153ms) May 28 21:20:56.149: INFO: (16) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:1080/proxy/: ... (200; 4.773504ms) May 28 21:20:56.149: INFO: (16) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 5.290374ms) May 28 21:20:56.150: INFO: (16) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc/proxy/: test (200; 5.333125ms) May 28 21:20:56.150: INFO: (16) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:462/proxy/: tls qux (200; 5.625671ms) May 28 21:20:56.150: INFO: (16) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname1/proxy/: tls baz (200; 5.762133ms) May 28 21:20:56.150: INFO: (16) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 5.701996ms) May 28 21:20:56.150: INFO: (16) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: test<... (200; 5.726903ms) May 28 21:20:56.150: INFO: (16) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname1/proxy/: foo (200; 5.825234ms) May 28 21:20:56.150: INFO: (16) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname2/proxy/: bar (200; 5.874722ms) May 28 21:20:56.150: INFO: (16) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname1/proxy/: foo (200; 5.862546ms) May 28 21:20:56.152: INFO: (17) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 1.863412ms) May 28 21:20:56.154: INFO: (17) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 3.482404ms) May 28 21:20:56.155: INFO: (17) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:1080/proxy/: test<... 
(200; 5.010027ms) May 28 21:20:56.155: INFO: (17) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc/proxy/: test (200; 5.141422ms) May 28 21:20:56.156: INFO: (17) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 5.377219ms) May 28 21:20:56.156: INFO: (17) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:1080/proxy/: ... (200; 5.67107ms) May 28 21:20:56.156: INFO: (17) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 5.859205ms) May 28 21:20:56.157: INFO: (17) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:462/proxy/: tls qux (200; 6.684547ms) May 28 21:20:56.157: INFO: (17) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname1/proxy/: foo (200; 6.675944ms) May 28 21:20:56.157: INFO: (17) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname2/proxy/: bar (200; 6.594489ms) May 28 21:20:56.157: INFO: (17) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: test (200; 8.542051ms) May 28 21:20:56.166: INFO: (18) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:1080/proxy/: test<... (200; 8.63996ms) May 28 21:20:56.166: INFO: (18) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:1080/proxy/: ... (200; 8.670455ms) May 28 21:20:56.166: INFO: (18) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: test<... (200; 2.290578ms) May 28 21:20:56.170: INFO: (19) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:443/proxy/: test (200; 3.308274ms) May 28 21:20:56.171: INFO: (19) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 3.394424ms) May 28 21:20:56.172: INFO: (19) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:460/proxy/: tls baz (200; 3.642051ms) May 28 21:20:56.172: INFO: (19) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 3.594042ms) May 28 21:20:56.172: INFO: (19) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:162/proxy/: bar (200; 3.63914ms) May 28 21:20:56.172: INFO: (19) /api/v1/namespaces/proxy-8442/pods/https:proxy-service-z7klg-8w8mc:462/proxy/: tls qux (200; 3.750927ms) May 28 21:20:56.172: INFO: (19) /api/v1/namespaces/proxy-8442/pods/http:proxy-service-z7klg-8w8mc:1080/proxy/: ... 
(200; 3.835026ms) May 28 21:20:56.172: INFO: (19) /api/v1/namespaces/proxy-8442/pods/proxy-service-z7klg-8w8mc:160/proxy/: foo (200; 3.99519ms) May 28 21:20:56.173: INFO: (19) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname2/proxy/: bar (200; 5.011858ms) May 28 21:20:56.173: INFO: (19) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname1/proxy/: tls baz (200; 5.158324ms) May 28 21:20:56.173: INFO: (19) /api/v1/namespaces/proxy-8442/services/proxy-service-z7klg:portname1/proxy/: foo (200; 5.123474ms) May 28 21:20:56.173: INFO: (19) /api/v1/namespaces/proxy-8442/services/https:proxy-service-z7klg:tlsportname2/proxy/: tls qux (200; 5.188217ms) May 28 21:20:56.173: INFO: (19) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname1/proxy/: foo (200; 5.133796ms) May 28 21:20:56.173: INFO: (19) /api/v1/namespaces/proxy-8442/services/http:proxy-service-z7klg:portname2/proxy/: bar (200; 5.147465ms) STEP: deleting ReplicationController proxy-service-z7klg in namespace proxy-8442, will wait for the garbage collector to delete the pods May 28 21:20:56.250: INFO: Deleting ReplicationController proxy-service-z7klg took: 24.953453ms May 28 21:20:56.350: INFO: Terminating ReplicationController proxy-service-z7klg pods took: 100.241669ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:21:09.350: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-8442" for this suite. • [SLOW TEST:20.541 seconds] [sig-network] Proxy /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":42,"skipped":695,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:21:09.362: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-ff4b0026-712e-44c9-96c7-2b2442c22e9f STEP: Creating a pod to test consume configMaps May 28 21:21:09.473: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-6914c631-49e2-427f-b9a6-09dd5a619e8e" in namespace "projected-3693" to be "success or failure" May 28 21:21:09.477: INFO: Pod "pod-projected-configmaps-6914c631-49e2-427f-b9a6-09dd5a619e8e": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.122808ms May 28 21:21:11.481: INFO: Pod "pod-projected-configmaps-6914c631-49e2-427f-b9a6-09dd5a619e8e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008223085s May 28 21:21:13.486: INFO: Pod "pod-projected-configmaps-6914c631-49e2-427f-b9a6-09dd5a619e8e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012732267s STEP: Saw pod success May 28 21:21:13.486: INFO: Pod "pod-projected-configmaps-6914c631-49e2-427f-b9a6-09dd5a619e8e" satisfied condition "success or failure" May 28 21:21:13.488: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-6914c631-49e2-427f-b9a6-09dd5a619e8e container projected-configmap-volume-test: STEP: delete the pod May 28 21:21:13.657: INFO: Waiting for pod pod-projected-configmaps-6914c631-49e2-427f-b9a6-09dd5a619e8e to disappear May 28 21:21:13.669: INFO: Pod pod-projected-configmaps-6914c631-49e2-427f-b9a6-09dd5a619e8e no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:21:13.669: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3693" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":43,"skipped":726,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:21:13.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1875.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-1875.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1875.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-1875.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-1875.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1875.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 28 21:21:21.932: INFO: DNS probes using dns-1875/dns-test-8ee96f26-8408-4e3d-8791-edb43dc16d92 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:21:22.526: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1875" for this suite. • [SLOW TEST:9.010 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":44,"skipped":736,"failed":0} SS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:21:22.685: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 28 21:21:22.779: INFO: Waiting up to 5m0s for pod "downwardapi-volume-dc6b083a-47e1-4af2-809c-0dcfb60bd503" in namespace "downward-api-4490" to be "success or failure" May 28 21:21:22.823: INFO: Pod "downwardapi-volume-dc6b083a-47e1-4af2-809c-0dcfb60bd503": Phase="Pending", Reason="", readiness=false. Elapsed: 43.698568ms May 28 21:21:24.850: INFO: Pod "downwardapi-volume-dc6b083a-47e1-4af2-809c-0dcfb60bd503": Phase="Pending", Reason="", readiness=false. Elapsed: 2.070347475s May 28 21:21:26.854: INFO: Pod "downwardapi-volume-dc6b083a-47e1-4af2-809c-0dcfb60bd503": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.074709384s STEP: Saw pod success May 28 21:21:26.854: INFO: Pod "downwardapi-volume-dc6b083a-47e1-4af2-809c-0dcfb60bd503" satisfied condition "success or failure" May 28 21:21:26.857: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-dc6b083a-47e1-4af2-809c-0dcfb60bd503 container client-container: STEP: delete the pod May 28 21:21:26.932: INFO: Waiting for pod downwardapi-volume-dc6b083a-47e1-4af2-809c-0dcfb60bd503 to disappear May 28 21:21:26.945: INFO: Pod downwardapi-volume-dc6b083a-47e1-4af2-809c-0dcfb60bd503 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:21:26.945: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4490" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":45,"skipped":738,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:21:26.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 28 21:21:31.091: INFO: Pod pod-hostip-b238f4a1-2c43-4336-81b0-01b1f2460817 has hostIP: 172.17.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:21:31.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2697" for this suite. 
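------------------------------
An aside for readers tracing the host-IP spec above: the assertion is simply that Status.HostIP becomes populated once the pod is bound to a node and the kubelet reports status back (172.17.0.8 in this run). Below is a minimal client-go sketch of that read, assuming a kubeconfig at /root/.kube/config and the context-free Get signature of the v1.17-era client this suite uses; the "default" namespace and "my-pod" name are placeholders, not the suite's generated ones.

package main

import (
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client the same way the suite does: from a kubeconfig file.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// The test passes once Status.HostIP is non-empty, i.e. the pod was
	// bound to a node and the kubelet posted status back to the API server.
	pod, err := client.CoreV1().Pods("default").Get("my-pod", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("hostIP=%s podIP=%s\n", pod.Status.HostIP, pod.Status.PodIP)
}
------------------------------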
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":46,"skipped":752,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:21:31.099: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:21:31.190: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 28 21:21:34.130: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7636 create -f -' May 28 21:21:38.198: INFO: stderr: "" May 28 21:21:38.198: INFO: stdout: "e2e-test-crd-publish-openapi-4973-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 28 21:21:38.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7636 delete e2e-test-crd-publish-openapi-4973-crds test-cr' May 28 21:21:38.333: INFO: stderr: "" May 28 21:21:38.333: INFO: stdout: "e2e-test-crd-publish-openapi-4973-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 28 21:21:38.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7636 apply -f -' May 28 21:21:38.662: INFO: stderr: "" May 28 21:21:38.662: INFO: stdout: "e2e-test-crd-publish-openapi-4973-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 28 21:21:38.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7636 delete e2e-test-crd-publish-openapi-4973-crds test-cr' May 28 21:21:38.819: INFO: stderr: "" May 28 21:21:38.820: INFO: stdout: "e2e-test-crd-publish-openapi-4973-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 28 21:21:38.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-4973-crds' May 28 21:21:39.625: INFO: stderr: "" May 28 21:21:39.626: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-4973-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. 
In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:21:42.536: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7636" for this suite. • [SLOW TEST:11.444 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":47,"skipped":764,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:21:42.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 28 21:21:42.642: INFO: Waiting up to 5m0s for pod "pod-1c678193-ea73-473e-a939-f7112e94f850" in namespace "emptydir-6410" to be "success or failure" May 28 21:21:42.645: INFO: Pod "pod-1c678193-ea73-473e-a939-f7112e94f850": Phase="Pending", Reason="", readiness=false. Elapsed: 3.266275ms May 28 21:21:44.650: INFO: Pod "pod-1c678193-ea73-473e-a939-f7112e94f850": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008035048s May 28 21:21:46.661: INFO: Pod "pod-1c678193-ea73-473e-a939-f7112e94f850": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019408214s STEP: Saw pod success May 28 21:21:46.661: INFO: Pod "pod-1c678193-ea73-473e-a939-f7112e94f850" satisfied condition "success or failure" May 28 21:21:46.664: INFO: Trying to get logs from node jerma-worker2 pod pod-1c678193-ea73-473e-a939-f7112e94f850 container test-container: STEP: delete the pod May 28 21:21:46.687: INFO: Waiting for pod pod-1c678193-ea73-473e-a939-f7112e94f850 to disappear May 28 21:21:46.693: INFO: Pod pod-1c678193-ea73-473e-a939-f7112e94f850 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:21:46.693: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6410" for this suite. 
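------------------------------
An aside on the EmptyDir spec above: it launches a pod whose emptyDir volume is backed by tmpfs (medium "Memory"), has the container write and inspect a 0666-mode file as a non-root user, and waits for the pod to succeed. The sketch below shows roughly that pod shape; the busybox image, UID 1001, the stat command, and all names are illustrative assumptions, not the suite's exact values.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // any non-root UID; the suite picks its own
	pod := &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-demo"},
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "scratch",
				VolumeSource: v1.VolumeSource{
					// Medium "Memory" is what makes this emptyDir a tmpfs mount.
					EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumMemory},
				},
			}},
			Containers: []v1.Container{{
				Name:            "perms-check",
				Image:           "busybox", // illustrative; the suite uses its own test image
				Command:         []string{"sh", "-c", "stat -c '%a' /mnt/scratch"},
				SecurityContext: &v1.SecurityContext{RunAsUser: &uid},
				VolumeMounts:    []v1.VolumeMount{{Name: "scratch", MountPath: "/mnt/scratch"}},
			}},
		},
	}
	fmt.Printf("%+v\n", pod)
}
------------------------------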
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":48,"skipped":810,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:21:46.734: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:21:46.810: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:21:47.993: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-4102" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":49,"skipped":837,"failed":0} SSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:21:48.002: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 28 21:21:54.930: INFO: 0 pods remaining May 28 21:21:54.931: INFO: 0 pods has nil DeletionTimestamp May 28 21:21:54.931: INFO: STEP: Gathering metrics W0528 21:21:56.490559 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 28 21:21:56.490: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:21:56.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-7222" for this suite. • [SLOW TEST:9.208 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":50,"skipped":840,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:21:57.211: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 28 21:21:58.295: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 28 21:21:58.427: INFO: Waiting for terminating namespaces to be deleted... 
May 28 21:21:58.430: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 28 21:21:58.435: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 28 21:21:58.435: INFO: Container kindnet-cni ready: true, restart count 2 May 28 21:21:58.435: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 28 21:21:58.435: INFO: Container kube-proxy ready: true, restart count 0 May 28 21:21:58.435: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 28 21:21:58.475: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 28 21:21:58.475: INFO: Container kindnet-cni ready: true, restart count 2 May 28 21:21:58.475: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 28 21:21:58.475: INFO: Container kube-bench ready: false, restart count 0 May 28 21:21:58.475: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 28 21:21:58.475: INFO: Container kube-proxy ready: true, restart count 0 May 28 21:21:58.475: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 28 21:21:58.475: INFO: Container kube-hunter ready: false, restart count 0 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 28 21:21:59.316: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 28 21:21:59.316: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 28 21:21:59.316: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 28 21:21:59.316: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 28 21:21:59.316: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker May 28 21:21:59.620: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires an unavailable amount of CPU. 
STEP: Considering event: Type = [Normal], Name = [filler-pod-3fa5feff-98f3-4b8f-b5fc-b395b3292692.16134e0f01252736], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4372/filler-pod-3fa5feff-98f3-4b8f-b5fc-b395b3292692 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-3fa5feff-98f3-4b8f-b5fc-b395b3292692.16134e0f5df6bdf2], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-3fa5feff-98f3-4b8f-b5fc-b395b3292692.16134e0fab382b72], Reason = [Created], Message = [Created container filler-pod-3fa5feff-98f3-4b8f-b5fc-b395b3292692] STEP: Considering event: Type = [Normal], Name = [filler-pod-3fa5feff-98f3-4b8f-b5fc-b395b3292692.16134e0fbef07d20], Reason = [Started], Message = [Started container filler-pod-3fa5feff-98f3-4b8f-b5fc-b395b3292692] STEP: Considering event: Type = [Normal], Name = [filler-pod-c37ba341-4b78-41a5-a642-ea924781e33a.16134e0f0bba90f0], Reason = [Scheduled], Message = [Successfully assigned sched-pred-4372/filler-pod-c37ba341-4b78-41a5-a642-ea924781e33a to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-c37ba341-4b78-41a5-a642-ea924781e33a.16134e0f88cdf8bc], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-c37ba341-4b78-41a5-a642-ea924781e33a.16134e0fc115c681], Reason = [Created], Message = [Created container filler-pod-c37ba341-4b78-41a5-a642-ea924781e33a] STEP: Considering event: Type = [Normal], Name = [filler-pod-c37ba341-4b78-41a5-a642-ea924781e33a.16134e0fd41935d8], Reason = [Started], Message = [Started container filler-pod-c37ba341-4b78-41a5-a642-ea924781e33a] STEP: Considering event: Type = [Warning], Name = [additional-pod.16134e0ffbbcbba8], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: Considering event: Type = [Warning], Name = [additional-pod.16134e0ffddfee36], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:22:05.141: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-4372" for this suite. 
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:7.939 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":51,"skipped":859,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:22:05.150: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-4a9f9fa5-8388-41c0-8629-5b75e160bb49 STEP: Creating a pod to test consume configMaps May 28 21:22:05.916: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-47abb0b1-dc96-485b-b6f1-3a20481c0e06" in namespace "projected-3637" to be "success or failure" May 28 21:22:05.980: INFO: Pod "pod-projected-configmaps-47abb0b1-dc96-485b-b6f1-3a20481c0e06": Phase="Pending", Reason="", readiness=false. Elapsed: 63.747622ms May 28 21:22:08.039: INFO: Pod "pod-projected-configmaps-47abb0b1-dc96-485b-b6f1-3a20481c0e06": Phase="Pending", Reason="", readiness=false. Elapsed: 2.122580716s May 28 21:22:10.043: INFO: Pod "pod-projected-configmaps-47abb0b1-dc96-485b-b6f1-3a20481c0e06": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.126454318s STEP: Saw pod success May 28 21:22:10.043: INFO: Pod "pod-projected-configmaps-47abb0b1-dc96-485b-b6f1-3a20481c0e06" satisfied condition "success or failure" May 28 21:22:10.045: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-47abb0b1-dc96-485b-b6f1-3a20481c0e06 container projected-configmap-volume-test: STEP: delete the pod May 28 21:22:10.067: INFO: Waiting for pod pod-projected-configmaps-47abb0b1-dc96-485b-b6f1-3a20481c0e06 to disappear May 28 21:22:10.072: INFO: Pod pod-projected-configmaps-47abb0b1-dc96-485b-b6f1-3a20481c0e06 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:22:10.072: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3637" for this suite. 
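------------------------------
An aside on the projected-ConfigMap spec above: it mounts a ConfigMap through a projected volume whose item mapping rewrites a key to a nested path inside the mount, and the consuming container runs as a non-root user. A sketch of the volume and security-context shape follows; the ConfigMap name, key, path, and UID are illustrative, not the UUID-suffixed ones the suite generates.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
)

func main() {
	uid := int64(1000) // any non-root UID; the suite chooses its own
	vol := v1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: v1.VolumeSource{
			Projected: &v1.ProjectedVolumeSource{
				Sources: []v1.VolumeProjection{{
					ConfigMap: &v1.ConfigMapProjection{
						// Placeholder name standing in for the generated one.
						LocalObjectReference: v1.LocalObjectReference{Name: "my-configmap"},
						// The "mappings" part: the key lands at a rewritten,
						// nested path instead of its literal key name.
						Items: []v1.KeyToPath{{Key: "data-1", Path: "path/to/data-2"}},
					},
				}},
			},
		},
	}
	sc := &v1.SecurityContext{RunAsUser: &uid} // the "as non-root" part
	fmt.Printf("%+v\n%+v\n", vol, sc)
}
------------------------------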
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":870,"failed":0} S ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:22:10.079: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-2bf08e57-db8d-4893-9261-26887d597501 STEP: Creating a pod to test consume configMaps May 28 21:22:10.183: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-7ddaeeeb-e04c-4aff-947d-fdfe0ba0fda4" in namespace "projected-7474" to be "success or failure" May 28 21:22:10.210: INFO: Pod "pod-projected-configmaps-7ddaeeeb-e04c-4aff-947d-fdfe0ba0fda4": Phase="Pending", Reason="", readiness=false. Elapsed: 26.834022ms May 28 21:22:12.279: INFO: Pod "pod-projected-configmaps-7ddaeeeb-e04c-4aff-947d-fdfe0ba0fda4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095932243s May 28 21:22:14.284: INFO: Pod "pod-projected-configmaps-7ddaeeeb-e04c-4aff-947d-fdfe0ba0fda4": Phase="Running", Reason="", readiness=true. Elapsed: 4.101485521s May 28 21:22:16.289: INFO: Pod "pod-projected-configmaps-7ddaeeeb-e04c-4aff-947d-fdfe0ba0fda4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.105539923s STEP: Saw pod success May 28 21:22:16.289: INFO: Pod "pod-projected-configmaps-7ddaeeeb-e04c-4aff-947d-fdfe0ba0fda4" satisfied condition "success or failure" May 28 21:22:16.291: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-7ddaeeeb-e04c-4aff-947d-fdfe0ba0fda4 container projected-configmap-volume-test: STEP: delete the pod May 28 21:22:16.313: INFO: Waiting for pod pod-projected-configmaps-7ddaeeeb-e04c-4aff-947d-fdfe0ba0fda4 to disappear May 28 21:22:16.318: INFO: Pod pod-projected-configmaps-7ddaeeeb-e04c-4aff-947d-fdfe0ba0fda4 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:22:16.318: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7474" for this suite. 
• [SLOW TEST:6.246 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":871,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:22:16.325: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-aa28d8b2-5a0f-4283-a0a4-80e1df32a4a1 STEP: Creating a pod to test consume secrets May 28 21:22:16.452: INFO: Waiting up to 5m0s for pod "pod-secrets-063f1f66-4811-4370-92cd-6a2ae12c0160" in namespace "secrets-6017" to be "success or failure" May 28 21:22:16.455: INFO: Pod "pod-secrets-063f1f66-4811-4370-92cd-6a2ae12c0160": Phase="Pending", Reason="", readiness=false. Elapsed: 3.25612ms May 28 21:22:18.459: INFO: Pod "pod-secrets-063f1f66-4811-4370-92cd-6a2ae12c0160": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006535735s May 28 21:22:20.463: INFO: Pod "pod-secrets-063f1f66-4811-4370-92cd-6a2ae12c0160": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010812532s STEP: Saw pod success May 28 21:22:20.463: INFO: Pod "pod-secrets-063f1f66-4811-4370-92cd-6a2ae12c0160" satisfied condition "success or failure" May 28 21:22:20.466: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-063f1f66-4811-4370-92cd-6a2ae12c0160 container secret-volume-test: STEP: delete the pod May 28 21:22:20.488: INFO: Waiting for pod pod-secrets-063f1f66-4811-4370-92cd-6a2ae12c0160 to disappear May 28 21:22:20.491: INFO: Pod pod-secrets-063f1f66-4811-4370-92cd-6a2ae12c0160 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:22:20.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6017" for this suite. 
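------------------------------
An aside on the Secrets spec above: it is the Secret-flavoured twin of the ConfigMap volume tests, remapping a Secret key to a custom file path via Items in the volume source so the container reads the value at the mapped location. A sketch with illustrative names and data follows; none of these identifiers are the suite's generated ones.

package main

import (
	"fmt"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	secret := &v1.Secret{
		ObjectMeta: metav1.ObjectMeta{Name: "my-test-secret"}, // placeholder name
		Data:       map[string][]byte{"data-1": []byte("value-1")},
	}
	// The item mapping places key "data-1" at a custom relative path, so a
	// container mounting this volume at /etc/secret-volume would read
	// /etc/secret-volume/new-path-data-1.
	vol := v1.Volume{
		Name: "secret-volume",
		VolumeSource: v1.VolumeSource{
			Secret: &v1.SecretVolumeSource{
				SecretName: secret.Name,
				Items:      []v1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
			},
		},
	}
	fmt.Printf("%+v\n%+v\n", secret, vol)
}
------------------------------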
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":54,"skipped":874,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:22:20.499: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 28 21:22:21.140: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 28 21:22:23.149: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297741, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297741, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297741, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297741, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 21:22:25.154: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297741, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297741, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297741, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297741, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 21:22:28.215: 
INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:22:28.219: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:22:29.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-7112" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:9.054 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":55,"skipped":916,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:22:29.554: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-3112 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3112 to expose endpoints map[] May 28 21:22:29.696: INFO: Get endpoints failed (16.976541ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 28 21:22:30.700: INFO: successfully validated that service endpoint-test2 in namespace services-3112 exposes endpoints map[] (1.020411418s elapsed) STEP: Creating pod pod1 in namespace services-3112 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3112 to expose endpoints map[pod1:[80]] May 28 21:22:33.779: INFO: successfully validated that service endpoint-test2 in namespace services-3112 exposes endpoints map[pod1:[80]] (3.073558653s elapsed) STEP: Creating pod pod2 in namespace services-3112 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3112 to expose endpoints map[pod1:[80] pod2:[80]] May 28 21:22:36.896: INFO: successfully validated that service endpoint-test2 in namespace services-3112 exposes endpoints map[pod1:[80] pod2:[80]] (3.112923864s elapsed) STEP: Deleting pod pod1 in namespace 
services-3112 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3112 to expose endpoints map[pod2:[80]] May 28 21:22:37.943: INFO: successfully validated that service endpoint-test2 in namespace services-3112 exposes endpoints map[pod2:[80]] (1.042142304s elapsed) STEP: Deleting pod pod2 in namespace services-3112 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-3112 to expose endpoints map[] May 28 21:22:39.046: INFO: successfully validated that service endpoint-test2 in namespace services-3112 exposes endpoints map[] (1.098942767s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:22:39.188: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3112" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.668 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":56,"skipped":928,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:22:39.222: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:22:43.316: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6159" for this suite. 
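The Kubelet spec that just finished is the simplest of the bunch: run a busybox container whose command writes to stdout, and confirm the kubelet captured that output so it is retrievable through the logs endpoint. Roughly (name and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-logs-demo
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox:1.29
    command: ["sh", "-c", "echo 'hello from busybox'"]
EOF
kubectl logs busybox-logs-demo    # expect: hello from busybox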
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":57,"skipped":945,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:22:43.324: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 28 21:22:47.435: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:22:47.453: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-6346" for this suite. 
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":946,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:22:47.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 28 21:22:47.560: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3178 /api/v1/namespaces/watch-3178/configmaps/e2e-watch-test-watch-closed 3f9572b7-f686-42f3-a72c-83fca823b9f6 19897997 0 2020-05-28 21:22:47 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 28 21:22:47.560: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3178 /api/v1/namespaces/watch-3178/configmaps/e2e-watch-test-watch-closed 3f9572b7-f686-42f3-a72c-83fca823b9f6 19897998 0 2020-05-28 21:22:47 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 28 21:22:47.597: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3178 /api/v1/namespaces/watch-3178/configmaps/e2e-watch-test-watch-closed 3f9572b7-f686-42f3-a72c-83fca823b9f6 19897999 0 2020-05-28 21:22:47 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 28 21:22:47.597: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-3178 /api/v1/namespaces/watch-3178/configmaps/e2e-watch-test-watch-closed 3f9572b7-f686-42f3-a72c-83fca823b9f6 19898000 0 2020-05-28 21:22:47 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:22:47.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3178" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":59,"skipped":947,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:22:47.631: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:22:47.724: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-752bec08-d44f-4118-b3ae-878975b1ca82" in namespace "security-context-test-778" to be "success or failure" May 28 21:22:47.727: INFO: Pod "busybox-privileged-false-752bec08-d44f-4118-b3ae-878975b1ca82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.986486ms May 28 21:22:49.764: INFO: Pod "busybox-privileged-false-752bec08-d44f-4118-b3ae-878975b1ca82": Phase="Pending", Reason="", readiness=false. Elapsed: 2.040016459s May 28 21:22:51.769: INFO: Pod "busybox-privileged-false-752bec08-d44f-4118-b3ae-878975b1ca82": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.045528496s May 28 21:22:51.769: INFO: Pod "busybox-privileged-false-752bec08-d44f-4118-b3ae-878975b1ca82" satisfied condition "success or failure" May 28 21:22:51.776: INFO: Got logs for pod "busybox-privileged-false-752bec08-d44f-4118-b3ae-878975b1ca82": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:22:51.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-778" for this suite. 
•{"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":60,"skipped":963,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:22:51.783: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 28 21:22:51.831: INFO: Waiting up to 5m0s for pod "downward-api-cbe6b568-67ae-481e-a88c-7385732ff994" in namespace "downward-api-7699" to be "success or failure" May 28 21:22:51.836: INFO: Pod "downward-api-cbe6b568-67ae-481e-a88c-7385732ff994": Phase="Pending", Reason="", readiness=false. Elapsed: 4.29674ms May 28 21:22:53.840: INFO: Pod "downward-api-cbe6b568-67ae-481e-a88c-7385732ff994": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0084618s May 28 21:22:55.844: INFO: Pod "downward-api-cbe6b568-67ae-481e-a88c-7385732ff994": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012570837s STEP: Saw pod success May 28 21:22:55.844: INFO: Pod "downward-api-cbe6b568-67ae-481e-a88c-7385732ff994" satisfied condition "success or failure" May 28 21:22:55.847: INFO: Trying to get logs from node jerma-worker pod downward-api-cbe6b568-67ae-481e-a88c-7385732ff994 container dapi-container: STEP: delete the pod May 28 21:22:55.894: INFO: Waiting for pod downward-api-cbe6b568-67ae-481e-a88c-7385732ff994 to disappear May 28 21:22:55.907: INFO: Pod downward-api-cbe6b568-67ae-481e-a88c-7385732ff994 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:22:55.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7699" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":969,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:22:55.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:22:56.299: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5857" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":992,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:22:56.313: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:23:01.021: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3473" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":63,"skipped":999,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:23:01.120: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:23:06.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-7458" for this suite. • [SLOW TEST:5.153 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":64,"skipped":1022,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:23:06.273: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 28 21:23:07.127: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 28 21:23:09.138: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297787, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297787, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297787, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297787, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 21:23:11.144: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297787, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297787, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297787, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726297787, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 21:23:14.220: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:23:14.323: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4418" for this suite. STEP: Destroying namespace "webhook-4418-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.151 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":65,"skipped":1031,"failed":0} SSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:23:14.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 28 21:23:14.532: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 28 21:23:14.544: INFO: Waiting for terminating namespaces to be deleted... 
May 28 21:23:14.547: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 28 21:23:14.552: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 28 21:23:14.552: INFO: Container kindnet-cni ready: true, restart count 2 May 28 21:23:14.552: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 28 21:23:14.552: INFO: Container kube-proxy ready: true, restart count 0 May 28 21:23:14.552: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 28 21:23:14.559: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 28 21:23:14.559: INFO: Container kindnet-cni ready: true, restart count 2 May 28 21:23:14.559: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 28 21:23:14.559: INFO: Container kube-bench ready: false, restart count 0 May 28 21:23:14.559: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 28 21:23:14.559: INFO: Container kube-proxy ready: true, restart count 0 May 28 21:23:14.559: INFO: sample-webhook-deployment-5f65f8c764-ghfz2 from webhook-4418 started at 2020-05-28 21:23:07 +0000 UTC (1 container status recorded) May 28 21:23:14.559: INFO: Container sample-webhook ready: true, restart count 0 May 28 21:23:14.559: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 28 21:23:14.559: INFO: Container kube-hunter ready: false, restart count 0 May 28 21:23:14.559: INFO: busybox-scheduling-e0eaf3ab-cdf9-4344-9cda-44a628e97fe8 from kubelet-test-6159 started at 2020-05-28 21:22:39 +0000 UTC (1 container status recorded) May 28 21:23:14.559: INFO: Container busybox-scheduling-e0eaf3ab-cdf9-4344-9cda-44a628e97fe8 ready: true, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-4878bb15-b06a-411f-a2ec-df1dfb9088b9 95 STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled STEP: removing the label kubernetes.io/e2e-4878bb15-b06a-411f-a2ec-df1dfb9088b9 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-4878bb15-b06a-411f-a2ec-df1dfb9088b9 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:28:23.382: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1268" for this suite.
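The five-minute gap in the timestamps above (21:23 to 21:28) appears to be the test waiting out its confirmation window that pod5 stays unscheduled. The rule being validated: a hostPort bound with an empty hostIP is treated as 0.0.0.0, i.e. every address on the node, so a second pod asking for the same port and protocol on 127.0.0.1 of the same node must not schedule. A sketch, reusing the node name from this run but otherwise illustrative:

kubectl label node jerma-worker demo=hostport-conflict
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod4
spec:
  nodeSelector:
    demo: hostport-conflict
  containers:
  - name: c
    image: busybox:1.29
    command: ["sleep", "3600"]
    ports:
    - containerPort: 8080
      hostPort: 54322            # hostIP omitted: binds 0.0.0.0
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5
spec:
  nodeSelector:
    demo: hostport-conflict
  containers:
  - name: c
    image: busybox:1.29
    command: ["sleep", "3600"]
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1          # overlaps pod4's 0.0.0.0 binding
      protocol: TCP
EOF
kubectl describe pod pod5        # expect a FailedScheduling event about free ports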
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.965 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":66,"skipped":1037,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:28:23.389: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 28 21:28:23.604: INFO: Waiting up to 5m0s for pod "client-containers-0fd03a7f-2ece-4666-8951-1f4ada417c3a" in namespace "containers-6066" to be "success or failure" May 28 21:28:23.615: INFO: Pod "client-containers-0fd03a7f-2ece-4666-8951-1f4ada417c3a": Phase="Pending", Reason="", readiness=false. Elapsed: 10.934303ms May 28 21:28:25.650: INFO: Pod "client-containers-0fd03a7f-2ece-4666-8951-1f4ada417c3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.046044579s May 28 21:28:27.654: INFO: Pod "client-containers-0fd03a7f-2ece-4666-8951-1f4ada417c3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.050407097s STEP: Saw pod success May 28 21:28:27.654: INFO: Pod "client-containers-0fd03a7f-2ece-4666-8951-1f4ada417c3a" satisfied condition "success or failure" May 28 21:28:27.658: INFO: Trying to get logs from node jerma-worker pod client-containers-0fd03a7f-2ece-4666-8951-1f4ada417c3a container test-container: STEP: delete the pod May 28 21:28:27.862: INFO: Waiting for pod client-containers-0fd03a7f-2ece-4666-8951-1f4ada417c3a to disappear May 28 21:28:27.883: INFO: Pod client-containers-0fd03a7f-2ece-4666-8951-1f4ada417c3a no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:28:27.883: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6066" for this suite. 
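The Docker Containers spec relies on how container fields map onto image metadata: args overrides the image's CMD, while command would override its ENTRYPOINT. Since the stock busybox image has no ENTRYPOINT and a CMD of "sh", supplying only args makes those args the executed process. Sketch (name and image are illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: args-override-demo
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    args: ["echo", "overridden", "args"]   # replaces the image CMD ("sh")
EOF
kubectl logs args-override-demo    # expect: overridden args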
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":67,"skipped":1054,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:28:27.891: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9549.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9549.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 28 21:28:34.149: INFO: DNS probes using dns-9549/dns-test-a7da7a64-993a-4711-a11f-5c180620964d succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:28:34.170: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9549" for this suite. 
• [SLOW TEST:6.378 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":68,"skipped":1101,"failed":0} [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:28:34.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 28 21:28:34.779: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:28:34.783: INFO: Number of nodes with available pods: 0 May 28 21:28:34.783: INFO: Node jerma-worker is running more than one daemon pod May 28 21:28:35.789: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:28:35.793: INFO: Number of nodes with available pods: 0 May 28 21:28:35.793: INFO: Node jerma-worker is running more than one daemon pod May 28 21:28:36.789: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:28:36.794: INFO: Number of nodes with available pods: 0 May 28 21:28:36.794: INFO: Node jerma-worker is running more than one daemon pod May 28 21:28:37.789: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:28:37.792: INFO: Number of nodes with available pods: 0 May 28 21:28:37.792: INFO: Node jerma-worker is running more than one daemon pod May 28 21:28:38.788: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:28:38.792: INFO: Number of nodes with available pods: 0 May 28 21:28:38.792: INFO: Node jerma-worker is running more than one daemon pod May 28 21:28:39.788: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:28:39.791: INFO: Number of nodes with available pods: 2 May 28 21:28:39.791: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
May 28 21:28:39.830: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:28:39.834: INFO: Number of nodes with available pods: 1 May 28 21:28:39.834: INFO: Node jerma-worker is running more than one daemon pod May 28 21:28:40.839: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:28:40.842: INFO: Number of nodes with available pods: 1 May 28 21:28:40.842: INFO: Node jerma-worker is running more than one daemon pod May 28 21:28:41.839: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:28:41.845: INFO: Number of nodes with available pods: 1 May 28 21:28:41.845: INFO: Node jerma-worker is running more than one daemon pod May 28 21:28:42.839: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:28:42.842: INFO: Number of nodes with available pods: 1 May 28 21:28:42.842: INFO: Node jerma-worker is running more than one daemon pod May 28 21:28:43.841: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:28:43.844: INFO: Number of nodes with available pods: 1 May 28 21:28:43.844: INFO: Node jerma-worker is running more than one daemon pod May 28 21:28:44.872: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:28:44.875: INFO: Number of nodes with available pods: 1 May 28 21:28:44.875: INFO: Node jerma-worker is running more than one daemon pod May 28 21:28:45.838: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:28:45.842: INFO: Number of nodes with available pods: 1 May 28 21:28:45.842: INFO: Node jerma-worker is running more than one daemon pod May 28 21:28:46.839: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:28:46.842: INFO: Number of nodes with available pods: 2 May 28 21:28:46.842: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7652, will wait for the garbage collector to delete the pods May 28 21:28:46.905: INFO: Deleting DaemonSet.extensions daemon-set took: 6.490316ms May 28 21:28:47.205: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.464785ms May 28 21:28:59.609: INFO: Number of nodes with available pods: 0 May 28 21:28:59.609: INFO: Number of running nodes: 0, number of available pods: 0 May 28 21:28:59.631: INFO: daemonset: 
{"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7652/daemonsets","resourceVersion":"19899565"},"items":null} May 28 21:28:59.634: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7652/pods","resourceVersion":"19899565"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:28:59.643: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7652" for this suite. • [SLOW TEST:25.380 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":69,"skipped":1101,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:28:59.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container May 28 21:29:04.297: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-706 pod-service-account-ea37f250-40d5-40b1-b261-b5fcb4c39634 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 28 21:29:04.548: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-706 pod-service-account-ea37f250-40d5-40b1-b261-b5fcb4c39634 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 28 21:29:04.762: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-706 pod-service-account-ea37f250-40d5-40b1-b261-b5fcb4c39634 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:29:04.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-706" for this suite. 
• [SLOW TEST:5.321 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":70,"skipped":1129,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:29:04.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:29:05.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-7017" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":71,"skipped":1141,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:29:05.246: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 28 21:29:05.321: INFO: Waiting up to 5m0s for pod "pod-2f2fdda5-e168-4646-ae53-b9c2f417390f" in namespace "emptydir-8639" to be "success or failure" May 28 21:29:05.323: INFO: Pod "pod-2f2fdda5-e168-4646-ae53-b9c2f417390f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.611955ms May 28 21:29:07.409: INFO: Pod "pod-2f2fdda5-e168-4646-ae53-b9c2f417390f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.088587487s May 28 21:29:09.412: INFO: Pod "pod-2f2fdda5-e168-4646-ae53-b9c2f417390f": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.091510268s May 28 21:29:11.417: INFO: Pod "pod-2f2fdda5-e168-4646-ae53-b9c2f417390f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.096095681s STEP: Saw pod success May 28 21:29:11.417: INFO: Pod "pod-2f2fdda5-e168-4646-ae53-b9c2f417390f" satisfied condition "success or failure" May 28 21:29:11.420: INFO: Trying to get logs from node jerma-worker pod pod-2f2fdda5-e168-4646-ae53-b9c2f417390f container test-container: STEP: delete the pod May 28 21:29:11.456: INFO: Waiting for pod pod-2f2fdda5-e168-4646-ae53-b9c2f417390f to disappear May 28 21:29:11.466: INFO: Pod pod-2f2fdda5-e168-4646-ae53-b9c2f417390f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:29:11.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8639" for this suite. • [SLOW TEST:6.226 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1175,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:29:11.473: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 28 21:29:11.617: INFO: Waiting up to 5m0s for pod "pod-4de06c12-7df1-4094-a8e0-ba9757904985" in namespace "emptydir-922" to be "success or failure" May 28 21:29:11.627: INFO: Pod "pod-4de06c12-7df1-4094-a8e0-ba9757904985": Phase="Pending", Reason="", readiness=false. Elapsed: 10.168555ms May 28 21:29:13.631: INFO: Pod "pod-4de06c12-7df1-4094-a8e0-ba9757904985": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014438882s May 28 21:29:15.636: INFO: Pod "pod-4de06c12-7df1-4094-a8e0-ba9757904985": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.018753084s STEP: Saw pod success May 28 21:29:15.636: INFO: Pod "pod-4de06c12-7df1-4094-a8e0-ba9757904985" satisfied condition "success or failure" May 28 21:29:15.639: INFO: Trying to get logs from node jerma-worker pod pod-4de06c12-7df1-4094-a8e0-ba9757904985 container test-container: STEP: delete the pod May 28 21:29:15.652: INFO: Waiting for pod pod-4de06c12-7df1-4094-a8e0-ba9757904985 to disappear May 28 21:29:15.657: INFO: Pod pod-4de06c12-7df1-4094-a8e0-ba9757904985 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:29:15.657: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-922" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1203,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:29:15.663: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 28 21:29:16.557: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 28 21:29:18.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298156, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298156, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298156, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298156, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 21:29:20.598: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298156, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298156, loc:(*time.Location)(0x78ee0c0)}}, 
Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298156, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298156, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 21:29:23.657: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API May 28 21:29:23.716: INFO: Waiting for webhook configuration to be ready... STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 28 21:29:27.875: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-1389 to-be-attached-pod -i -c=container1' May 28 21:29:27.995: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:29:28.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1389" for this suite. STEP: Destroying namespace "webhook-1389-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:12.452 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":74,"skipped":1224,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:29:28.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 28 21:29:29.238: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 28 21:29:31.253: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298169, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298169, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298169, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298169, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 21:29:33.263: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298169, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298169, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298169, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298169, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 21:29:36.282: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:29:36.774: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2530" for this suite. STEP: Destroying namespace "webhook-2530-markers" for this suite. 
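For readers following along: the spec above creates MutatingWebhookConfiguration objects, lists them by label, and deletes them as a collection. Here is a minimal client-go sketch of those two calls, illustrative only and not the suite's own code; the kubeconfig path and label selector are assumptions, and the context-free method signatures match the v1.17-era client-go this run was built against (newer releases take a context.Context as the first argument).

```go
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; adjust for your environment.
	config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	// List mutating webhook configurations by a label selector, then
	// delete them as a collection -- the two operations this spec checks.
	sel := metav1.ListOptions{LabelSelector: "e2e-list-test-uid=some-uid"} // assumed label
	list, err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().List(sel)
	if err != nil {
		panic(err)
	}
	fmt.Printf("found %d mutating webhook configurations\n", len(list.Items))

	if err := client.AdmissionregistrationV1().MutatingWebhookConfigurations().
		DeleteCollection(&metav1.DeleteOptions{}, sel); err != nil {
		panic(err)
	}
}
```

The configMap created after the DeleteCollection step should come back unmutated, which is how the spec confirms the webhooks are really gone.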
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.782 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":75,"skipped":1255,"failed":0} SSS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:29:36.898: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:29:36.974: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 28 21:29:36.995: INFO: Pod name sample-pod: Found 0 pods out of 1 May 28 21:29:42.034: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 28 21:29:42.034: INFO: Creating deployment "test-rolling-update-deployment" May 28 21:29:42.044: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 28 21:29:42.052: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 28 21:29:44.058: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 28 21:29:44.060: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298182, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298182, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298182, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298182, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 21:29:46.184: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 28 21:29:46.206: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-6505 /apis/apps/v1/namespaces/deployment-6505/deployments/test-rolling-update-deployment e21f0fef-7ddd-4e34-b7b6-625c98dc21cf 19900013 1 2020-05-28 21:29:42 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00376c998 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-28 21:29:42 +0000 UTC,LastTransitionTime:2020-05-28 21:29:42 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-28 21:29:45 +0000 UTC,LastTransitionTime:2020-05-28 21:29:42 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 28 21:29:46.253: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-6505 /apis/apps/v1/namespaces/deployment-6505/replicasets/test-rolling-update-deployment-67cf4f6444 a71e7d1e-cc1f-4819-8bbe-d698a1d1a072 19900002 1 2020-05-28 21:29:42 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment e21f0fef-7ddd-4e34-b7b6-625c98dc21cf 0xc005af6c67 0xc005af6c68}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent
SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc005af6cd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 28 21:29:46.253: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 28 21:29:46.253: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-6505 /apis/apps/v1/namespaces/deployment-6505/replicasets/test-rolling-update-controller 56e5f98d-d0e7-46ca-970d-99ac03e7f0f5 19900011 2 2020-05-28 21:29:36 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment e21f0fef-7ddd-4e34-b7b6-625c98dc21cf 0xc005af6b97 0xc005af6b98}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc005af6bf8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 28 21:29:46.256: INFO: Pod "test-rolling-update-deployment-67cf4f6444-hw6db" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-hw6db test-rolling-update-deployment-67cf4f6444- deployment-6505 /api/v1/namespaces/deployment-6505/pods/test-rolling-update-deployment-67cf4f6444-hw6db bc482eff-7b5d-42b4-8d7e-7aa9e95a4596 19900001 0 2020-05-28 21:29:42 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 a71e7d1e-cc1f-4819-8bbe-d698a1d1a072 0xc005af7117 0xc005af7118}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b8j6h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b8j6h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b8j6h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 21:29:42 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 21:29:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 21:29:45 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 21:29:42 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.9,StartTime:2020-05-28 21:29:42 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-28 21:29:45 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://8cdce48fd8f2ae55ffb676f12ce79e519a1bb5eb94bf01c32b00e77329e8efb7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.9,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:29:46.256: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6505" for this suite. • [SLOW TEST:9.365 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":76,"skipped":1258,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:29:46.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:29:46.388: INFO: Waiting up to 5m0s for pod "busybox-user-65534-afb2346a-2e45-42d8-9b4e-e330f3909fde" in namespace "security-context-test-1888" to be "success or failure" May 28 21:29:46.604: INFO: Pod "busybox-user-65534-afb2346a-2e45-42d8-9b4e-e330f3909fde": Phase="Pending", Reason="", readiness=false. Elapsed: 215.96616ms May 28 21:29:48.610: INFO: Pod "busybox-user-65534-afb2346a-2e45-42d8-9b4e-e330f3909fde": Phase="Pending", Reason="", readiness=false. Elapsed: 2.221671032s May 28 21:29:50.617: INFO: Pod "busybox-user-65534-afb2346a-2e45-42d8-9b4e-e330f3909fde": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.228891979s May 28 21:29:50.617: INFO: Pod "busybox-user-65534-afb2346a-2e45-42d8-9b4e-e330f3909fde" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:29:50.617: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1888" for this suite. •{"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":77,"skipped":1292,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:29:50.625: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-32bb8252-ff3b-4625-bfc5-ca30d7a6773d STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-32bb8252-ff3b-4625-bfc5-ca30d7a6773d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:31:15.184: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6327" for this suite. 
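The long "waiting to observe update in volume" phase above is the kubelet's periodic sync propagating the updated ConfigMap data into the projected volume, which is also why this spec is reported as a SLOW TEST just below. A minimal sketch of the pod shape under test, built with the k8s.io/api types and printed as JSON (illustrative only; the image, command, and names are assumptions, not the suite's own values):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// A pod that mounts a ConfigMap through a projected volume. Updates to
	// the ConfigMap are eventually rewritten into the mounted files by the
	// kubelet, which is what the test polls for.
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "projected-configmap-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "watcher",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "while true; do cat /etc/projected/data-1; sleep 5; done"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-cm",
					MountPath: "/etc/projected",
					ReadOnly:  true,
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "projected-cm",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							ConfigMap: &corev1.ConfigMapProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-upd"},
							},
						}},
					},
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```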
• [SLOW TEST:84.568 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1344,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:31:15.194: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-b3145249-8991-4e70-b3f8-c9708d9ca6f3 STEP: Creating secret with name secret-projected-all-test-volume-9e1d3fe0-352f-48c2-98cf-22aac022a08a STEP: Creating a pod to test Check all projections for projected volume plugin May 28 21:31:15.288: INFO: Waiting up to 5m0s for pod "projected-volume-0f75e980-bfd5-43e1-bf52-bb11b8482470" in namespace "projected-6863" to be "success or failure" May 28 21:31:15.323: INFO: Pod "projected-volume-0f75e980-bfd5-43e1-bf52-bb11b8482470": Phase="Pending", Reason="", readiness=false. Elapsed: 34.605811ms May 28 21:31:17.472: INFO: Pod "projected-volume-0f75e980-bfd5-43e1-bf52-bb11b8482470": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183975691s May 28 21:31:19.476: INFO: Pod "projected-volume-0f75e980-bfd5-43e1-bf52-bb11b8482470": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.187846269s STEP: Saw pod success May 28 21:31:19.476: INFO: Pod "projected-volume-0f75e980-bfd5-43e1-bf52-bb11b8482470" satisfied condition "success or failure" May 28 21:31:19.478: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-0f75e980-bfd5-43e1-bf52-bb11b8482470 container projected-all-volume-test: STEP: delete the pod May 28 21:31:19.523: INFO: Waiting for pod projected-volume-0f75e980-bfd5-43e1-bf52-bb11b8482470 to disappear May 28 21:31:19.531: INFO: Pod projected-volume-0f75e980-bfd5-43e1-bf52-bb11b8482470 no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:31:19.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6863" for this suite. •{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1361,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:31:19.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:31:35.863: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-6097" for this suite. • [SLOW TEST:16.333 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":80,"skipped":1368,"failed":0} SSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:31:35.872: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command May 28 21:31:35.966: INFO: Waiting up to 5m0s for pod "client-containers-dd382fe1-9fd1-4bd0-9b12-c837ee96188b" in namespace "containers-9302" to be "success or failure" May 28 21:31:35.979: INFO: Pod "client-containers-dd382fe1-9fd1-4bd0-9b12-c837ee96188b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 12.385836ms May 28 21:31:37.983: INFO: Pod "client-containers-dd382fe1-9fd1-4bd0-9b12-c837ee96188b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.016927309s May 28 21:31:39.988: INFO: Pod "client-containers-dd382fe1-9fd1-4bd0-9b12-c837ee96188b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021659537s STEP: Saw pod success May 28 21:31:39.988: INFO: Pod "client-containers-dd382fe1-9fd1-4bd0-9b12-c837ee96188b" satisfied condition "success or failure" May 28 21:31:39.991: INFO: Trying to get logs from node jerma-worker2 pod client-containers-dd382fe1-9fd1-4bd0-9b12-c837ee96188b container test-container: STEP: delete the pod May 28 21:31:40.115: INFO: Waiting for pod client-containers-dd382fe1-9fd1-4bd0-9b12-c837ee96188b to disappear May 28 21:31:40.135: INFO: Pod client-containers-dd382fe1-9fd1-4bd0-9b12-c837ee96188b no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:31:40.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-9302" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":81,"skipped":1375,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:31:40.142: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod test-webserver-5bbf095f-3146-4d40-a500-57eba0a2675c in namespace container-probe-4097 May 28 21:31:44.261: INFO: Started pod test-webserver-5bbf095f-3146-4d40-a500-57eba0a2675c in namespace container-probe-4097 STEP: checking the pod's current state and verifying that restartCount is present May 28 21:31:44.264: INFO: Initial restart count of pod test-webserver-5bbf095f-3146-4d40-a500-57eba0a2675c is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:35:45.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-4097" for this suite. 
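The roughly four-minute observation window above, during which restartCount must stay at 0, accounts for nearly all of the SLOW TEST time reported just below. A sketch of a pod with an HTTP liveness probe of this shape (illustrative only; the image, port, path, and timings are assumptions, and the probe field is named Handler in the v1.17-era API this run uses, ProbeHandler in later releases):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "test-webserver-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "test-webserver",
				Image: "nginx", // assumed; any server answering on the probed path works
				LivenessProbe: &corev1.Probe{
					// Handler is the v1.17-era field name (later: ProbeHandler).
					Handler: corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{Path: "/", Port: intstr.FromInt(80)},
					},
					InitialDelaySeconds: 15,
					PeriodSeconds:       10,
					// While the server stays healthy the probe never trips,
					// so the kubelet never restarts the container and
					// restartCount remains 0 for the whole window.
					FailureThreshold: 3,
				},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```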
• [SLOW TEST:245.021 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":82,"skipped":1389,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:35:45.164: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 28 21:35:45.325: INFO: Waiting up to 5m0s for pod "pod-2565c693-b0a1-4af6-9956-6ced414fc3df" in namespace "emptydir-9138" to be "success or failure" May 28 21:35:45.360: INFO: Pod "pod-2565c693-b0a1-4af6-9956-6ced414fc3df": Phase="Pending", Reason="", readiness=false. Elapsed: 34.703155ms May 28 21:35:47.364: INFO: Pod "pod-2565c693-b0a1-4af6-9956-6ced414fc3df": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038599262s May 28 21:35:49.368: INFO: Pod "pod-2565c693-b0a1-4af6-9956-6ced414fc3df": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.042383597s STEP: Saw pod success May 28 21:35:49.368: INFO: Pod "pod-2565c693-b0a1-4af6-9956-6ced414fc3df" satisfied condition "success or failure" May 28 21:35:49.371: INFO: Trying to get logs from node jerma-worker2 pod pod-2565c693-b0a1-4af6-9956-6ced414fc3df container test-container: STEP: delete the pod May 28 21:35:49.418: INFO: Waiting for pod pod-2565c693-b0a1-4af6-9956-6ced414fc3df to disappear May 28 21:35:49.426: INFO: Pod pod-2565c693-b0a1-4af6-9956-6ced414fc3df no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:35:49.426: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9138" for this suite. 
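For context, the (non-root,0777,default) variant boils down to a pod like the following sketch: a non-root UID, an emptyDir on the node's default medium, and a container that creates a 0777 file and reads its mode back. Illustrative only; the UID, image, and command are assumptions rather than the suite's own test binary.

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	uid := int64(1001) // hypothetical non-root UID
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-nonroot-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy:   corev1.RestartPolicyNever,
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			Containers: []corev1.Container{{
				Name:    "test-container",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "touch /mnt/volume/f && chmod 0777 /mnt/volume/f && ls -l /mnt/volume"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "scratch",
					MountPath: "/mnt/volume",
				}},
			}},
			Volumes: []corev1.Volume{{
				Name: "scratch",
				// Leaving Medium unset selects the node's default storage,
				// as opposed to corev1.StorageMediumMemory (tmpfs) used by
				// the (root,0777,tmpfs) and (root,0666,tmpfs) variants above.
				VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```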
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":83,"skipped":1416,"failed":0} SS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:35:49.432: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-208 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-208 STEP: Deleting pre-stop pod May 28 21:36:02.746: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." ], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:36:02.751: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-208" for this suite. 
• [SLOW TEST:13.355 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":84,"skipped":1418,"failed":0} SSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:36:02.787: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-d3eac936-df6a-4110-bfa2-f38cd25394a2 STEP: Creating a pod to test consume configMaps May 28 21:36:03.016: INFO: Waiting up to 5m0s for pod "pod-configmaps-79a5139f-8947-4941-9bb5-b9dd6f1eef87" in namespace "configmap-9946" to be "success or failure" May 28 21:36:03.036: INFO: Pod "pod-configmaps-79a5139f-8947-4941-9bb5-b9dd6f1eef87": Phase="Pending", Reason="", readiness=false. Elapsed: 20.057579ms May 28 21:36:05.095: INFO: Pod "pod-configmaps-79a5139f-8947-4941-9bb5-b9dd6f1eef87": Phase="Pending", Reason="", readiness=false. Elapsed: 2.078597329s May 28 21:36:07.099: INFO: Pod "pod-configmaps-79a5139f-8947-4941-9bb5-b9dd6f1eef87": Phase="Running", Reason="", readiness=true. Elapsed: 4.082819703s May 28 21:36:09.103: INFO: Pod "pod-configmaps-79a5139f-8947-4941-9bb5-b9dd6f1eef87": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.087173574s STEP: Saw pod success May 28 21:36:09.103: INFO: Pod "pod-configmaps-79a5139f-8947-4941-9bb5-b9dd6f1eef87" satisfied condition "success or failure" May 28 21:36:09.106: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-79a5139f-8947-4941-9bb5-b9dd6f1eef87 container configmap-volume-test: STEP: delete the pod May 28 21:36:09.146: INFO: Waiting for pod pod-configmaps-79a5139f-8947-4941-9bb5-b9dd6f1eef87 to disappear May 28 21:36:09.150: INFO: Pod pod-configmaps-79a5139f-8947-4941-9bb5-b9dd6f1eef87 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:36:09.150: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9946" for this suite. 
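The "multiple volumes in the same pod" shape above is simply two pod volumes backed by the same ConfigMap and mounted at two paths. A minimal sketch (illustrative only; the names, image, and command are assumptions):

```go
package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	ref := corev1.LocalObjectReference{Name: "configmap-test-volume"}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-configmaps-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "configmap-volume-test",
				Image:   "busybox", // assumed image
				Command: []string{"sh", "-c", "cat /etc/configmap-volume-1/data-1 /etc/configmap-volume-2/data-1"},
				VolumeMounts: []corev1.VolumeMount{
					{Name: "configmap-volume-1", MountPath: "/etc/configmap-volume-1"},
					{Name: "configmap-volume-2", MountPath: "/etc/configmap-volume-2"},
				},
			}},
			// Two volumes referencing the same ConfigMap, mounted at two
			// different paths in the same pod -- the shape under test.
			Volumes: []corev1.Volume{
				{Name: "configmap-volume-1", VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{LocalObjectReference: ref}}},
				{Name: "configmap-volume-2", VolumeSource: corev1.VolumeSource{ConfigMap: &corev1.ConfigMapVolumeSource{LocalObjectReference: ref}}},
			},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}
```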
• [SLOW TEST:6.369 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1425,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:36:09.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 28 21:36:09.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-4296' May 28 21:36:12.795: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 28 21:36:12.795: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc May 28 21:36:12.822: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-7vb58] May 28 21:36:12.822: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-7vb58" in namespace "kubectl-4296" to be "running and ready" May 28 21:36:12.846: INFO: Pod "e2e-test-httpd-rc-7vb58": Phase="Pending", Reason="", readiness=false. Elapsed: 23.469466ms May 28 21:36:14.938: INFO: Pod "e2e-test-httpd-rc-7vb58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.116171293s May 28 21:36:16.943: INFO: Pod "e2e-test-httpd-rc-7vb58": Phase="Running", Reason="", readiness=true. Elapsed: 4.12056115s May 28 21:36:16.943: INFO: Pod "e2e-test-httpd-rc-7vb58" satisfied condition "running and ready" May 28 21:36:16.943: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [e2e-test-httpd-rc-7vb58] May 28 21:36:16.943: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-4296' May 28 21:36:17.083: INFO: stderr: "" May 28 21:36:17.083: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.16. 
Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.16. Set the 'ServerName' directive globally to suppress this message\n[Thu May 28 21:36:15.550274 2020] [mpm_event:notice] [pid 1:tid 140102084074344] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Thu May 28 21:36:15.550324 2020] [core:notice] [pid 1:tid 140102084074344] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 May 28 21:36:17.083: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-4296' May 28 21:36:17.190: INFO: stderr: "" May 28 21:36:17.190: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:36:17.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4296" for this suite. • [SLOW TEST:8.040 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1521 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":86,"skipped":1433,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:36:17.197: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 28 21:36:17.752: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 28 21:36:19.763: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298577, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298577, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", 
Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298577, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298577, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 21:36:22.824: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:36:22.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:36:24.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-8690" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:7.217 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":87,"skipped":1436,"failed":0} SSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:36:24.414: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:36:24.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-8801" for this 
suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":88,"skipped":1443,"failed":0} SS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:36:25.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 28 21:36:25.997: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 28 21:36:28.157: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298585, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298585, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298586, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298585, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 21:36:30.220: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298585, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298585, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298586, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298585, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 21:36:33.195: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with pruning [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:36:33.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6224-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:36:33.922: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6621" for this suite. STEP: Destroying namespace "webhook-6621-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.000 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with pruning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":89,"skipped":1445,"failed":0} SSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:36:34.040: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 28 21:36:38.150: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:36:38.195: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2789" for this suite. 
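A note on what this spec checks: TerminationMessagePolicy: FallbackToLogsOnError substitutes the tail of the container log for the termination message only when the container fails and the message file is empty; here the container succeeds and writes no logs, so the recorded message must stay empty (the "Expected: &{} to match" line above). Below is a minimal sketch of such a pod using the v1.17-era k8s.io/api types this suite builds against; the pod name and image are placeholders, not the test's actual fixtures.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "termination-message-demo"}, // placeholder name
		Spec: corev1.PodSpec{
			// The container exits 0 without writing the termination-log file and
			// without producing logs, so the recorded message stays empty --
			// exactly what the spec above asserts.
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox:1.29", // placeholder image
				Command: []string{"/bin/sh", "-c", "exit 0"},
				// Read the file first; fall back to the log tail only when the
				// container fails AND the file is empty.
				TerminationMessagePath:   "/dev/termination-log",
				TerminationMessagePolicy: corev1.TerminationMessageFallbackToLogsOnError,
			}},
		},
	}
	out, err := yaml.Marshal(pod)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```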
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":90,"skipped":1449,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:36:38.204: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should use the image defaults if command and args are blank [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:36:42.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-242" for this suite. •{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":91,"skipped":1485,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:36:42.317: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 28 21:36:46.935: INFO: Successfully updated pod "labelsupdate45464512-7e1b-4545-a20a-40f5aea79f5f" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:36:48.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1707" for this suite. 
• [SLOW TEST:6.679 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1502,"failed":0} S ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:36:48.996: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-5cbb3295-3fa0-4230-8729-220290cea0a9 STEP: Creating a pod to test consume configMaps May 28 21:36:49.094: INFO: Waiting up to 5m0s for pod "pod-configmaps-806c5e2f-4b0f-4edf-a0c2-996d3b16f9d4" in namespace "configmap-8658" to be "success or failure" May 28 21:36:49.098: INFO: Pod "pod-configmaps-806c5e2f-4b0f-4edf-a0c2-996d3b16f9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.072299ms May 28 21:36:51.106: INFO: Pod "pod-configmaps-806c5e2f-4b0f-4edf-a0c2-996d3b16f9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.011941661s May 28 21:36:53.179: INFO: Pod "pod-configmaps-806c5e2f-4b0f-4edf-a0c2-996d3b16f9d4": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084414962s May 28 21:36:55.183: INFO: Pod "pod-configmaps-806c5e2f-4b0f-4edf-a0c2-996d3b16f9d4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.08848793s STEP: Saw pod success May 28 21:36:55.183: INFO: Pod "pod-configmaps-806c5e2f-4b0f-4edf-a0c2-996d3b16f9d4" satisfied condition "success or failure" May 28 21:36:55.186: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-806c5e2f-4b0f-4edf-a0c2-996d3b16f9d4 container configmap-volume-test: STEP: delete the pod May 28 21:36:55.258: INFO: Waiting for pod pod-configmaps-806c5e2f-4b0f-4edf-a0c2-996d3b16f9d4 to disappear May 28 21:36:55.267: INFO: Pod pod-configmaps-806c5e2f-4b0f-4edf-a0c2-996d3b16f9d4 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:36:55.267: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8658" for this suite. 
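Two spec knobs drive the ConfigMap variant traced above: Items remaps a single ConfigMap key to a nested path (the "with mappings" part), and a pod-level RunAsUser exercises the non-root [LinuxOnly] case. An illustrative sketch follows; the key name, UID, and image are placeholders patterned on the log, not copied from the suite. The test then waits for phase Succeeded (the "success or failure" condition logged above) and reads the container's output.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/yaml"
)

func main() {
	uid := int64(1000) // any non-root UID; placeholder
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "configmap-mapping-demo"}, // placeholder name
		Spec: corev1.PodSpec{
			// Run the whole pod as a non-root user, as in the [LinuxOnly] variant.
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid},
			RestartPolicy:   corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:         "configmap-volume-test",
				Image:        "busybox:1.29", // placeholder image
				Command:      []string{"cat", "/etc/configmap-volume/path/to/data-2"},
				VolumeMounts: []corev1.VolumeMount{{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}},
			}},
			Volumes: []corev1.Volume{{
				Name: "configmap-volume",
				VolumeSource: corev1.VolumeSource{
					ConfigMap: &corev1.ConfigMapVolumeSource{
						LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-map"},
						// "with mappings": expose only one key, under a remapped path.
						Items: []corev1.KeyToPath{{Key: "data-2", Path: "path/to/data-2"}},
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod) // error ignored for brevity in this sketch
	fmt.Print(string(out))
}
```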
• [SLOW TEST:6.320 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1503,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:36:55.316: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 28 21:37:03.471: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 28 21:37:03.526: INFO: Pod pod-with-prestop-http-hook still exists May 28 21:37:05.527: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 28 21:37:05.538: INFO: Pod pod-with-prestop-http-hook still exists May 28 21:37:07.527: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 28 21:37:07.550: INFO: Pod pod-with-prestop-http-hook still exists May 28 21:37:09.527: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 28 21:37:09.533: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:37:09.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-5442" for this suite. 
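The prestop spec works in two halves: a handler pod that records incoming HTTP hits (the "container to handle the HTTPGet hook request" above), and a second pod whose preStop hook GETs the handler when the pod is deleted. The multi-second "still exists" loop in the trace is the deletion grace period during which the hook runs before the container is killed. A hedged sketch of the hooked pod; the handler IP, port, and path are placeholders, and corev1.Handler is the v1.17-era type name (later renamed LifecycleHandler):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-with-prestop-http-hook"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pod-with-prestop-http-hook",
				Image: "k8s.gcr.io/pause:3.1", // placeholder image
				Lifecycle: &corev1.Lifecycle{
					// On deletion the kubelet fires this GET before sending SIGTERM;
					// the handler pod records the hit, which "check prestop hook" verifies.
					PreStop: &corev1.Handler{
						HTTPGet: &corev1.HTTPGetAction{
							Path: "/echo?msg=prestop",
							Host: "10.244.1.10", // placeholder: the handler pod's IP
							Port: intstr.FromInt(8080),
						},
					},
				},
			}},
		},
	}
	out, _ := yaml.Marshal(pod) // error ignored for brevity in this sketch
	fmt.Print(string(out))
}
```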
• [SLOW TEST:14.233 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":94,"skipped":1524,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:37:09.550: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 28 21:37:09.615: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-2924' May 28 21:37:09.723: INFO: stderr: "" May 28 21:37:09.723: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 28 21:37:09.727: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-2924' May 28 21:37:14.469: INFO: stderr: "" May 28 21:37:14.469: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:37:14.469: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2924" for this suite. 
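With --restart=Never (plus the since-removed run-pod/v1 generator), kubectl run creates a bare Pod rather than a Deployment or Job, which is why the cleanup above is a plain `kubectl delete pods`. A rough client-go equivalent, assuming v0.17-era signatures (v0.18+ adds a context.Context and a metav1.CreateOptions argument to Create):

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig the suite passes via --kubeconfig.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-httpd-pod"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever, // --restart=Never => a plain, never-restarted Pod
			Containers: []corev1.Container{{
				Name:  "e2e-test-httpd-pod",
				Image: "docker.io/library/httpd:2.4.38-alpine",
			}},
		},
	}
	created, err := client.CoreV1().Pods("kubectl-2924").Create(pod)
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod/%s created\n", created.Name)
}
```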
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":95,"skipped":1545,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:37:14.482: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 28 21:37:15.431: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 28 21:37:17.438: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298635, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298635, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298635, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298635, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 21:37:20.515: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:37:20.558: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4585" for this suite. STEP: Destroying namespace "webhook-4585-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.240 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":96,"skipped":1581,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:37:20.723: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2767 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-2767 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2767 May 28 21:37:20.843: INFO: Found 0 stateful pods, waiting for 1 May 28 21:37:30.848: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 28 21:37:30.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2767 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 28 21:37:31.152: INFO: stderr: "I0528 21:37:31.001759 1563 log.go:172] (0xc00002b130) (0xc000b720a0) Create stream\nI0528 21:37:31.001818 1563 log.go:172] (0xc00002b130) (0xc000b720a0) Stream added, broadcasting: 1\nI0528 21:37:31.005285 1563 log.go:172] (0xc00002b130) Reply frame received for 1\nI0528 21:37:31.005352 1563 log.go:172] (0xc00002b130) (0xc0005f46e0) Create stream\nI0528 21:37:31.005385 1563 log.go:172] (0xc00002b130) (0xc0005f46e0) Stream added, broadcasting: 3\nI0528 21:37:31.006319 1563 log.go:172] (0xc00002b130) Reply frame received for 3\nI0528 21:37:31.006355 1563 log.go:172] (0xc00002b130) (0xc00063bae0) Create stream\nI0528 21:37:31.006371 1563 log.go:172] (0xc00002b130) (0xc00063bae0) Stream added, broadcasting: 5\nI0528 21:37:31.007273 1563 log.go:172] (0xc00002b130) Reply frame received for 5\nI0528 21:37:31.099418 1563 log.go:172] (0xc00002b130) Data frame received for 5\nI0528 21:37:31.099445 1563 log.go:172] (0xc00063bae0) (5) Data frame handling\nI0528 
21:37:31.099460 1563 log.go:172] (0xc00063bae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0528 21:37:31.142847 1563 log.go:172] (0xc00002b130) Data frame received for 3\nI0528 21:37:31.142891 1563 log.go:172] (0xc0005f46e0) (3) Data frame handling\nI0528 21:37:31.143038 1563 log.go:172] (0xc0005f46e0) (3) Data frame sent\nI0528 21:37:31.143065 1563 log.go:172] (0xc00002b130) Data frame received for 3\nI0528 21:37:31.143083 1563 log.go:172] (0xc0005f46e0) (3) Data frame handling\nI0528 21:37:31.143514 1563 log.go:172] (0xc00002b130) Data frame received for 5\nI0528 21:37:31.143529 1563 log.go:172] (0xc00063bae0) (5) Data frame handling\nI0528 21:37:31.145414 1563 log.go:172] (0xc00002b130) Data frame received for 1\nI0528 21:37:31.145437 1563 log.go:172] (0xc000b720a0) (1) Data frame handling\nI0528 21:37:31.145457 1563 log.go:172] (0xc000b720a0) (1) Data frame sent\nI0528 21:37:31.145472 1563 log.go:172] (0xc00002b130) (0xc000b720a0) Stream removed, broadcasting: 1\nI0528 21:37:31.145490 1563 log.go:172] (0xc00002b130) Go away received\nI0528 21:37:31.145984 1563 log.go:172] (0xc00002b130) (0xc000b720a0) Stream removed, broadcasting: 1\nI0528 21:37:31.146027 1563 log.go:172] (0xc00002b130) (0xc0005f46e0) Stream removed, broadcasting: 3\nI0528 21:37:31.146054 1563 log.go:172] (0xc00002b130) (0xc00063bae0) Stream removed, broadcasting: 5\n" May 28 21:37:31.152: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 28 21:37:31.152: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 28 21:37:31.157: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 28 21:37:41.162: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 28 21:37:41.162: INFO: Waiting for statefulset status.replicas updated to 0 May 28 21:37:41.178: INFO: POD NODE PHASE GRACE CONDITIONS May 28 21:37:41.178: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:31 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:20 +0000 UTC }] May 28 21:37:41.178: INFO: May 28 21:37:41.178: INFO: StatefulSet ss has not reached scale 3, at 1 May 28 21:37:42.216: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.993654507s May 28 21:37:43.275: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.956283798s May 28 21:37:44.278: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.896977048s May 28 21:37:45.283: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.893465043s May 28 21:37:46.287: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.888893267s May 28 21:37:47.292: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.885040623s May 28 21:37:48.311: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.880258727s May 28 21:37:49.316: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.860843718s May 28 21:37:50.322: INFO: Verifying statefulset ss doesn't scale past 3 for another 855.74103ms STEP: Scaling up stateful set ss to 3 replicas and 
waiting until all of them will be running in namespace statefulset-2767 May 28 21:37:51.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2767 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 28 21:37:51.569: INFO: stderr: "I0528 21:37:51.465059 1584 log.go:172] (0xc00009adc0) (0xc0006d1c20) Create stream\nI0528 21:37:51.465269 1584 log.go:172] (0xc00009adc0) (0xc0006d1c20) Stream added, broadcasting: 1\nI0528 21:37:51.468162 1584 log.go:172] (0xc00009adc0) Reply frame received for 1\nI0528 21:37:51.468203 1584 log.go:172] (0xc00009adc0) (0xc0006d1e00) Create stream\nI0528 21:37:51.468216 1584 log.go:172] (0xc00009adc0) (0xc0006d1e00) Stream added, broadcasting: 3\nI0528 21:37:51.469003 1584 log.go:172] (0xc00009adc0) Reply frame received for 3\nI0528 21:37:51.469051 1584 log.go:172] (0xc00009adc0) (0xc000b14000) Create stream\nI0528 21:37:51.469062 1584 log.go:172] (0xc00009adc0) (0xc000b14000) Stream added, broadcasting: 5\nI0528 21:37:51.470112 1584 log.go:172] (0xc00009adc0) Reply frame received for 5\nI0528 21:37:51.562672 1584 log.go:172] (0xc00009adc0) Data frame received for 5\nI0528 21:37:51.562702 1584 log.go:172] (0xc000b14000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0528 21:37:51.562755 1584 log.go:172] (0xc00009adc0) Data frame received for 3\nI0528 21:37:51.562785 1584 log.go:172] (0xc0006d1e00) (3) Data frame handling\nI0528 21:37:51.562815 1584 log.go:172] (0xc0006d1e00) (3) Data frame sent\nI0528 21:37:51.562827 1584 log.go:172] (0xc00009adc0) Data frame received for 3\nI0528 21:37:51.562837 1584 log.go:172] (0xc0006d1e00) (3) Data frame handling\nI0528 21:37:51.562876 1584 log.go:172] (0xc000b14000) (5) Data frame sent\nI0528 21:37:51.562897 1584 log.go:172] (0xc00009adc0) Data frame received for 5\nI0528 21:37:51.562907 1584 log.go:172] (0xc000b14000) (5) Data frame handling\nI0528 21:37:51.564172 1584 log.go:172] (0xc00009adc0) Data frame received for 1\nI0528 21:37:51.564185 1584 log.go:172] (0xc0006d1c20) (1) Data frame handling\nI0528 21:37:51.564192 1584 log.go:172] (0xc0006d1c20) (1) Data frame sent\nI0528 21:37:51.564204 1584 log.go:172] (0xc00009adc0) (0xc0006d1c20) Stream removed, broadcasting: 1\nI0528 21:37:51.564229 1584 log.go:172] (0xc00009adc0) Go away received\nI0528 21:37:51.564444 1584 log.go:172] (0xc00009adc0) (0xc0006d1c20) Stream removed, broadcasting: 1\nI0528 21:37:51.564457 1584 log.go:172] (0xc00009adc0) (0xc0006d1e00) Stream removed, broadcasting: 3\nI0528 21:37:51.564465 1584 log.go:172] (0xc00009adc0) (0xc000b14000) Stream removed, broadcasting: 5\n" May 28 21:37:51.569: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 28 21:37:51.569: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 28 21:37:51.569: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2767 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 28 21:37:51.808: INFO: stderr: "I0528 21:37:51.712982 1605 log.go:172] (0xc00098b340) (0xc0009985a0) Create stream\nI0528 21:37:51.713053 1605 log.go:172] (0xc00098b340) (0xc0009985a0) Stream added, broadcasting: 1\nI0528 21:37:51.718525 1605 log.go:172] (0xc00098b340) Reply frame received for 1\nI0528 21:37:51.718572 1605 log.go:172] (0xc00098b340) (0xc00081dc20) Create stream\nI0528 21:37:51.718582 1605 
log.go:172] (0xc00098b340) (0xc00081dc20) Stream added, broadcasting: 3\nI0528 21:37:51.719509 1605 log.go:172] (0xc00098b340) Reply frame received for 3\nI0528 21:37:51.719545 1605 log.go:172] (0xc00098b340) (0xc0006ba820) Create stream\nI0528 21:37:51.719557 1605 log.go:172] (0xc00098b340) (0xc0006ba820) Stream added, broadcasting: 5\nI0528 21:37:51.720304 1605 log.go:172] (0xc00098b340) Reply frame received for 5\nI0528 21:37:51.793824 1605 log.go:172] (0xc00098b340) Data frame received for 5\nI0528 21:37:51.793866 1605 log.go:172] (0xc0006ba820) (5) Data frame handling\nI0528 21:37:51.793893 1605 log.go:172] (0xc0006ba820) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0528 21:37:51.801036 1605 log.go:172] (0xc00098b340) Data frame received for 3\nI0528 21:37:51.801085 1605 log.go:172] (0xc00081dc20) (3) Data frame handling\nI0528 21:37:51.801212 1605 log.go:172] (0xc00081dc20) (3) Data frame sent\nI0528 21:37:51.801533 1605 log.go:172] (0xc00098b340) Data frame received for 5\nI0528 21:37:51.801561 1605 log.go:172] (0xc0006ba820) (5) Data frame handling\nI0528 21:37:51.801687 1605 log.go:172] (0xc0006ba820) (5) Data frame sent\nI0528 21:37:51.801698 1605 log.go:172] (0xc00098b340) Data frame received for 5\nI0528 21:37:51.801703 1605 log.go:172] (0xc0006ba820) (5) Data frame handling\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0528 21:37:51.801717 1605 log.go:172] (0xc0006ba820) (5) Data frame sent\nI0528 21:37:51.801820 1605 log.go:172] (0xc00098b340) Data frame received for 5\nI0528 21:37:51.801829 1605 log.go:172] (0xc0006ba820) (5) Data frame handling\nI0528 21:37:51.801866 1605 log.go:172] (0xc00098b340) Data frame received for 3\nI0528 21:37:51.801886 1605 log.go:172] (0xc00081dc20) (3) Data frame handling\nI0528 21:37:51.803516 1605 log.go:172] (0xc00098b340) Data frame received for 1\nI0528 21:37:51.803528 1605 log.go:172] (0xc0009985a0) (1) Data frame handling\nI0528 21:37:51.803534 1605 log.go:172] (0xc0009985a0) (1) Data frame sent\nI0528 21:37:51.803608 1605 log.go:172] (0xc00098b340) (0xc0009985a0) Stream removed, broadcasting: 1\nI0528 21:37:51.803645 1605 log.go:172] (0xc00098b340) Go away received\nI0528 21:37:51.803873 1605 log.go:172] (0xc00098b340) (0xc0009985a0) Stream removed, broadcasting: 1\nI0528 21:37:51.803886 1605 log.go:172] (0xc00098b340) (0xc00081dc20) Stream removed, broadcasting: 3\nI0528 21:37:51.803892 1605 log.go:172] (0xc00098b340) (0xc0006ba820) Stream removed, broadcasting: 5\n" May 28 21:37:51.808: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 28 21:37:51.808: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 28 21:37:51.808: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2767 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 28 21:37:52.043: INFO: stderr: "I0528 21:37:51.961861 1628 log.go:172] (0xc0009866e0) (0xc000a06000) Create stream\nI0528 21:37:51.961933 1628 log.go:172] (0xc0009866e0) (0xc000a06000) Stream added, broadcasting: 1\nI0528 21:37:51.964621 1628 log.go:172] (0xc0009866e0) Reply frame received for 1\nI0528 21:37:51.964668 1628 log.go:172] (0xc0009866e0) (0xc00067da40) Create stream\nI0528 21:37:51.964684 1628 log.go:172] (0xc0009866e0) (0xc00067da40) Stream added, broadcasting: 3\nI0528 21:37:51.965766 1628 log.go:172] (0xc0009866e0) Reply frame received for 
3\nI0528 21:37:51.965816 1628 log.go:172] (0xc0009866e0) (0xc00067dc20) Create stream\nI0528 21:37:51.965831 1628 log.go:172] (0xc0009866e0) (0xc00067dc20) Stream added, broadcasting: 5\nI0528 21:37:51.966730 1628 log.go:172] (0xc0009866e0) Reply frame received for 5\nI0528 21:37:52.034422 1628 log.go:172] (0xc0009866e0) Data frame received for 3\nI0528 21:37:52.034473 1628 log.go:172] (0xc00067da40) (3) Data frame handling\nI0528 21:37:52.034522 1628 log.go:172] (0xc0009866e0) Data frame received for 5\nI0528 21:37:52.034576 1628 log.go:172] (0xc00067dc20) (5) Data frame handling\nI0528 21:37:52.034592 1628 log.go:172] (0xc00067dc20) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0528 21:37:52.034613 1628 log.go:172] (0xc00067da40) (3) Data frame sent\nI0528 21:37:52.034646 1628 log.go:172] (0xc0009866e0) Data frame received for 3\nI0528 21:37:52.034658 1628 log.go:172] (0xc00067da40) (3) Data frame handling\nI0528 21:37:52.034679 1628 log.go:172] (0xc0009866e0) Data frame received for 5\nI0528 21:37:52.034698 1628 log.go:172] (0xc00067dc20) (5) Data frame handling\nI0528 21:37:52.036491 1628 log.go:172] (0xc0009866e0) Data frame received for 1\nI0528 21:37:52.036521 1628 log.go:172] (0xc000a06000) (1) Data frame handling\nI0528 21:37:52.036542 1628 log.go:172] (0xc000a06000) (1) Data frame sent\nI0528 21:37:52.036564 1628 log.go:172] (0xc0009866e0) (0xc000a06000) Stream removed, broadcasting: 1\nI0528 21:37:52.036646 1628 log.go:172] (0xc0009866e0) Go away received\nI0528 21:37:52.037072 1628 log.go:172] (0xc0009866e0) (0xc000a06000) Stream removed, broadcasting: 1\nI0528 21:37:52.037094 1628 log.go:172] (0xc0009866e0) (0xc00067da40) Stream removed, broadcasting: 3\nI0528 21:37:52.037106 1628 log.go:172] (0xc0009866e0) (0xc00067dc20) Stream removed, broadcasting: 5\n" May 28 21:37:52.043: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 28 21:37:52.043: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 28 21:37:52.047: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 28 21:37:52.047: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 28 21:37:52.047: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 28 21:37:52.050: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2767 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 28 21:37:52.292: INFO: stderr: "I0528 21:37:52.217348 1650 log.go:172] (0xc000690a50) (0xc0005d6000) Create stream\nI0528 21:37:52.217405 1650 log.go:172] (0xc000690a50) (0xc0005d6000) Stream added, broadcasting: 1\nI0528 21:37:52.220443 1650 log.go:172] (0xc000690a50) Reply frame received for 1\nI0528 21:37:52.220489 1650 log.go:172] (0xc000690a50) (0xc0006fda40) Create stream\nI0528 21:37:52.220504 1650 log.go:172] (0xc000690a50) (0xc0006fda40) Stream added, broadcasting: 3\nI0528 21:37:52.221980 1650 log.go:172] (0xc000690a50) Reply frame received for 3\nI0528 21:37:52.222009 1650 log.go:172] (0xc000690a50) (0xc0005d6140) Create stream\nI0528 21:37:52.222020 1650 log.go:172] (0xc000690a50) (0xc0005d6140) Stream added, broadcasting: 5\nI0528 21:37:52.222971 1650 log.go:172] 
(0xc000690a50) Reply frame received for 5\nI0528 21:37:52.284285 1650 log.go:172] (0xc000690a50) Data frame received for 5\nI0528 21:37:52.284324 1650 log.go:172] (0xc0005d6140) (5) Data frame handling\nI0528 21:37:52.284348 1650 log.go:172] (0xc0005d6140) (5) Data frame sent\nI0528 21:37:52.284359 1650 log.go:172] (0xc000690a50) Data frame received for 5\nI0528 21:37:52.284369 1650 log.go:172] (0xc0005d6140) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0528 21:37:52.284398 1650 log.go:172] (0xc000690a50) Data frame received for 3\nI0528 21:37:52.284415 1650 log.go:172] (0xc0006fda40) (3) Data frame handling\nI0528 21:37:52.284446 1650 log.go:172] (0xc0006fda40) (3) Data frame sent\nI0528 21:37:52.284462 1650 log.go:172] (0xc000690a50) Data frame received for 3\nI0528 21:37:52.284473 1650 log.go:172] (0xc0006fda40) (3) Data frame handling\nI0528 21:37:52.285499 1650 log.go:172] (0xc000690a50) Data frame received for 1\nI0528 21:37:52.285530 1650 log.go:172] (0xc0005d6000) (1) Data frame handling\nI0528 21:37:52.285555 1650 log.go:172] (0xc0005d6000) (1) Data frame sent\nI0528 21:37:52.285598 1650 log.go:172] (0xc000690a50) (0xc0005d6000) Stream removed, broadcasting: 1\nI0528 21:37:52.285629 1650 log.go:172] (0xc000690a50) Go away received\nI0528 21:37:52.285942 1650 log.go:172] (0xc000690a50) (0xc0005d6000) Stream removed, broadcasting: 1\nI0528 21:37:52.285969 1650 log.go:172] (0xc000690a50) (0xc0006fda40) Stream removed, broadcasting: 3\nI0528 21:37:52.285978 1650 log.go:172] (0xc000690a50) (0xc0005d6140) Stream removed, broadcasting: 5\n" May 28 21:37:52.292: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 28 21:37:52.292: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 28 21:37:52.292: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2767 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 28 21:37:52.514: INFO: stderr: "I0528 21:37:52.417078 1671 log.go:172] (0xc00096e790) (0xc000a0e140) Create stream\nI0528 21:37:52.417304 1671 log.go:172] (0xc00096e790) (0xc000a0e140) Stream added, broadcasting: 1\nI0528 21:37:52.420066 1671 log.go:172] (0xc00096e790) Reply frame received for 1\nI0528 21:37:52.420122 1671 log.go:172] (0xc00096e790) (0xc000207540) Create stream\nI0528 21:37:52.420148 1671 log.go:172] (0xc00096e790) (0xc000207540) Stream added, broadcasting: 3\nI0528 21:37:52.421925 1671 log.go:172] (0xc00096e790) Reply frame received for 3\nI0528 21:37:52.421964 1671 log.go:172] (0xc00096e790) (0xc00069db80) Create stream\nI0528 21:37:52.421978 1671 log.go:172] (0xc00096e790) (0xc00069db80) Stream added, broadcasting: 5\nI0528 21:37:52.423147 1671 log.go:172] (0xc00096e790) Reply frame received for 5\nI0528 21:37:52.482469 1671 log.go:172] (0xc00096e790) Data frame received for 5\nI0528 21:37:52.482490 1671 log.go:172] (0xc00069db80) (5) Data frame handling\nI0528 21:37:52.482503 1671 log.go:172] (0xc00069db80) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0528 21:37:52.507132 1671 log.go:172] (0xc00096e790) Data frame received for 3\nI0528 21:37:52.507151 1671 log.go:172] (0xc000207540) (3) Data frame handling\nI0528 21:37:52.507161 1671 log.go:172] (0xc000207540) (3) Data frame sent\nI0528 21:37:52.507168 1671 log.go:172] (0xc00096e790) Data frame received for 3\nI0528 21:37:52.507173 1671 log.go:172] 
(0xc000207540) (3) Data frame handling\nI0528 21:37:52.507212 1671 log.go:172] (0xc00096e790) Data frame received for 5\nI0528 21:37:52.507249 1671 log.go:172] (0xc00069db80) (5) Data frame handling\nI0528 21:37:52.508970 1671 log.go:172] (0xc00096e790) Data frame received for 1\nI0528 21:37:52.509006 1671 log.go:172] (0xc000a0e140) (1) Data frame handling\nI0528 21:37:52.509036 1671 log.go:172] (0xc000a0e140) (1) Data frame sent\nI0528 21:37:52.509061 1671 log.go:172] (0xc00096e790) (0xc000a0e140) Stream removed, broadcasting: 1\nI0528 21:37:52.509090 1671 log.go:172] (0xc00096e790) Go away received\nI0528 21:37:52.509889 1671 log.go:172] (0xc00096e790) (0xc000a0e140) Stream removed, broadcasting: 1\nI0528 21:37:52.509926 1671 log.go:172] (0xc00096e790) (0xc000207540) Stream removed, broadcasting: 3\nI0528 21:37:52.509951 1671 log.go:172] (0xc00096e790) (0xc00069db80) Stream removed, broadcasting: 5\n" May 28 21:37:52.514: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 28 21:37:52.514: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 28 21:37:52.514: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2767 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 28 21:37:52.760: INFO: stderr: "I0528 21:37:52.656782 1692 log.go:172] (0xc0009f4790) (0xc000626000) Create stream\nI0528 21:37:52.656843 1692 log.go:172] (0xc0009f4790) (0xc000626000) Stream added, broadcasting: 1\nI0528 21:37:52.660002 1692 log.go:172] (0xc0009f4790) Reply frame received for 1\nI0528 21:37:52.660046 1692 log.go:172] (0xc0009f4790) (0xc000626140) Create stream\nI0528 21:37:52.660063 1692 log.go:172] (0xc0009f4790) (0xc000626140) Stream added, broadcasting: 3\nI0528 21:37:52.661395 1692 log.go:172] (0xc0009f4790) Reply frame received for 3\nI0528 21:37:52.661439 1692 log.go:172] (0xc0009f4790) (0xc0005d3ae0) Create stream\nI0528 21:37:52.661453 1692 log.go:172] (0xc0009f4790) (0xc0005d3ae0) Stream added, broadcasting: 5\nI0528 21:37:52.662295 1692 log.go:172] (0xc0009f4790) Reply frame received for 5\nI0528 21:37:52.722256 1692 log.go:172] (0xc0009f4790) Data frame received for 5\nI0528 21:37:52.722279 1692 log.go:172] (0xc0005d3ae0) (5) Data frame handling\nI0528 21:37:52.722292 1692 log.go:172] (0xc0005d3ae0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0528 21:37:52.752831 1692 log.go:172] (0xc0009f4790) Data frame received for 5\nI0528 21:37:52.752875 1692 log.go:172] (0xc0009f4790) Data frame received for 3\nI0528 21:37:52.752927 1692 log.go:172] (0xc000626140) (3) Data frame handling\nI0528 21:37:52.753060 1692 log.go:172] (0xc000626140) (3) Data frame sent\nI0528 21:37:52.753329 1692 log.go:172] (0xc0009f4790) Data frame received for 3\nI0528 21:37:52.753353 1692 log.go:172] (0xc000626140) (3) Data frame handling\nI0528 21:37:52.753382 1692 log.go:172] (0xc0005d3ae0) (5) Data frame handling\nI0528 21:37:52.754813 1692 log.go:172] (0xc0009f4790) Data frame received for 1\nI0528 21:37:52.754825 1692 log.go:172] (0xc000626000) (1) Data frame handling\nI0528 21:37:52.754840 1692 log.go:172] (0xc000626000) (1) Data frame sent\nI0528 21:37:52.754855 1692 log.go:172] (0xc0009f4790) (0xc000626000) Stream removed, broadcasting: 1\nI0528 21:37:52.755105 1692 log.go:172] (0xc0009f4790) (0xc000626000) Stream removed, broadcasting: 1\nI0528 21:37:52.755118 1692 log.go:172] (0xc0009f4790) 
(0xc000626140) Stream removed, broadcasting: 3\nI0528 21:37:52.755241 1692 log.go:172] (0xc0009f4790) (0xc0005d3ae0) Stream removed, broadcasting: 5\nI0528 21:37:52.755329 1692 log.go:172] (0xc0009f4790) Go away received\n" May 28 21:37:52.761: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 28 21:37:52.761: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 28 21:37:52.761: INFO: Waiting for statefulset status.replicas updated to 0 May 28 21:37:52.779: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 3 May 28 21:38:02.787: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 28 21:38:02.787: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 28 21:38:02.787: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 28 21:38:02.826: INFO: POD NODE PHASE GRACE CONDITIONS May 28 21:38:02.826: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:20 +0000 UTC }] May 28 21:38:02.826: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC }] May 28 21:38:02.826: INFO: ss-2 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC }] May 28 21:38:02.826: INFO: May 28 21:38:02.827: INFO: StatefulSet ss has not reached scale 0, at 3 May 28 21:38:03.869: INFO: POD NODE PHASE GRACE CONDITIONS May 28 21:38:03.869: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:20 +0000 UTC }] May 28 21:38:03.869: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 
2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC }] May 28 21:38:03.869: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC }] May 28 21:38:03.869: INFO: May 28 21:38:03.869: INFO: StatefulSet ss has not reached scale 0, at 3 May 28 21:38:04.873: INFO: POD NODE PHASE GRACE CONDITIONS May 28 21:38:04.873: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:20 +0000 UTC }] May 28 21:38:04.873: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC }] May 28 21:38:04.873: INFO: ss-2 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC }] May 28 21:38:04.873: INFO: May 28 21:38:04.873: INFO: StatefulSet ss has not reached scale 0, at 3 May 28 21:38:05.877: INFO: POD NODE PHASE GRACE CONDITIONS May 28 21:38:05.877: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:20 +0000 UTC }] May 28 21:38:05.878: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC }] May 28 21:38:05.878: INFO: ss-2 
jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC }] May 28 21:38:05.878: INFO: May 28 21:38:05.878: INFO: StatefulSet ss has not reached scale 0, at 3 May 28 21:38:06.882: INFO: POD NODE PHASE GRACE CONDITIONS May 28 21:38:06.882: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:20 +0000 UTC }] May 28 21:38:06.883: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC }] May 28 21:38:06.883: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC }] May 28 21:38:06.883: INFO: May 28 21:38:06.883: INFO: StatefulSet ss has not reached scale 0, at 3 May 28 21:38:07.888: INFO: POD NODE PHASE GRACE CONDITIONS May 28 21:38:07.888: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:20 +0000 UTC }] May 28 21:38:07.888: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC }] May 28 21:38:07.888: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:53 +0000 UTC ContainersNotReady containers 
with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC }] May 28 21:38:07.888: INFO: May 28 21:38:07.888: INFO: StatefulSet ss has not reached scale 0, at 3 May 28 21:38:08.893: INFO: POD NODE PHASE GRACE CONDITIONS May 28 21:38:08.894: INFO: ss-0 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:20 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:20 +0000 UTC }] May 28 21:38:08.894: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:52 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC }] May 28 21:38:08.894: INFO: ss-2 jerma-worker Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:53 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-28 21:37:41 +0000 UTC }] May 28 21:38:08.894: INFO: May 28 21:38:08.894: INFO: StatefulSet ss has not reached scale 0, at 3 May 28 21:38:09.897: INFO: Verifying statefulset ss doesn't scale past 0 for another 2.900302862s May 28 21:38:10.901: INFO: Verifying statefulset ss doesn't scale past 0 for another 1.896468835s May 28 21:38:11.906: INFO: Verifying statefulset ss doesn't scale past 0 for another 892.659525ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-2767 May 28 21:38:12.909: INFO: Scaling statefulset ss to 0 May 28 21:38:12.916: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 28 21:38:12.919: INFO: Deleting all statefulset in ns statefulset-2767 May 28 21:38:12.922: INFO: Scaling statefulset ss to 0 May 28 21:38:12.929: INFO: Waiting for statefulset status.replicas updated to 0 May 28 21:38:12.931: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:38:12.955: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2767" for this suite.
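Why "burst scaling ... even with unhealthy pods" is possible at all: burst behavior in StatefulSets corresponds to PodManagementPolicy: Parallel, under which scale-up and scale-down act on all ordinals at once instead of waiting for each pod to become Ready, and the `mv index.html` exec calls above deliberately break each pod's HTTP readiness probe to manufacture the "unhealthy" condition. Below is a sketch of such a StatefulSet; the probe and image details are patterned on the apache pods in the trace rather than copied from the suite, and Probe's embedded Handler field matches the v1.17-era API (later renamed ProbeHandler):

```go
package main

import (
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
	"sigs.k8s.io/yaml"
)

func main() {
	replicas := int32(3)
	labels := map[string]string{"app": "ss"} // placeholder selector labels
	ss := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // the headless service created in the BeforeEach above
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			// Parallel is what makes "burst" scaling legal: pods are created and
			// deleted all at once instead of one ordinal at a time, even while
			// sibling pods are unready.
			PodManagementPolicy: appsv1.ParallelPodManagement,
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "webserver",
						Image: "docker.io/library/httpd:2.4.38-alpine",
						// Moving index.html out of htdocs (the kubectl exec trick
						// above) turns this probe red, flipping the pod to Ready=false.
						ReadinessProbe: &corev1.Probe{
							Handler: corev1.Handler{
								HTTPGet: &corev1.HTTPGetAction{Path: "/index.html", Port: intstr.FromInt(80)},
							},
						},
					}},
				},
			},
		},
	}
	out, _ := yaml.Marshal(ss) // error ignored for brevity in this sketch
	fmt.Print(string(out))
}
```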
• [SLOW TEST:52.238 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":97,"skipped":1598,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:38:12.962: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9735.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9735.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9735.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9735.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9735.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9735.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9735.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9735.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9735.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9735.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9735.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 9.46.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.46.9_udp@PTR;check="$$(dig +tcp +noall +answer +search 9.46.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.46.9_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9735.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9735.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9735.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9735.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9735.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9735.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9735.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9735.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9735.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9735.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9735.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 9.46.96.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.96.46.9_udp@PTR;check="$$(dig +tcp +noall +answer +search 9.46.96.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.96.46.9_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 28 21:38:19.152: INFO: Unable to read wheezy_udp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:19.156: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:19.160: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:19.163: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:19.186: INFO: Unable to read jessie_udp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:19.190: INFO: Unable to read jessie_tcp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:19.193: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:19.196: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:19.221: INFO: Lookups using dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1 failed for: [wheezy_udp@dns-test-service.dns-9735.svc.cluster.local wheezy_tcp@dns-test-service.dns-9735.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local jessie_udp@dns-test-service.dns-9735.svc.cluster.local jessie_tcp@dns-test-service.dns-9735.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local] May 28 21:38:24.226: INFO: Unable to read wheezy_udp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:24.230: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods 
dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:24.234: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:24.236: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:24.260: INFO: Unable to read jessie_udp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:24.264: INFO: Unable to read jessie_tcp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:24.267: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:24.270: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:24.289: INFO: Lookups using dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1 failed for: [wheezy_udp@dns-test-service.dns-9735.svc.cluster.local wheezy_tcp@dns-test-service.dns-9735.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local jessie_udp@dns-test-service.dns-9735.svc.cluster.local jessie_tcp@dns-test-service.dns-9735.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local] May 28 21:38:29.226: INFO: Unable to read wheezy_udp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:29.230: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:29.232: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:29.235: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:29.263: INFO: Unable to read jessie_udp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the 
server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:29.266: INFO: Unable to read jessie_tcp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:29.268: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:29.270: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:29.284: INFO: Lookups using dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1 failed for: [wheezy_udp@dns-test-service.dns-9735.svc.cluster.local wheezy_tcp@dns-test-service.dns-9735.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local jessie_udp@dns-test-service.dns-9735.svc.cluster.local jessie_tcp@dns-test-service.dns-9735.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local] May 28 21:38:34.226: INFO: Unable to read wheezy_udp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:34.229: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:34.232: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:34.235: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:34.256: INFO: Unable to read jessie_udp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:34.262: INFO: Unable to read jessie_tcp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:34.266: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:34.268: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod 
dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:34.283: INFO: Lookups using dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1 failed for: [wheezy_udp@dns-test-service.dns-9735.svc.cluster.local wheezy_tcp@dns-test-service.dns-9735.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local jessie_udp@dns-test-service.dns-9735.svc.cluster.local jessie_tcp@dns-test-service.dns-9735.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local] May 28 21:38:39.225: INFO: Unable to read wheezy_udp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:39.227: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:39.229: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:39.231: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:39.244: INFO: Unable to read jessie_udp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:39.247: INFO: Unable to read jessie_tcp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:39.249: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:39.251: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:39.270: INFO: Lookups using dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1 failed for: [wheezy_udp@dns-test-service.dns-9735.svc.cluster.local wheezy_tcp@dns-test-service.dns-9735.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local jessie_udp@dns-test-service.dns-9735.svc.cluster.local jessie_tcp@dns-test-service.dns-9735.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local] May 28 
21:38:44.225: INFO: Unable to read wheezy_udp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:44.228: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:44.231: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:44.234: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:44.256: INFO: Unable to read jessie_udp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:44.259: INFO: Unable to read jessie_tcp@dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:44.261: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:44.263: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local from pod dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1: the server could not find the requested resource (get pods dns-test-3f650eab-d410-4218-987f-9877efed70d1) May 28 21:38:44.321: INFO: Lookups using dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1 failed for: [wheezy_udp@dns-test-service.dns-9735.svc.cluster.local wheezy_tcp@dns-test-service.dns-9735.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local jessie_udp@dns-test-service.dns-9735.svc.cluster.local jessie_tcp@dns-test-service.dns-9735.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9735.svc.cluster.local] May 28 21:38:49.277: INFO: DNS probes using dns-9735/dns-test-3f650eab-d410-4218-987f-9877efed70d1 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:38:49.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9735" for this suite. 
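The dig loops above run inside purpose-built prober pods, but the same records can be spot-checked from any throwaway pod. A minimal sketch, assuming a service named dns-test-service in a hypothetical namespace demo (busybox's nslookup covers the A record; the SRV and PTR lookups need an image that ships dig, as the test images do):

# Resolve the service's cluster DNS name from inside the cluster, then read the result.
kubectl -n demo run dns-check --image=docker.io/library/busybox:1.29 \
  --restart=Never -- nslookup dns-test-service.demo.svc.cluster.local
sleep 5 && kubectl -n demo logs dns-check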
• [SLOW TEST:36.621 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":98,"skipped":1610,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:38:49.583: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: executing a command with run --rm and attach with stdin May 28 21:38:50.081: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-9873 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed'' May 28 21:38:53.752: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. 
Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0528 21:38:53.674167 1715 log.go:172] (0xc0004e9080) (0xc0005741e0) Create stream\nI0528 21:38:53.674227 1715 log.go:172] (0xc0004e9080) (0xc0005741e0) Stream added, broadcasting: 1\nI0528 21:38:53.677670 1715 log.go:172] (0xc0004e9080) Reply frame received for 1\nI0528 21:38:53.677741 1715 log.go:172] (0xc0004e9080) (0xc000772000) Create stream\nI0528 21:38:53.677768 1715 log.go:172] (0xc0004e9080) (0xc000772000) Stream added, broadcasting: 3\nI0528 21:38:53.678927 1715 log.go:172] (0xc0004e9080) Reply frame received for 3\nI0528 21:38:53.678968 1715 log.go:172] (0xc0004e9080) (0xc000574280) Create stream\nI0528 21:38:53.678982 1715 log.go:172] (0xc0004e9080) (0xc000574280) Stream added, broadcasting: 5\nI0528 21:38:53.680167 1715 log.go:172] (0xc0004e9080) Reply frame received for 5\nI0528 21:38:53.680275 1715 log.go:172] (0xc0004e9080) (0xc000574320) Create stream\nI0528 21:38:53.680296 1715 log.go:172] (0xc0004e9080) (0xc000574320) Stream added, broadcasting: 7\nI0528 21:38:53.681278 1715 log.go:172] (0xc0004e9080) Reply frame received for 7\nI0528 21:38:53.681450 1715 log.go:172] (0xc000772000) (3) Writing data frame\nI0528 21:38:53.681544 1715 log.go:172] (0xc000772000) (3) Writing data frame\nI0528 21:38:53.682285 1715 log.go:172] (0xc0004e9080) Data frame received for 5\nI0528 21:38:53.682300 1715 log.go:172] (0xc000574280) (5) Data frame handling\nI0528 21:38:53.682308 1715 log.go:172] (0xc000574280) (5) Data frame sent\nI0528 21:38:53.694266 1715 log.go:172] (0xc0004e9080) Data frame received for 5\nI0528 21:38:53.694294 1715 log.go:172] (0xc000574280) (5) Data frame handling\nI0528 21:38:53.694315 1715 log.go:172] (0xc000574280) (5) Data frame sent\nI0528 21:38:53.731058 1715 log.go:172] (0xc0004e9080) Data frame received for 7\nI0528 21:38:53.731087 1715 log.go:172] (0xc000574320) (7) Data frame handling\nI0528 21:38:53.731108 1715 log.go:172] (0xc0004e9080) Data frame received for 5\nI0528 21:38:53.731115 1715 log.go:172] (0xc000574280) (5) Data frame handling\nI0528 21:38:53.731407 1715 log.go:172] (0xc0004e9080) Data frame received for 1\nI0528 21:38:53.731422 1715 log.go:172] (0xc0005741e0) (1) Data frame handling\nI0528 21:38:53.731430 1715 log.go:172] (0xc0005741e0) (1) Data frame sent\nI0528 21:38:53.731440 1715 log.go:172] (0xc0004e9080) (0xc0005741e0) Stream removed, broadcasting: 1\nI0528 21:38:53.731693 1715 log.go:172] (0xc0004e9080) (0xc0005741e0) Stream removed, broadcasting: 1\nI0528 21:38:53.731719 1715 log.go:172] (0xc0004e9080) (0xc000772000) Stream removed, broadcasting: 3\nI0528 21:38:53.731738 1715 log.go:172] (0xc0004e9080) (0xc000574280) Stream removed, broadcasting: 5\nI0528 21:38:53.731834 1715 log.go:172] (0xc0004e9080) Go away received\nI0528 21:38:53.731929 1715 log.go:172] (0xc0004e9080) (0xc000574320) Stream removed, broadcasting: 7\n" May 28 21:38:53.752: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n" STEP: verifying the job e2e-test-rm-busybox-job was deleted [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:38:55.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9873" for this suite. 
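The attach/stdin round trip captured in the stream log above boils down to a single invocation. A minimal sketch against a hypothetical namespace demo (note that --generator is already deprecated in this release and removed in later ones, where kubectl create job is the replacement):

# Pipe data through a one-shot job; --rm deletes the job once the attached pod exits.
echo abcd1234 | kubectl -n demo run e2e-test-rm-busybox-job \
  --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 \
  --restart=OnFailure --attach=true --stdin -- sh -c 'cat && echo "stdin closed"'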
• [SLOW TEST:6.185 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run --rm job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1837 should create a job from an image, then delete the job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":99,"skipped":1635,"failed":0} SSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:38:55.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 28 21:38:56.282: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 28 21:38:58.293: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298736, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298736, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298736, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726298736, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 21:39:01.648: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:39:01.660: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-24-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] 
AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:39:02.888: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2842" for this suite. STEP: Destroying namespace "webhook-2842-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.247 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":100,"skipped":1645,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:39:03.016: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-973ab357-4e92-4b0e-8b79-686c985a3c9d May 28 21:39:03.103: INFO: Pod name my-hostname-basic-973ab357-4e92-4b0e-8b79-686c985a3c9d: Found 0 pods out of 1 May 28 21:39:08.110: INFO: Pod name my-hostname-basic-973ab357-4e92-4b0e-8b79-686c985a3c9d: Found 1 pods out of 1 May 28 21:39:08.110: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-973ab357-4e92-4b0e-8b79-686c985a3c9d" are running May 28 21:39:08.112: INFO: Pod "my-hostname-basic-973ab357-4e92-4b0e-8b79-686c985a3c9d-bbwjq" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-28 21:39:03 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-28 21:39:06 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-28 21:39:06 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-28 21:39:03 +0000 UTC Reason: Message:}]) May 28 21:39:08.112: INFO: Trying to dial the pod May 28 21:39:13.124: INFO: Controller my-hostname-basic-973ab357-4e92-4b0e-8b79-686c985a3c9d: Got expected result from replica 1 [my-hostname-basic-973ab357-4e92-4b0e-8b79-686c985a3c9d-bbwjq]: "my-hostname-basic-973ab357-4e92-4b0e-8b79-686c985a3c9d-bbwjq", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:39:13.124: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3898" for this suite. • [SLOW TEST:10.118 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":101,"skipped":1658,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:39:13.134: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-78cf74a9-4ec0-4909-9bd2-5d29a777231c STEP: Creating a pod to test consume secrets May 28 21:39:13.292: INFO: Waiting up to 5m0s for pod "pod-secrets-dc8e8ecf-c9b7-4cab-8d19-645d0fb5b53c" in namespace "secrets-7607" to be "success or failure" May 28 21:39:13.301: INFO: Pod "pod-secrets-dc8e8ecf-c9b7-4cab-8d19-645d0fb5b53c": Phase="Pending", Reason="", readiness=false. Elapsed: 9.515756ms May 28 21:39:15.306: INFO: Pod "pod-secrets-dc8e8ecf-c9b7-4cab-8d19-645d0fb5b53c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014199759s May 28 21:39:17.310: INFO: Pod "pod-secrets-dc8e8ecf-c9b7-4cab-8d19-645d0fb5b53c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017941403s STEP: Saw pod success May 28 21:39:17.310: INFO: Pod "pod-secrets-dc8e8ecf-c9b7-4cab-8d19-645d0fb5b53c" satisfied condition "success or failure" May 28 21:39:17.312: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-dc8e8ecf-c9b7-4cab-8d19-645d0fb5b53c container secret-volume-test: STEP: delete the pod May 28 21:39:17.350: INFO: Waiting for pod pod-secrets-dc8e8ecf-c9b7-4cab-8d19-645d0fb5b53c to disappear May 28 21:39:17.354: INFO: Pod pod-secrets-dc8e8ecf-c9b7-4cab-8d19-645d0fb5b53c no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:39:17.354: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-7607" for this suite. 
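The defaultMode check above comes down to mounting a secret volume with an explicit file mode and reading the mode back. A minimal sketch, with a hypothetical namespace demo and 0400 standing in for whatever mode the suite chose:

kubectl -n demo create secret generic secret-test --from-literal=data-1=value-1
kubectl -n demo apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    # Print the modes of the mounted secret files, then exit.
    command: ["sh", "-c", "ls -l /etc/secret-volume"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
  volumes:
  - name: secret-volume
    secret:
      secretName: secret-test
      defaultMode: 0400
EOF
sleep 10 && kubectl -n demo logs pod-secrets-demo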
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1665,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:39:17.361: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 28 21:39:25.516: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 28 21:39:25.540: INFO: Pod pod-with-poststart-exec-hook still exists May 28 21:39:27.540: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 28 21:39:27.543: INFO: Pod pod-with-poststart-exec-hook still exists May 28 21:39:29.540: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 28 21:39:29.544: INFO: Pod pod-with-poststart-exec-hook still exists May 28 21:39:31.540: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 28 21:39:31.544: INFO: Pod pod-with-poststart-exec-hook still exists May 28 21:39:33.540: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 28 21:39:33.544: INFO: Pod pod-with-poststart-exec-hook still exists May 28 21:39:35.540: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 28 21:39:35.545: INFO: Pod pod-with-poststart-exec-hook still exists May 28 21:39:37.540: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 28 21:39:37.543: INFO: Pod pod-with-poststart-exec-hook still exists May 28 21:39:39.540: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 28 21:39:39.544: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:39:39.544: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-8338" for this suite. 
• [SLOW TEST:22.211 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":103,"skipped":1696,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:39:39.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 28 21:39:39.633: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-2565' May 28 21:39:39.740: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 28 21:39:39.740: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 28 21:39:41.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-2565' May 28 21:39:42.168: INFO: stderr: "" May 28 21:39:42.168: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:39:42.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2565" for this suite. 
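The deprecation warning in the output names the replacement for the generator form. A minimal sketch of both, with a hypothetical namespace demo:

# Deprecated generator form, as run by this test:
kubectl -n demo run e2e-test-httpd-deployment \
  --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1

# Current replacement; kubectl create deployment labels the pods app=<name>:
kubectl -n demo create deployment e2e-test-httpd-deployment \
  --image=docker.io/library/httpd:2.4.38-alpine
kubectl -n demo get pods -l app=e2e-test-httpd-deployment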
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":104,"skipped":1718,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:39:42.369: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 28 21:39:47.138: INFO: Successfully updated pod "pod-update-02f95303-d5c6-4b59-b9a8-48e6a9507dad" STEP: verifying the updated pod is in kubernetes May 28 21:39:47.162: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:39:47.162: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-232" for this suite. •{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1728,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:39:47.196: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 28 21:39:47.231: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not serverd STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:40:00.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-974" for this suite. 
• [SLOW TEST:13.330 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":106,"skipped":1740,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:40:00.526: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-449393ad-df0a-4664-8a9f-44ec04b09c13 STEP: Creating a pod to test consume secrets May 28 21:40:00.650: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-9abc8fc5-56fb-4783-bbcd-56258ee6c66e" in namespace "projected-2153" to be "success or failure" May 28 21:40:00.662: INFO: Pod "pod-projected-secrets-9abc8fc5-56fb-4783-bbcd-56258ee6c66e": Phase="Pending", Reason="", readiness=false. Elapsed: 11.869828ms May 28 21:40:02.675: INFO: Pod "pod-projected-secrets-9abc8fc5-56fb-4783-bbcd-56258ee6c66e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024400193s May 28 21:40:04.679: INFO: Pod "pod-projected-secrets-9abc8fc5-56fb-4783-bbcd-56258ee6c66e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028825351s STEP: Saw pod success May 28 21:40:04.679: INFO: Pod "pod-projected-secrets-9abc8fc5-56fb-4783-bbcd-56258ee6c66e" satisfied condition "success or failure" May 28 21:40:04.682: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-9abc8fc5-56fb-4783-bbcd-56258ee6c66e container projected-secret-volume-test: STEP: delete the pod May 28 21:40:04.700: INFO: Waiting for pod pod-projected-secrets-9abc8fc5-56fb-4783-bbcd-56258ee6c66e to disappear May 28 21:40:04.718: INFO: Pod pod-projected-secrets-9abc8fc5-56fb-4783-bbcd-56258ee6c66e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:40:04.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2153" for this suite. 
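A projected volume, as consumed above, wraps one or more sources (secrets, configmaps, downward API) under a single mount. A minimal sketch of the secret case, with hypothetical names in a namespace demo:

kubectl -n demo create secret generic projected-secret-test --from-literal=secret-data=value-1
kubectl -n demo apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: docker.io/library/busybox:1.29
    # Read the projected secret file, then exit.
    command: ["sh", "-c", "cat /etc/projected/secret-data"]
    volumeMounts:
    - name: projected
      mountPath: /etc/projected
  volumes:
  - name: projected
    projected:
      sources:
      - secret:
          name: projected-secret-test
EOF
sleep 10 && kubectl -n demo logs pod-projected-secrets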
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1741,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:40:04.726: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:40:11.019: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-658" for this suite. STEP: Destroying namespace "nsdeletetest-4791" for this suite. May 28 21:40:11.045: INFO: Namespace nsdeletetest-4791 was already deleted STEP: Destroying namespace "nsdeletetest-5787" for this suite. • [SLOW TEST:6.323 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":108,"skipped":1762,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:40:11.050: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:40:11.190: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 15.144905ms) May 28 21:40:11.194: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.41528ms) May 28 21:40:11.196: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.459469ms) May 28 21:40:11.199: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.737432ms) May 28 21:40:11.202: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.068742ms) May 28 21:40:11.207: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.923193ms) May 28 21:40:11.211: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.94052ms) May 28 21:40:11.214: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.774789ms) May 28 21:40:11.216: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.137064ms) May 28 21:40:11.219: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.739132ms) May 28 21:40:11.221: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.187606ms) May 28 21:40:11.224: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.291434ms) May 28 21:40:11.226: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.106562ms) May 28 21:40:11.228: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 1.933361ms) May 28 21:40:11.230: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.064486ms) May 28 21:40:11.232: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.145916ms) May 28 21:40:11.235: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.952378ms) May 28 21:40:11.237: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.577513ms) May 28 21:40:11.240: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 2.47075ms) May 28 21:40:11.243: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/
(200; 2.8189ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:40:11.243: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-664" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":109,"skipped":1817,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:40:11.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:40:11.388: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"5d715487-8ccb-48c0-9593-1d5dc2c3a6d5", Controller:(*bool)(0xc0028f681a), BlockOwnerDeletion:(*bool)(0xc0028f681b)}} May 28 21:40:11.445: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"600e667a-79fc-4bc4-bd87-139ed04cde92", Controller:(*bool)(0xc0028b13f2), BlockOwnerDeletion:(*bool)(0xc0028b13f3)}} May 28 21:40:11.459: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"275335e1-397e-4103-8670-940b023e2b09", Controller:(*bool)(0xc002963612), BlockOwnerDeletion:(*bool)(0xc002963613)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:40:16.529: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2706" for this suite. 
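The dependency circle above is built purely from metadata: pod1 is owned by pod3, pod2 by pod1, and pod3 by pod2, each with blockOwnerDeletion set. The references can be inspected directly; a minimal sketch (the gc-2706 namespace is gone once the run finishes, so a hypothetical demo namespace stands in):

# Show each pod next to the owner recorded in its first ownerReference.
kubectl -n demo get pods -o custom-columns='NAME:.metadata.name,OWNER:.metadata.ownerReferences[0].name,BLOCKS:.metadata.ownerReferences[0].blockOwnerDeletion'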
• [SLOW TEST:5.288 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":110,"skipped":1824,"failed":0} SSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:40:16.538: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:40:47.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-4794" for this suite. STEP: Destroying namespace "nsdeletetest-8035" for this suite. May 28 21:40:47.854: INFO: Namespace nsdeletetest-8035 was already deleted STEP: Destroying namespace "nsdeletetest-5031" for this suite. 
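The namespace lifecycle exercised above (create, populate, delete, verify) is a handful of commands. A minimal sketch with a hypothetical namespace name:

kubectl create namespace nsdelete-demo
kubectl -n nsdelete-demo run test-pod --image=docker.io/library/busybox:1.29 \
  --restart=Never -- sleep 600
# Deleting the namespace cascades to everything inside it; the command blocks
# until the namespace object itself is gone.
kubectl delete namespace nsdelete-demo --wait=true
kubectl get namespace nsdelete-demo   # NotFound once deletion completes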
• [SLOW TEST:31.320 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":111,"skipped":1827,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:40:47.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-61aaf3db-2cfa-43bf-96b2-1ebe9142a4bd STEP: Creating a pod to test consume secrets May 28 21:40:47.923: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-869f62f1-e7aa-4d68-8adc-91c18ef85fd7" in namespace "projected-4252" to be "success or failure" May 28 21:40:47.978: INFO: Pod "pod-projected-secrets-869f62f1-e7aa-4d68-8adc-91c18ef85fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 55.708319ms May 28 21:40:49.983: INFO: Pod "pod-projected-secrets-869f62f1-e7aa-4d68-8adc-91c18ef85fd7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060524176s May 28 21:40:51.988: INFO: Pod "pod-projected-secrets-869f62f1-e7aa-4d68-8adc-91c18ef85fd7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065361945s STEP: Saw pod success May 28 21:40:51.988: INFO: Pod "pod-projected-secrets-869f62f1-e7aa-4d68-8adc-91c18ef85fd7" satisfied condition "success or failure" May 28 21:40:51.991: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-869f62f1-e7aa-4d68-8adc-91c18ef85fd7 container secret-volume-test: STEP: delete the pod May 28 21:40:52.029: INFO: Waiting for pod pod-projected-secrets-869f62f1-e7aa-4d68-8adc-91c18ef85fd7 to disappear May 28 21:40:52.062: INFO: Pod pod-projected-secrets-869f62f1-e7aa-4d68-8adc-91c18ef85fd7 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:40:52.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4252" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":112,"skipped":1854,"failed":0} SSSSSSS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:40:52.072: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 28 21:40:52.140: INFO: Waiting up to 5m0s for pod "downward-api-99df52f0-bc18-4ddc-8e55-1c4d15cbfec9" in namespace "downward-api-9531" to be "success or failure" May 28 21:40:52.143: INFO: Pod "downward-api-99df52f0-bc18-4ddc-8e55-1c4d15cbfec9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.122352ms May 28 21:40:54.176: INFO: Pod "downward-api-99df52f0-bc18-4ddc-8e55-1c4d15cbfec9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.035826233s May 28 21:40:56.180: INFO: Pod "downward-api-99df52f0-bc18-4ddc-8e55-1c4d15cbfec9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04010815s STEP: Saw pod success May 28 21:40:56.180: INFO: Pod "downward-api-99df52f0-bc18-4ddc-8e55-1c4d15cbfec9" satisfied condition "success or failure" May 28 21:40:56.224: INFO: Trying to get logs from node jerma-worker pod downward-api-99df52f0-bc18-4ddc-8e55-1c4d15cbfec9 container dapi-container: STEP: delete the pod May 28 21:40:56.262: INFO: Waiting for pod downward-api-99df52f0-bc18-4ddc-8e55-1c4d15cbfec9 to disappear May 28 21:40:56.274: INFO: Pod downward-api-99df52f0-bc18-4ddc-8e55-1c4d15cbfec9 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:40:56.275: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9531" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":113,"skipped":1861,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:40:56.282: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 28 21:40:56.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9583' May 28 21:40:56.746: INFO: stderr: "" May 28 21:40:56.746: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 28 21:40:56.746: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9583' May 28 21:40:56.851: INFO: stderr: "" May 28 21:40:56.851: INFO: stdout: "update-demo-nautilus-5t24w update-demo-nautilus-j6rg4 " May 28 21:40:56.851: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5t24w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9583' May 28 21:40:56.939: INFO: stderr: "" May 28 21:40:56.939: INFO: stdout: "" May 28 21:40:56.939: INFO: update-demo-nautilus-5t24w is created but not running May 28 21:41:01.939: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-9583' May 28 21:41:02.038: INFO: stderr: "" May 28 21:41:02.038: INFO: stdout: "update-demo-nautilus-5t24w update-demo-nautilus-j6rg4 " May 28 21:41:02.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5t24w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9583' May 28 21:41:02.121: INFO: stderr: "" May 28 21:41:02.121: INFO: stdout: "true" May 28 21:41:02.121: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-5t24w -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9583' May 28 21:41:02.207: INFO: stderr: "" May 28 21:41:02.207: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 28 21:41:02.207: INFO: validating pod update-demo-nautilus-5t24w May 28 21:41:02.260: INFO: got data: { "image": "nautilus.jpg" } May 28 21:41:02.260: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 28 21:41:02.260: INFO: update-demo-nautilus-5t24w is verified up and running May 28 21:41:02.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j6rg4 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-9583' May 28 21:41:02.358: INFO: stderr: "" May 28 21:41:02.358: INFO: stdout: "true" May 28 21:41:02.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-j6rg4 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-9583' May 28 21:41:02.443: INFO: stderr: "" May 28 21:41:02.443: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 28 21:41:02.443: INFO: validating pod update-demo-nautilus-j6rg4 May 28 21:41:02.457: INFO: got data: { "image": "nautilus.jpg" } May 28 21:41:02.457: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 28 21:41:02.457: INFO: update-demo-nautilus-j6rg4 is verified up and running STEP: using delete to clean up resources May 28 21:41:02.457: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-9583' May 28 21:41:02.556: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 28 21:41:02.556: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 28 21:41:02.556: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9583' May 28 21:41:02.661: INFO: stderr: "No resources found in kubectl-9583 namespace.\n" May 28 21:41:02.661: INFO: stdout: "" May 28 21:41:02.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9583 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 28 21:41:02.758: INFO: stderr: "" May 28 21:41:02.758: INFO: stdout: "update-demo-nautilus-5t24w\nupdate-demo-nautilus-j6rg4\n" May 28 21:41:03.259: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-9583' May 28 21:41:03.382: INFO: stderr: "No resources found in kubectl-9583 namespace.\n" May 28 21:41:03.382: INFO: stdout: "" May 28 21:41:03.383: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-9583 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 28 21:41:03.479: INFO: stderr: "" May 28 21:41:03.479: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:41:03.479: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9583" for this suite. • [SLOW TEST:7.312 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":114,"skipped":1876,"failed":0} [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:41:03.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:41:08.294: INFO: Waiting up to 5m0s for pod "client-envvars-867b2206-c43e-4db3-9418-21298ddcfd75" in namespace "pods-7682" to be "success or failure" May 28 21:41:08.338: INFO: Pod "client-envvars-867b2206-c43e-4db3-9418-21298ddcfd75": Phase="Pending", Reason="", readiness=false. 
Elapsed: 43.687952ms May 28 21:41:10.470: INFO: Pod "client-envvars-867b2206-c43e-4db3-9418-21298ddcfd75": Phase="Pending", Reason="", readiness=false. Elapsed: 2.176581998s May 28 21:41:12.475: INFO: Pod "client-envvars-867b2206-c43e-4db3-9418-21298ddcfd75": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.180945233s STEP: Saw pod success May 28 21:41:12.475: INFO: Pod "client-envvars-867b2206-c43e-4db3-9418-21298ddcfd75" satisfied condition "success or failure" May 28 21:41:12.478: INFO: Trying to get logs from node jerma-worker pod client-envvars-867b2206-c43e-4db3-9418-21298ddcfd75 container env3cont: STEP: delete the pod May 28 21:41:12.499: INFO: Waiting for pod client-envvars-867b2206-c43e-4db3-9418-21298ddcfd75 to disappear May 28 21:41:12.503: INFO: Pod client-envvars-867b2206-c43e-4db3-9418-21298ddcfd75 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:41:12.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7682" for this suite. • [SLOW TEST:8.917 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":115,"skipped":1876,"failed":0} SSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:41:12.511: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:41:12.686: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 28 21:41:12.694: INFO: Number of nodes with available pods: 0 May 28 21:41:12.694: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. 
May 28 21:41:12.760: INFO: Number of nodes with available pods: 0 May 28 21:41:12.760: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:13.775: INFO: Number of nodes with available pods: 0 May 28 21:41:13.775: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:14.883: INFO: Number of nodes with available pods: 0 May 28 21:41:14.883: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:15.763: INFO: Number of nodes with available pods: 0 May 28 21:41:15.763: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:16.764: INFO: Number of nodes with available pods: 1 May 28 21:41:16.764: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 28 21:41:16.823: INFO: Number of nodes with available pods: 1 May 28 21:41:16.823: INFO: Number of running nodes: 0, number of available pods: 1 May 28 21:41:17.954: INFO: Number of nodes with available pods: 0 May 28 21:41:17.954: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 28 21:41:18.303: INFO: Number of nodes with available pods: 0 May 28 21:41:18.303: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:19.307: INFO: Number of nodes with available pods: 0 May 28 21:41:19.307: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:20.308: INFO: Number of nodes with available pods: 0 May 28 21:41:20.308: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:21.308: INFO: Number of nodes with available pods: 0 May 28 21:41:21.308: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:22.308: INFO: Number of nodes with available pods: 0 May 28 21:41:22.308: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:23.308: INFO: Number of nodes with available pods: 0 May 28 21:41:23.308: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:24.307: INFO: Number of nodes with available pods: 0 May 28 21:41:24.307: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:25.307: INFO: Number of nodes with available pods: 0 May 28 21:41:25.307: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:26.308: INFO: Number of nodes with available pods: 0 May 28 21:41:26.308: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:27.307: INFO: Number of nodes with available pods: 0 May 28 21:41:27.307: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:28.308: INFO: Number of nodes with available pods: 0 May 28 21:41:28.308: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:29.307: INFO: Number of nodes with available pods: 0 May 28 21:41:29.307: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:30.308: INFO: Number of nodes with available pods: 0 May 28 21:41:30.308: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:31.331: INFO: Number of nodes with available pods: 0 May 28 21:41:31.331: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:32.308: INFO: Number of nodes with available pods: 0 May 28 21:41:32.308: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:41:33.321: INFO: Number of nodes with available pods: 1 May 28 21:41:33.321: INFO: Number of running nodes: 1, 
number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5435, will wait for the garbage collector to delete the pods May 28 21:41:33.386: INFO: Deleting DaemonSet.extensions daemon-set took: 6.580565ms May 28 21:41:33.786: INFO: Terminating DaemonSet.extensions daemon-set pods took: 400.294303ms May 28 21:41:49.590: INFO: Number of nodes with available pods: 0 May 28 21:41:49.590: INFO: Number of running nodes: 0, number of available pods: 0 May 28 21:41:49.593: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5435/daemonsets","resourceVersion":"19903706"},"items":null} May 28 21:41:49.596: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5435/pods","resourceVersion":"19903706"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:41:49.610: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5435" for this suite. • [SLOW TEST:37.156 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":116,"skipped":1882,"failed":0} SSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:41:49.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5792 STEP: creating a selector STEP: Creating the service pods in kubernetes May 28 21:41:49.719: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 28 21:42:11.850: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.78:8080/dial?request=hostname&protocol=udp&host=10.244.1.34&port=8081&tries=1'] Namespace:pod-network-test-5792 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 21:42:11.850: INFO: >>> kubeConfig: /root/.kube/config I0528 21:42:11.887796 6 log.go:172] (0xc001510420) (0xc000f29b80) Create stream I0528 21:42:11.887838 6 log.go:172] (0xc001510420) (0xc000f29b80) Stream added, broadcasting: 1 I0528 21:42:11.891088 6 log.go:172] (0xc001510420) Reply frame 
received for 1 I0528 21:42:11.891157 6 log.go:172] (0xc001510420) (0xc001498140) Create stream I0528 21:42:11.891177 6 log.go:172] (0xc001510420) (0xc001498140) Stream added, broadcasting: 3 I0528 21:42:11.892416 6 log.go:172] (0xc001510420) Reply frame received for 3 I0528 21:42:11.892511 6 log.go:172] (0xc001510420) (0xc0029b2000) Create stream I0528 21:42:11.892555 6 log.go:172] (0xc001510420) (0xc0029b2000) Stream added, broadcasting: 5 I0528 21:42:11.894179 6 log.go:172] (0xc001510420) Reply frame received for 5 I0528 21:42:12.063231 6 log.go:172] (0xc001510420) Data frame received for 3 I0528 21:42:12.063266 6 log.go:172] (0xc001498140) (3) Data frame handling I0528 21:42:12.063383 6 log.go:172] (0xc001498140) (3) Data frame sent I0528 21:42:12.063801 6 log.go:172] (0xc001510420) Data frame received for 3 I0528 21:42:12.063833 6 log.go:172] (0xc001498140) (3) Data frame handling I0528 21:42:12.064144 6 log.go:172] (0xc001510420) Data frame received for 5 I0528 21:42:12.064168 6 log.go:172] (0xc0029b2000) (5) Data frame handling I0528 21:42:12.066464 6 log.go:172] (0xc001510420) Data frame received for 1 I0528 21:42:12.066497 6 log.go:172] (0xc000f29b80) (1) Data frame handling I0528 21:42:12.066513 6 log.go:172] (0xc000f29b80) (1) Data frame sent I0528 21:42:12.066732 6 log.go:172] (0xc001510420) (0xc000f29b80) Stream removed, broadcasting: 1 I0528 21:42:12.066821 6 log.go:172] (0xc001510420) Go away received I0528 21:42:12.066847 6 log.go:172] (0xc001510420) (0xc000f29b80) Stream removed, broadcasting: 1 I0528 21:42:12.066857 6 log.go:172] (0xc001510420) (0xc001498140) Stream removed, broadcasting: 3 I0528 21:42:12.066877 6 log.go:172] (0xc001510420) (0xc0029b2000) Stream removed, broadcasting: 5 May 28 21:42:12.066: INFO: Waiting for responses: map[] May 28 21:42:12.070: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.78:8080/dial?request=hostname&protocol=udp&host=10.244.2.77&port=8081&tries=1'] Namespace:pod-network-test-5792 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 21:42:12.070: INFO: >>> kubeConfig: /root/.kube/config I0528 21:42:12.099570 6 log.go:172] (0xc0009762c0) (0xc0014986e0) Create stream I0528 21:42:12.099597 6 log.go:172] (0xc0009762c0) (0xc0014986e0) Stream added, broadcasting: 1 I0528 21:42:12.101673 6 log.go:172] (0xc0009762c0) Reply frame received for 1 I0528 21:42:12.101710 6 log.go:172] (0xc0009762c0) (0xc0023d43c0) Create stream I0528 21:42:12.101720 6 log.go:172] (0xc0009762c0) (0xc0023d43c0) Stream added, broadcasting: 3 I0528 21:42:12.102733 6 log.go:172] (0xc0009762c0) Reply frame received for 3 I0528 21:42:12.102782 6 log.go:172] (0xc0009762c0) (0xc0016560a0) Create stream I0528 21:42:12.102801 6 log.go:172] (0xc0009762c0) (0xc0016560a0) Stream added, broadcasting: 5 I0528 21:42:12.103448 6 log.go:172] (0xc0009762c0) Reply frame received for 5 I0528 21:42:12.168898 6 log.go:172] (0xc0009762c0) Data frame received for 3 I0528 21:42:12.168931 6 log.go:172] (0xc0023d43c0) (3) Data frame handling I0528 21:42:12.168992 6 log.go:172] (0xc0023d43c0) (3) Data frame sent I0528 21:42:12.169695 6 log.go:172] (0xc0009762c0) Data frame received for 5 I0528 21:42:12.169750 6 log.go:172] (0xc0016560a0) (5) Data frame handling I0528 21:42:12.169777 6 log.go:172] (0xc0009762c0) Data frame received for 3 I0528 21:42:12.169793 6 log.go:172] (0xc0023d43c0) (3) Data frame handling I0528 21:42:12.171331 6 log.go:172] (0xc0009762c0) Data frame received for 1 I0528 
21:42:12.171355 6 log.go:172] (0xc0014986e0) (1) Data frame handling I0528 21:42:12.171383 6 log.go:172] (0xc0014986e0) (1) Data frame sent I0528 21:42:12.171399 6 log.go:172] (0xc0009762c0) (0xc0014986e0) Stream removed, broadcasting: 1 I0528 21:42:12.171482 6 log.go:172] (0xc0009762c0) Go away received I0528 21:42:12.171508 6 log.go:172] (0xc0009762c0) (0xc0014986e0) Stream removed, broadcasting: 1 I0528 21:42:12.171536 6 log.go:172] (0xc0009762c0) (0xc0023d43c0) Stream removed, broadcasting: 3 I0528 21:42:12.171555 6 log.go:172] (0xc0009762c0) (0xc0016560a0) Stream removed, broadcasting: 5 May 28 21:42:12.171: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:42:12.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5792" for this suite. • [SLOW TEST:22.512 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1887,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:42:12.181: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5054 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-5054 I0528 21:42:12.339273 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-5054, replica count: 2 I0528 21:42:15.389933 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0528 21:42:18.390136 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 28 21:42:18.390: INFO: Creating new exec pod May 28 21:42:23.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5054 execpodjrdzz -- /bin/sh -x -c nc -zv -t -w 2 
externalname-service 80' May 28 21:42:23.737: INFO: stderr: "I0528 21:42:23.651132 2058 log.go:172] (0xc0005f5080) (0xc0005821e0) Create stream\nI0528 21:42:23.651184 2058 log.go:172] (0xc0005f5080) (0xc0005821e0) Stream added, broadcasting: 1\nI0528 21:42:23.653744 2058 log.go:172] (0xc0005f5080) Reply frame received for 1\nI0528 21:42:23.653783 2058 log.go:172] (0xc0005f5080) (0xc000582280) Create stream\nI0528 21:42:23.653794 2058 log.go:172] (0xc0005f5080) (0xc000582280) Stream added, broadcasting: 3\nI0528 21:42:23.654791 2058 log.go:172] (0xc0005f5080) Reply frame received for 3\nI0528 21:42:23.654833 2058 log.go:172] (0xc0005f5080) (0xc000582320) Create stream\nI0528 21:42:23.654848 2058 log.go:172] (0xc0005f5080) (0xc000582320) Stream added, broadcasting: 5\nI0528 21:42:23.656194 2058 log.go:172] (0xc0005f5080) Reply frame received for 5\nI0528 21:42:23.730306 2058 log.go:172] (0xc0005f5080) Data frame received for 5\nI0528 21:42:23.730329 2058 log.go:172] (0xc000582320) (5) Data frame handling\nI0528 21:42:23.730344 2058 log.go:172] (0xc000582320) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0528 21:42:23.730827 2058 log.go:172] (0xc0005f5080) Data frame received for 5\nI0528 21:42:23.730857 2058 log.go:172] (0xc000582320) (5) Data frame handling\nI0528 21:42:23.730887 2058 log.go:172] (0xc000582320) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0528 21:42:23.730987 2058 log.go:172] (0xc0005f5080) Data frame received for 5\nI0528 21:42:23.731002 2058 log.go:172] (0xc000582320) (5) Data frame handling\nI0528 21:42:23.731191 2058 log.go:172] (0xc0005f5080) Data frame received for 3\nI0528 21:42:23.731208 2058 log.go:172] (0xc000582280) (3) Data frame handling\nI0528 21:42:23.732893 2058 log.go:172] (0xc0005f5080) Data frame received for 1\nI0528 21:42:23.732917 2058 log.go:172] (0xc0005821e0) (1) Data frame handling\nI0528 21:42:23.732938 2058 log.go:172] (0xc0005821e0) (1) Data frame sent\nI0528 21:42:23.732955 2058 log.go:172] (0xc0005f5080) (0xc0005821e0) Stream removed, broadcasting: 1\nI0528 21:42:23.732980 2058 log.go:172] (0xc0005f5080) Go away received\nI0528 21:42:23.733456 2058 log.go:172] (0xc0005f5080) (0xc0005821e0) Stream removed, broadcasting: 1\nI0528 21:42:23.733477 2058 log.go:172] (0xc0005f5080) (0xc000582280) Stream removed, broadcasting: 3\nI0528 21:42:23.733487 2058 log.go:172] (0xc0005f5080) (0xc000582320) Stream removed, broadcasting: 5\n" May 28 21:42:23.737: INFO: stdout: "" May 28 21:42:23.738: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5054 execpodjrdzz -- /bin/sh -x -c nc -zv -t -w 2 10.103.101.221 80' May 28 21:42:23.938: INFO: stderr: "I0528 21:42:23.854398 2080 log.go:172] (0xc0000f5550) (0xc0006a41e0) Create stream\nI0528 21:42:23.854455 2080 log.go:172] (0xc0000f5550) (0xc0006a41e0) Stream added, broadcasting: 1\nI0528 21:42:23.856480 2080 log.go:172] (0xc0000f5550) Reply frame received for 1\nI0528 21:42:23.856540 2080 log.go:172] (0xc0000f5550) (0xc000679ae0) Create stream\nI0528 21:42:23.856557 2080 log.go:172] (0xc0000f5550) (0xc000679ae0) Stream added, broadcasting: 3\nI0528 21:42:23.857571 2080 log.go:172] (0xc0000f5550) Reply frame received for 3\nI0528 21:42:23.857610 2080 log.go:172] (0xc0000f5550) (0xc0005c9400) Create stream\nI0528 21:42:23.857630 2080 log.go:172] (0xc0000f5550) (0xc0005c9400) Stream added, broadcasting: 5\nI0528 21:42:23.858373 2080 log.go:172] (0xc0000f5550) Reply frame received for 5\nI0528 21:42:23.931520 
2080 log.go:172] (0xc0000f5550) Data frame received for 5\nI0528 21:42:23.931550 2080 log.go:172] (0xc0005c9400) (5) Data frame handling\nI0528 21:42:23.931561 2080 log.go:172] (0xc0005c9400) (5) Data frame sent\nI0528 21:42:23.931568 2080 log.go:172] (0xc0000f5550) Data frame received for 5\nI0528 21:42:23.931574 2080 log.go:172] (0xc0005c9400) (5) Data frame handling\n+ nc -zv -t -w 2 10.103.101.221 80\nConnection to 10.103.101.221 80 port [tcp/http] succeeded!\nI0528 21:42:23.931593 2080 log.go:172] (0xc0000f5550) Data frame received for 3\nI0528 21:42:23.931600 2080 log.go:172] (0xc000679ae0) (3) Data frame handling\nI0528 21:42:23.932784 2080 log.go:172] (0xc0000f5550) Data frame received for 1\nI0528 21:42:23.932798 2080 log.go:172] (0xc0006a41e0) (1) Data frame handling\nI0528 21:42:23.932809 2080 log.go:172] (0xc0006a41e0) (1) Data frame sent\nI0528 21:42:23.932960 2080 log.go:172] (0xc0000f5550) (0xc0006a41e0) Stream removed, broadcasting: 1\nI0528 21:42:23.932980 2080 log.go:172] (0xc0000f5550) Go away received\nI0528 21:42:23.933441 2080 log.go:172] (0xc0000f5550) (0xc0006a41e0) Stream removed, broadcasting: 1\nI0528 21:42:23.933459 2080 log.go:172] (0xc0000f5550) (0xc000679ae0) Stream removed, broadcasting: 3\nI0528 21:42:23.933467 2080 log.go:172] (0xc0000f5550) (0xc0005c9400) Stream removed, broadcasting: 5\n" May 28 21:42:23.938: INFO: stdout: "" May 28 21:42:23.938: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5054 execpodjrdzz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30712' May 28 21:42:24.134: INFO: stderr: "I0528 21:42:24.058092 2103 log.go:172] (0xc0003c2fd0) (0xc000afa000) Create stream\nI0528 21:42:24.058143 2103 log.go:172] (0xc0003c2fd0) (0xc000afa000) Stream added, broadcasting: 1\nI0528 21:42:24.060461 2103 log.go:172] (0xc0003c2fd0) Reply frame received for 1\nI0528 21:42:24.060509 2103 log.go:172] (0xc0003c2fd0) (0xc0005d7a40) Create stream\nI0528 21:42:24.060520 2103 log.go:172] (0xc0003c2fd0) (0xc0005d7a40) Stream added, broadcasting: 3\nI0528 21:42:24.061487 2103 log.go:172] (0xc0003c2fd0) Reply frame received for 3\nI0528 21:42:24.061527 2103 log.go:172] (0xc0003c2fd0) (0xc00052c000) Create stream\nI0528 21:42:24.061541 2103 log.go:172] (0xc0003c2fd0) (0xc00052c000) Stream added, broadcasting: 5\nI0528 21:42:24.062415 2103 log.go:172] (0xc0003c2fd0) Reply frame received for 5\nI0528 21:42:24.125331 2103 log.go:172] (0xc0003c2fd0) Data frame received for 5\nI0528 21:42:24.125356 2103 log.go:172] (0xc00052c000) (5) Data frame handling\nI0528 21:42:24.125368 2103 log.go:172] (0xc00052c000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 30712\nI0528 21:42:24.125813 2103 log.go:172] (0xc0003c2fd0) Data frame received for 5\nI0528 21:42:24.125827 2103 log.go:172] (0xc00052c000) (5) Data frame handling\nI0528 21:42:24.125837 2103 log.go:172] (0xc00052c000) (5) Data frame sent\nConnection to 172.17.0.10 30712 port [tcp/30712] succeeded!\nI0528 21:42:24.126337 2103 log.go:172] (0xc0003c2fd0) Data frame received for 5\nI0528 21:42:24.126368 2103 log.go:172] (0xc00052c000) (5) Data frame handling\nI0528 21:42:24.126471 2103 log.go:172] (0xc0003c2fd0) Data frame received for 3\nI0528 21:42:24.126487 2103 log.go:172] (0xc0005d7a40) (3) Data frame handling\nI0528 21:42:24.127929 2103 log.go:172] (0xc0003c2fd0) Data frame received for 1\nI0528 21:42:24.127947 2103 log.go:172] (0xc000afa000) (1) Data frame handling\nI0528 21:42:24.127955 2103 log.go:172] (0xc000afa000) (1) Data frame sent\nI0528 21:42:24.127963 2103 
log.go:172] (0xc0003c2fd0) (0xc000afa000) Stream removed, broadcasting: 1\nI0528 21:42:24.127970 2103 log.go:172] (0xc0003c2fd0) Go away received\nI0528 21:42:24.128453 2103 log.go:172] (0xc0003c2fd0) (0xc000afa000) Stream removed, broadcasting: 1\nI0528 21:42:24.128469 2103 log.go:172] (0xc0003c2fd0) (0xc0005d7a40) Stream removed, broadcasting: 3\nI0528 21:42:24.128477 2103 log.go:172] (0xc0003c2fd0) (0xc00052c000) Stream removed, broadcasting: 5\n" May 28 21:42:24.134: INFO: stdout: "" May 28 21:42:24.134: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5054 execpodjrdzz -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30712' May 28 21:42:24.329: INFO: stderr: "I0528 21:42:24.254677 2124 log.go:172] (0xc000a6a0b0) (0xc0001ed4a0) Create stream\nI0528 21:42:24.254743 2124 log.go:172] (0xc000a6a0b0) (0xc0001ed4a0) Stream added, broadcasting: 1\nI0528 21:42:24.259200 2124 log.go:172] (0xc000a6a0b0) Reply frame received for 1\nI0528 21:42:24.259246 2124 log.go:172] (0xc000a6a0b0) (0xc000b74000) Create stream\nI0528 21:42:24.259259 2124 log.go:172] (0xc000a6a0b0) (0xc000b74000) Stream added, broadcasting: 3\nI0528 21:42:24.260307 2124 log.go:172] (0xc000a6a0b0) Reply frame received for 3\nI0528 21:42:24.260344 2124 log.go:172] (0xc000a6a0b0) (0xc00097c000) Create stream\nI0528 21:42:24.260355 2124 log.go:172] (0xc000a6a0b0) (0xc00097c000) Stream added, broadcasting: 5\nI0528 21:42:24.261767 2124 log.go:172] (0xc000a6a0b0) Reply frame received for 5\nI0528 21:42:24.319898 2124 log.go:172] (0xc000a6a0b0) Data frame received for 5\nI0528 21:42:24.319939 2124 log.go:172] (0xc00097c000) (5) Data frame handling\nI0528 21:42:24.319965 2124 log.go:172] (0xc00097c000) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.8 30712\nI0528 21:42:24.320162 2124 log.go:172] (0xc000a6a0b0) Data frame received for 5\nI0528 21:42:24.320204 2124 log.go:172] (0xc00097c000) (5) Data frame handling\nI0528 21:42:24.320237 2124 log.go:172] (0xc00097c000) (5) Data frame sent\nConnection to 172.17.0.8 30712 port [tcp/30712] succeeded!\nI0528 21:42:24.320552 2124 log.go:172] (0xc000a6a0b0) Data frame received for 3\nI0528 21:42:24.320585 2124 log.go:172] (0xc000b74000) (3) Data frame handling\nI0528 21:42:24.320685 2124 log.go:172] (0xc000a6a0b0) Data frame received for 5\nI0528 21:42:24.320701 2124 log.go:172] (0xc00097c000) (5) Data frame handling\nI0528 21:42:24.322535 2124 log.go:172] (0xc000a6a0b0) Data frame received for 1\nI0528 21:42:24.322570 2124 log.go:172] (0xc0001ed4a0) (1) Data frame handling\nI0528 21:42:24.322597 2124 log.go:172] (0xc0001ed4a0) (1) Data frame sent\nI0528 21:42:24.322697 2124 log.go:172] (0xc000a6a0b0) (0xc0001ed4a0) Stream removed, broadcasting: 1\nI0528 21:42:24.322939 2124 log.go:172] (0xc000a6a0b0) Go away received\nI0528 21:42:24.323101 2124 log.go:172] (0xc000a6a0b0) (0xc0001ed4a0) Stream removed, broadcasting: 1\nI0528 21:42:24.323134 2124 log.go:172] (0xc000a6a0b0) (0xc000b74000) Stream removed, broadcasting: 3\nI0528 21:42:24.323156 2124 log.go:172] (0xc000a6a0b0) (0xc00097c000) Stream removed, broadcasting: 5\n" May 28 21:42:24.329: INFO: stdout: "" May 28 21:42:24.329: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:42:24.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5054" for this suite. 
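The service test above flips spec.type from ExternalName to NodePort and then verifies, with `nc`, that the service name on port 80, the ClusterIP (10.103.101.221:80), and each node IP on the allocated NodePort (30712) all accept connections. A sketch of the service after the flip; the selector is an assumption, and NodePort is left zero because in the logged run the apiserver allocated 30712 rather than the test requesting it:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/intstr"
)

func main() {
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "externalname-service", Namespace: "services-5054"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeNodePort,
			Selector: map[string]string{"name": "externalname-service"}, // assumed to match the RC's pods
			Ports: []corev1.ServicePort{{
				Port:       80,
				TargetPort: intstr.FromInt(80),
				// NodePort: 0 => apiserver allocates one (30712 in this run)
			}},
		},
	}
	_ = svc
}
```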
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.214 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":118,"skipped":1925,"failed":0} [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:42:24.396: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:42:41.497: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4604" for this suite. • [SLOW TEST:17.110 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":119,"skipped":1925,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:42:41.510: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-b0337bea-42fa-4379-a26e-5493f1e4c4ff in namespace container-probe-5765 May 28 21:42:45.595: INFO: Started pod busybox-b0337bea-42fa-4379-a26e-5493f1e4c4ff in namespace container-probe-5765 STEP: checking the pod's current state and verifying that restartCount is present May 28 21:42:45.599: INFO: Initial restart count of pod busybox-b0337bea-42fa-4379-a26e-5493f1e4c4ff is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:46:46.241: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-5765" for this suite. 
• [SLOW TEST:244.759 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":1976,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:46:46.269: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-43aa4973-847e-4e65-a47f-96f49a3cbf68 STEP: Creating a pod to test consume secrets May 28 21:46:46.357: INFO: Waiting up to 5m0s for pod "pod-secrets-2af2c985-c68c-4f84-8136-553e4609b4fa" in namespace "secrets-5751" to be "success or failure" May 28 21:46:46.359: INFO: Pod "pod-secrets-2af2c985-c68c-4f84-8136-553e4609b4fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.203036ms May 28 21:46:48.364: INFO: Pod "pod-secrets-2af2c985-c68c-4f84-8136-553e4609b4fa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.006822878s May 28 21:46:50.369: INFO: Pod "pod-secrets-2af2c985-c68c-4f84-8136-553e4609b4fa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012029619s STEP: Saw pod success May 28 21:46:50.369: INFO: Pod "pod-secrets-2af2c985-c68c-4f84-8136-553e4609b4fa" satisfied condition "success or failure" May 28 21:46:50.372: INFO: Trying to get logs from node jerma-worker pod pod-secrets-2af2c985-c68c-4f84-8136-553e4609b4fa container secret-env-test: STEP: delete the pod May 28 21:46:50.404: INFO: Waiting for pod pod-secrets-2af2c985-c68c-4f84-8136-553e4609b4fa to disappear May 28 21:46:50.426: INFO: Pod pod-secrets-2af2c985-c68c-4f84-8136-553e4609b4fa no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:46:50.427: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5751" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":1987,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:46:50.435: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0528 21:47:21.109900 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 28 21:47:21.109: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:47:21.110: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1670" for this suite. 
• [SLOW TEST:30.682 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":122,"skipped":1998,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:47:21.117: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpa': should get the expected 'State' STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpof': should get the expected 'State' STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance] STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount' STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase' STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition STEP: Container 'terminate-cmd-rpn': should get the expected 'State' STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance] [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:47:51.140: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5692" for this suite. 
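The rpa/rpof/rpn suffixes in the STEP lines above appear to encode the three restart policies (Always, OnFailure, Never), each giving an exiting container a different expected Phase and RestartCount. A sketch of that matrix; the exit command is illustrative:

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// exitingPod returns a pod whose single container exits immediately,
// run under the given restart policy.
func exitingPod(name string, policy corev1.RestartPolicy) corev1.Pod {
	return corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: policy,
			Containers: []corev1.Container{{
				Name:    name,
				Image:   "busybox",
				Command: []string{"sh", "-c", "exit 1"}, // illustrative failure
			}},
		},
	}
}

func main() {
	_ = exitingPod("terminate-cmd-rpa", corev1.RestartPolicyAlways)     // kubelet restarts it; RestartCount grows
	_ = exitingPod("terminate-cmd-rpof", corev1.RestartPolicyOnFailure) // restarted until it exits 0
	_ = exitingPod("terminate-cmd-rpn", corev1.RestartPolicyNever)      // no restart; Phase becomes Failed
}
```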
• [SLOW TEST:30.033 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2035,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:47:51.151: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:47:51.307: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 28 21:47:56.315: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 28 21:47:56.315: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 28 21:47:58.319: INFO: Creating deployment "test-rollover-deployment" May 28 21:47:58.356: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 28 21:48:00.390: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 28 21:48:00.395: INFO: Ensure that both replica sets have 1 created replica May 28 21:48:00.401: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 28 21:48:00.407: INFO: Updating deployment test-rollover-deployment May 28 21:48:00.407: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 28 21:48:02.416: INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 28 21:48:02.422: INFO: Make sure deployment "test-rollover-deployment" is complete May 28 21:48:02.426: INFO: all replica sets need to contain the pod-template-hash label May 28 21:48:02.426: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299280, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 21:48:04.434: INFO: all replica sets need to contain the pod-template-hash label May 28 21:48:04.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299283, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 21:48:06.435: INFO: all replica sets need to contain the pod-template-hash label May 28 21:48:06.435: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299283, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 21:48:08.434: INFO: all replica sets need to contain the pod-template-hash label May 28 21:48:08.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299283, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 21:48:10.434: INFO: all replica sets need to contain the 
pod-template-hash label May 28 21:48:10.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299283, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 21:48:12.434: INFO: all replica sets need to contain the pod-template-hash label May 28 21:48:12.434: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299283, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299278, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 21:48:14.434: INFO: May 28 21:48:14.434: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 28 21:48:14.443: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-6820 /apis/apps/v1/namespaces/deployment-6820/deployments/test-rollover-deployment 20be85db-de9b-4cf0-a6b0-81395d408586 19905254 2 2020-05-28 21:47:58 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00411d578 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-28 21:47:58 +0000 UTC,LastTransitionTime:2020-05-28 21:47:58 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-28 21:48:14 +0000 UTC,LastTransitionTime:2020-05-28 21:47:58 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 28 21:48:14.447: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-6820 /apis/apps/v1/namespaces/deployment-6820/replicasets/test-rollover-deployment-574d6dfbff 4d459820-4276-4bf6-9220-7862ec6d1b26 19905242 2 2020-05-28 21:48:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 20be85db-de9b-4cf0-a6b0-81395d408586 0xc0035c3fd7 0xc0035c3fd8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035a2058 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 28 21:48:14.447: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 28 21:48:14.447: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-6820 /apis/apps/v1/namespaces/deployment-6820/replicasets/test-rollover-controller d222b91c-0560-4e1c-8f8e-f65d9a8dda2b 19905252 2 2020-05-28 21:47:51 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 20be85db-de9b-4cf0-a6b0-81395d408586 0xc0035c3ee7 0xc0035c3ee8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine 
[] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc0035c3f48 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 28 21:48:14.447: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-6820 /apis/apps/v1/namespaces/deployment-6820/replicasets/test-rollover-deployment-f6c94f66c 4c33139b-3ae0-43ac-818d-c675c9986f73 19905190 2 2020-05-28 21:47:58 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 20be85db-de9b-4cf0-a6b0-81395d408586 0xc0035a20c0 0xc0035a20c1}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0035a2138 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 28 21:48:14.450: INFO: Pod "test-rollover-deployment-574d6dfbff-qdvgx" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-qdvgx test-rollover-deployment-574d6dfbff- deployment-6820 /api/v1/namespaces/deployment-6820/pods/test-rollover-deployment-574d6dfbff-qdvgx 5dbc9f5f-03c4-468e-84f7-b4946f6d3539 19905210 0 2020-05-28 21:48:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 ReplicaSet test-rollover-deployment-574d6dfbff 4d459820-4276-4bf6-9220-7862ec6d1b26 0xc0035a26b7 0xc0035a26b8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-7vwrf,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-7vwrf,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-7vwrf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 21:48:00 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 21:48:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 21:48:03 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 21:48:00 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.44,StartTime:2020-05-28 21:48:00 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-28 21:48:03 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://9e73db30f162ce2e9a5d25d5964f2b817d044408188208a60921da5fad9bb7e1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.44,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:48:14.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-6820" for this suite. • [SLOW TEST:23.305 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":124,"skipped":2055,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:48:14.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 28 21:48:14.600: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:48:21.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-1927" for this suite. 
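The single INFO line above ("PodSpec: initContainers in spec.initContainers") is all the test prints before asserting the pod fails; the pod it creates is shaped roughly like this Go sketch (name, image, and commands are illustrative, not the suite's actual spec). With RestartPolicyNever, the kubelet must mark the pod Failed after the init container's first non-zero exit and must never start the app container:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Hypothetical pod: the init container always exits non-zero, so with
	// RestartPolicy Never the pod goes Failed and "app" never runs.
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "init-fails-once"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			InitContainers: []corev1.Container{{
				Name:    "init-fail",
				Image:   "busybox",
				Command: []string{"/bin/false"}, // always exits 1
			}},
			Containers: []corev1.Container{{
				Name:    "app",
				Image:   "busybox",
				Command: []string{"/bin/true"},
			}},
		},
	}
	fmt.Printf("%s: restartPolicy=%s, %d init container(s)\n",
		pod.Name, pod.Spec.RestartPolicy, len(pod.Spec.InitContainers))
}

The same spec with RestartPolicyAlways would instead restart the failing init container indefinitely and leave the pod Pending.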
• [SLOW TEST:6.706 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":125,"skipped":2067,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:48:21.163: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-0e9918c9-35de-482f-a55e-e801dd119f2a STEP: Creating a pod to test consume configMaps May 28 21:48:21.440: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-f71cae6b-4796-4ae7-a072-dbbcb8f243b5" in namespace "projected-9989" to be "success or failure" May 28 21:48:21.524: INFO: Pod "pod-projected-configmaps-f71cae6b-4796-4ae7-a072-dbbcb8f243b5": Phase="Pending", Reason="", readiness=false. Elapsed: 84.105371ms May 28 21:48:23.528: INFO: Pod "pod-projected-configmaps-f71cae6b-4796-4ae7-a072-dbbcb8f243b5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087854625s May 28 21:48:25.532: INFO: Pod "pod-projected-configmaps-f71cae6b-4796-4ae7-a072-dbbcb8f243b5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.092027227s STEP: Saw pod success May 28 21:48:25.532: INFO: Pod "pod-projected-configmaps-f71cae6b-4796-4ae7-a072-dbbcb8f243b5" satisfied condition "success or failure" May 28 21:48:25.535: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-f71cae6b-4796-4ae7-a072-dbbcb8f243b5 container projected-configmap-volume-test: STEP: delete the pod May 28 21:48:25.576: INFO: Waiting for pod pod-projected-configmaps-f71cae6b-4796-4ae7-a072-dbbcb8f243b5 to disappear May 28 21:48:25.587: INFO: Pod pod-projected-configmaps-f71cae6b-4796-4ae7-a072-dbbcb8f243b5 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:48:25.587: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9989" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":126,"skipped":2100,"failed":0} S ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:48:25.594: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 28 21:48:29.839: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:48:29.868: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-447" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2101,"failed":0} SSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:48:29.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-1689 STEP: creating a selector STEP: Creating the service pods in kubernetes May 28 21:48:30.035: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 28 21:48:58.174: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.47 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1689 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 21:48:58.174: INFO: >>> kubeConfig: /root/.kube/config I0528 21:48:58.225828 6 log.go:172] 
(0xc001510370) (0xc000c12e60) Create stream I0528 21:48:58.225868 6 log.go:172] (0xc001510370) (0xc000c12e60) Stream added, broadcasting: 1 I0528 21:48:58.227673 6 log.go:172] (0xc001510370) Reply frame received for 1 I0528 21:48:58.227720 6 log.go:172] (0xc001510370) (0xc000c12fa0) Create stream I0528 21:48:58.227743 6 log.go:172] (0xc001510370) (0xc000c12fa0) Stream added, broadcasting: 3 I0528 21:48:58.229079 6 log.go:172] (0xc001510370) Reply frame received for 3 I0528 21:48:58.229276 6 log.go:172] (0xc001510370) (0xc001b063c0) Create stream I0528 21:48:58.229294 6 log.go:172] (0xc001510370) (0xc001b063c0) Stream added, broadcasting: 5 I0528 21:48:58.230078 6 log.go:172] (0xc001510370) Reply frame received for 5 I0528 21:48:59.297483 6 log.go:172] (0xc001510370) Data frame received for 5 I0528 21:48:59.297531 6 log.go:172] (0xc001b063c0) (5) Data frame handling I0528 21:48:59.297579 6 log.go:172] (0xc001510370) Data frame received for 3 I0528 21:48:59.297606 6 log.go:172] (0xc000c12fa0) (3) Data frame handling I0528 21:48:59.297675 6 log.go:172] (0xc000c12fa0) (3) Data frame sent I0528 21:48:59.297698 6 log.go:172] (0xc001510370) Data frame received for 3 I0528 21:48:59.297709 6 log.go:172] (0xc000c12fa0) (3) Data frame handling I0528 21:48:59.299806 6 log.go:172] (0xc001510370) Data frame received for 1 I0528 21:48:59.299845 6 log.go:172] (0xc000c12e60) (1) Data frame handling I0528 21:48:59.299871 6 log.go:172] (0xc000c12e60) (1) Data frame sent I0528 21:48:59.299896 6 log.go:172] (0xc001510370) (0xc000c12e60) Stream removed, broadcasting: 1 I0528 21:48:59.299917 6 log.go:172] (0xc001510370) Go away received I0528 21:48:59.300073 6 log.go:172] (0xc001510370) (0xc000c12e60) Stream removed, broadcasting: 1 I0528 21:48:59.300110 6 log.go:172] (0xc001510370) (0xc000c12fa0) Stream removed, broadcasting: 3 I0528 21:48:59.300136 6 log.go:172] (0xc001510370) (0xc001b063c0) Stream removed, broadcasting: 5 May 28 21:48:59.300: INFO: Found all expected endpoints: [netserver-0] May 28 21:48:59.304: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.84 8081 | grep -v '^\s*$'] Namespace:pod-network-test-1689 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 21:48:59.304: INFO: >>> kubeConfig: /root/.kube/config I0528 21:48:59.341439 6 log.go:172] (0xc0009762c0) (0xc0022fa500) Create stream I0528 21:48:59.341468 6 log.go:172] (0xc0009762c0) (0xc0022fa500) Stream added, broadcasting: 1 I0528 21:48:59.343230 6 log.go:172] (0xc0009762c0) Reply frame received for 1 I0528 21:48:59.343271 6 log.go:172] (0xc0009762c0) (0xc0029b3540) Create stream I0528 21:48:59.343288 6 log.go:172] (0xc0009762c0) (0xc0029b3540) Stream added, broadcasting: 3 I0528 21:48:59.344001 6 log.go:172] (0xc0009762c0) Reply frame received for 3 I0528 21:48:59.344042 6 log.go:172] (0xc0009762c0) (0xc0014ba820) Create stream I0528 21:48:59.344051 6 log.go:172] (0xc0009762c0) (0xc0014ba820) Stream added, broadcasting: 5 I0528 21:48:59.344875 6 log.go:172] (0xc0009762c0) Reply frame received for 5 I0528 21:49:00.449703 6 log.go:172] (0xc0009762c0) Data frame received for 3 I0528 21:49:00.449765 6 log.go:172] (0xc0029b3540) (3) Data frame handling I0528 21:49:00.449847 6 log.go:172] (0xc0029b3540) (3) Data frame sent I0528 21:49:00.449876 6 log.go:172] (0xc0009762c0) Data frame received for 3 I0528 21:49:00.449896 6 log.go:172] (0xc0029b3540) (3) Data frame handling I0528 21:49:00.450279 6 log.go:172] (0xc0009762c0) Data frame 
received for 5 I0528 21:49:00.450320 6 log.go:172] (0xc0014ba820) (5) Data frame handling I0528 21:49:00.452363 6 log.go:172] (0xc0009762c0) Data frame received for 1 I0528 21:49:00.452394 6 log.go:172] (0xc0022fa500) (1) Data frame handling I0528 21:49:00.452430 6 log.go:172] (0xc0022fa500) (1) Data frame sent I0528 21:49:00.452458 6 log.go:172] (0xc0009762c0) (0xc0022fa500) Stream removed, broadcasting: 1 I0528 21:49:00.452479 6 log.go:172] (0xc0009762c0) Go away received I0528 21:49:00.452616 6 log.go:172] (0xc0009762c0) (0xc0022fa500) Stream removed, broadcasting: 1 I0528 21:49:00.452656 6 log.go:172] (0xc0009762c0) (0xc0029b3540) Stream removed, broadcasting: 3 I0528 21:49:00.452702 6 log.go:172] (0xc0009762c0) (0xc0014ba820) Stream removed, broadcasting: 5 May 28 21:49:00.452: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:49:00.452: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1689" for this suite. • [SLOW TEST:30.561 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2105,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:49:00.462: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-465.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-465.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-465.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-465.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-465.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-465.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 28 21:49:06.804: INFO: DNS probes using dns-465/dns-test-c62e1934-101b-45b7-8f96-4929041fbe25 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:49:06.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-465" for this suite. • [SLOW TEST:6.567 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":129,"skipped":2118,"failed":0} SSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:49:07.029: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-9c8da872-b52e-47b9-8a35-4b2051277a05 STEP: Creating a pod to test consume secrets May 28 21:49:07.515: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-41eef40a-a692-4e5a-ad0f-58f59d78fbcf" in namespace "projected-3390" to be "success or failure" May 28 21:49:07.519: INFO: Pod "pod-projected-secrets-41eef40a-a692-4e5a-ad0f-58f59d78fbcf": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.178946ms May 28 21:49:09.651: INFO: Pod "pod-projected-secrets-41eef40a-a692-4e5a-ad0f-58f59d78fbcf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.135186398s May 28 21:49:11.685: INFO: Pod "pod-projected-secrets-41eef40a-a692-4e5a-ad0f-58f59d78fbcf": Phase="Pending", Reason="", readiness=false. Elapsed: 4.169532401s May 28 21:49:13.689: INFO: Pod "pod-projected-secrets-41eef40a-a692-4e5a-ad0f-58f59d78fbcf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.173957231s STEP: Saw pod success May 28 21:49:13.689: INFO: Pod "pod-projected-secrets-41eef40a-a692-4e5a-ad0f-58f59d78fbcf" satisfied condition "success or failure" May 28 21:49:13.692: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-41eef40a-a692-4e5a-ad0f-58f59d78fbcf container projected-secret-volume-test: STEP: delete the pod May 28 21:49:13.713: INFO: Waiting for pod pod-projected-secrets-41eef40a-a692-4e5a-ad0f-58f59d78fbcf to disappear May 28 21:49:13.717: INFO: Pod pod-projected-secrets-41eef40a-a692-4e5a-ad0f-58f59d78fbcf no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:49:13.718: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3390" for this suite. • [SLOW TEST:6.695 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":130,"skipped":2123,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:49:13.724: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-3426a935-0402-457f-a01f-83493059d6d0 STEP: Creating a pod to test consume secrets May 28 21:49:13.846: INFO: Waiting up to 5m0s for pod "pod-secrets-d441356d-8b88-45a4-ae67-016c3719ed2d" in namespace "secrets-4513" to be "success or failure" May 28 21:49:13.850: INFO: Pod "pod-secrets-d441356d-8b88-45a4-ae67-016c3719ed2d": Phase="Pending", Reason="", readiness=false. Elapsed: 3.544019ms May 28 21:49:15.868: INFO: Pod "pod-secrets-d441356d-8b88-45a4-ae67-016c3719ed2d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021569231s May 28 21:49:17.872: INFO: Pod "pod-secrets-d441356d-8b88-45a4-ae67-016c3719ed2d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.025452895s STEP: Saw pod success May 28 21:49:17.872: INFO: Pod "pod-secrets-d441356d-8b88-45a4-ae67-016c3719ed2d" satisfied condition "success or failure" May 28 21:49:17.874: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-d441356d-8b88-45a4-ae67-016c3719ed2d container secret-volume-test: STEP: delete the pod May 28 21:49:17.926: INFO: Waiting for pod pod-secrets-d441356d-8b88-45a4-ae67-016c3719ed2d to disappear May 28 21:49:17.930: INFO: Pod pod-secrets-d441356d-8b88-45a4-ae67-016c3719ed2d no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:49:17.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4513" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2136,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:49:17.936: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 28 21:49:17.992: INFO: Waiting up to 5m0s for pod "pod-e17a5ae7-e9e5-41d2-b430-c570efd01e83" in namespace "emptydir-3919" to be "success or failure" May 28 21:49:18.005: INFO: Pod "pod-e17a5ae7-e9e5-41d2-b430-c570efd01e83": Phase="Pending", Reason="", readiness=false. Elapsed: 12.924114ms May 28 21:49:20.208: INFO: Pod "pod-e17a5ae7-e9e5-41d2-b430-c570efd01e83": Phase="Pending", Reason="", readiness=false. Elapsed: 2.21545673s May 28 21:49:22.212: INFO: Pod "pod-e17a5ae7-e9e5-41d2-b430-c570efd01e83": Phase="Running", Reason="", readiness=true. Elapsed: 4.220019177s May 28 21:49:24.216: INFO: Pod "pod-e17a5ae7-e9e5-41d2-b430-c570efd01e83": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.223818571s STEP: Saw pod success May 28 21:49:24.216: INFO: Pod "pod-e17a5ae7-e9e5-41d2-b430-c570efd01e83" satisfied condition "success or failure" May 28 21:49:24.220: INFO: Trying to get logs from node jerma-worker2 pod pod-e17a5ae7-e9e5-41d2-b430-c570efd01e83 container test-container: STEP: delete the pod May 28 21:49:24.243: INFO: Waiting for pod pod-e17a5ae7-e9e5-41d2-b430-c570efd01e83 to disappear May 28 21:49:24.247: INFO: Pod pod-e17a5ae7-e9e5-41d2-b430-c570efd01e83 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:49:24.247: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3919" for this suite. 
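The tuple in the emptyDir test name reads as (user, file mode, medium): run as root, expect a file with mode 0644, on the default disk-backed emptyDir medium (as opposed to Medium "Memory"). A sketch of the pod shape, with an illustrative busybox command standing in for the suite's mounttest image:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "emptydir-0644-default"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "test-volume",
				VolumeSource: corev1.VolumeSource{
					// StorageMediumDefault = backed by the node's disk.
					EmptyDir: &corev1.EmptyDirVolumeSource{Medium: corev1.StorageMediumDefault},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "test-container",
				Image:        "busybox",
				Command:      []string{"sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []corev1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
	fmt.Printf("pod %s mounts volume %s\n", pod.Name, pod.Spec.Volumes[0].Name)
}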
• [SLOW TEST:6.356 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2143,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:49:24.293: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 28 21:49:24.352: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 28 21:49:35.706: INFO: >>> kubeConfig: /root/.kube/config May 28 21:49:38.666: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:49:49.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-263" for this suite. 
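"Multiple CRDs of same group but different versions" is exercised twice above: once as one CRD carrying two versions, once as two separate CRDs. A sketch of the one-multiversion-CRD shape using the apiextensions v1 API (group, kind, and schema here are illustrative); both served versions must then appear in the kube-apiserver's published OpenAPI document, which is what the test asserts:

package main

import (
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	schema := &apiextv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
	}
	// One CRD, two served versions; exactly one version is the storage version.
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "widgets.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "widgets", Singular: "widget", Kind: "Widget", ListKind: "WidgetList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{
				{Name: "v1alpha1", Served: true, Storage: false, Schema: schema},
				{Name: "v1", Served: true, Storage: true, Schema: schema},
			},
		},
	}
	fmt.Printf("crd %s serves %d versions\n", crd.Name, len(crd.Spec.Versions))
}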
• [SLOW TEST:24.807 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":133,"skipped":2156,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:49:49.101: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9398 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 28 21:49:49.248: INFO: Found 0 stateful pods, waiting for 3 May 28 21:49:59.252: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 28 21:49:59.252: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 28 21:49:59.252: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 28 21:50:09.252: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 28 21:50:09.252: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 28 21:50:09.252: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 28 21:50:09.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9398 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 28 21:50:12.270: INFO: stderr: "I0528 21:50:12.165350 2147 log.go:172] (0xc0008ecbb0) (0xc00072de00) Create stream\nI0528 21:50:12.165393 2147 log.go:172] (0xc0008ecbb0) (0xc00072de00) Stream added, broadcasting: 1\nI0528 21:50:12.168015 2147 log.go:172] (0xc0008ecbb0) Reply frame received for 1\nI0528 21:50:12.168067 2147 log.go:172] (0xc0008ecbb0) (0xc0006c25a0) Create stream\nI0528 21:50:12.168082 2147 log.go:172] (0xc0008ecbb0) (0xc0006c25a0) Stream added, broadcasting: 3\nI0528 21:50:12.169307 2147 log.go:172] (0xc0008ecbb0) Reply frame received for 3\nI0528 21:50:12.169361 2147 log.go:172] (0xc0008ecbb0) (0xc00071b360) Create stream\nI0528 21:50:12.169380 2147 log.go:172] (0xc0008ecbb0) (0xc00071b360) Stream added, 
broadcasting: 5\nI0528 21:50:12.170476 2147 log.go:172] (0xc0008ecbb0) Reply frame received for 5\nI0528 21:50:12.246168 2147 log.go:172] (0xc0008ecbb0) Data frame received for 5\nI0528 21:50:12.246197 2147 log.go:172] (0xc00071b360) (5) Data frame handling\nI0528 21:50:12.246216 2147 log.go:172] (0xc00071b360) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0528 21:50:12.262090 2147 log.go:172] (0xc0008ecbb0) Data frame received for 3\nI0528 21:50:12.262197 2147 log.go:172] (0xc0006c25a0) (3) Data frame handling\nI0528 21:50:12.262232 2147 log.go:172] (0xc0006c25a0) (3) Data frame sent\nI0528 21:50:12.262247 2147 log.go:172] (0xc0008ecbb0) Data frame received for 3\nI0528 21:50:12.262257 2147 log.go:172] (0xc0006c25a0) (3) Data frame handling\nI0528 21:50:12.262409 2147 log.go:172] (0xc0008ecbb0) Data frame received for 5\nI0528 21:50:12.262452 2147 log.go:172] (0xc00071b360) (5) Data frame handling\nI0528 21:50:12.264462 2147 log.go:172] (0xc0008ecbb0) Data frame received for 1\nI0528 21:50:12.264501 2147 log.go:172] (0xc00072de00) (1) Data frame handling\nI0528 21:50:12.264523 2147 log.go:172] (0xc00072de00) (1) Data frame sent\nI0528 21:50:12.264549 2147 log.go:172] (0xc0008ecbb0) (0xc00072de00) Stream removed, broadcasting: 1\nI0528 21:50:12.264590 2147 log.go:172] (0xc0008ecbb0) Go away received\nI0528 21:50:12.264841 2147 log.go:172] (0xc0008ecbb0) (0xc00072de00) Stream removed, broadcasting: 1\nI0528 21:50:12.264856 2147 log.go:172] (0xc0008ecbb0) (0xc0006c25a0) Stream removed, broadcasting: 3\nI0528 21:50:12.264863 2147 log.go:172] (0xc0008ecbb0) (0xc00071b360) Stream removed, broadcasting: 5\n" May 28 21:50:12.270: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 28 21:50:12.270: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 28 21:50:22.302: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 28 21:50:32.378: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9398 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 28 21:50:32.579: INFO: stderr: "I0528 21:50:32.492134 2177 log.go:172] (0xc000a4a0b0) (0xc000643e00) Create stream\nI0528 21:50:32.492183 2177 log.go:172] (0xc000a4a0b0) (0xc000643e00) Stream added, broadcasting: 1\nI0528 21:50:32.494177 2177 log.go:172] (0xc000a4a0b0) Reply frame received for 1\nI0528 21:50:32.494215 2177 log.go:172] (0xc000a4a0b0) (0xc000710a00) Create stream\nI0528 21:50:32.494232 2177 log.go:172] (0xc000a4a0b0) (0xc000710a00) Stream added, broadcasting: 3\nI0528 21:50:32.494891 2177 log.go:172] (0xc000a4a0b0) Reply frame received for 3\nI0528 21:50:32.494927 2177 log.go:172] (0xc000a4a0b0) (0xc000710aa0) Create stream\nI0528 21:50:32.494938 2177 log.go:172] (0xc000a4a0b0) (0xc000710aa0) Stream added, broadcasting: 5\nI0528 21:50:32.495580 2177 log.go:172] (0xc000a4a0b0) Reply frame received for 5\nI0528 21:50:32.572956 2177 log.go:172] (0xc000a4a0b0) Data frame received for 3\nI0528 21:50:32.572987 2177 log.go:172] (0xc000710a00) (3) Data frame handling\nI0528 21:50:32.572999 2177 log.go:172] (0xc000710a00) (3) Data frame sent\nI0528 21:50:32.573303 2177 log.go:172] (0xc000a4a0b0) Data frame received for 3\nI0528 
21:50:32.573332 2177 log.go:172] (0xc000710a00) (3) Data frame handling\nI0528 21:50:32.573353 2177 log.go:172] (0xc000a4a0b0) Data frame received for 5\nI0528 21:50:32.573386 2177 log.go:172] (0xc000710aa0) (5) Data frame handling\nI0528 21:50:32.573408 2177 log.go:172] (0xc000710aa0) (5) Data frame sent\nI0528 21:50:32.573418 2177 log.go:172] (0xc000a4a0b0) Data frame received for 5\nI0528 21:50:32.573426 2177 log.go:172] (0xc000710aa0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0528 21:50:32.574789 2177 log.go:172] (0xc000a4a0b0) Data frame received for 1\nI0528 21:50:32.574814 2177 log.go:172] (0xc000643e00) (1) Data frame handling\nI0528 21:50:32.574829 2177 log.go:172] (0xc000643e00) (1) Data frame sent\nI0528 21:50:32.574848 2177 log.go:172] (0xc000a4a0b0) (0xc000643e00) Stream removed, broadcasting: 1\nI0528 21:50:32.574866 2177 log.go:172] (0xc000a4a0b0) Go away received\nI0528 21:50:32.575286 2177 log.go:172] (0xc000a4a0b0) (0xc000643e00) Stream removed, broadcasting: 1\nI0528 21:50:32.575306 2177 log.go:172] (0xc000a4a0b0) (0xc000710a00) Stream removed, broadcasting: 3\nI0528 21:50:32.575316 2177 log.go:172] (0xc000a4a0b0) (0xc000710aa0) Stream removed, broadcasting: 5\n" May 28 21:50:32.579: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 28 21:50:32.579: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 28 21:50:42.635: INFO: Waiting for StatefulSet statefulset-9398/ss2 to complete update May 28 21:50:42.635: INFO: Waiting for Pod statefulset-9398/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 28 21:50:42.635: INFO: Waiting for Pod statefulset-9398/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 28 21:50:52.641: INFO: Waiting for StatefulSet statefulset-9398/ss2 to complete update STEP: Rolling back to a previous revision May 28 21:51:02.643: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9398 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 28 21:51:02.912: INFO: stderr: "I0528 21:51:02.767217 2197 log.go:172] (0xc000bc0fd0) (0xc000a6a460) Create stream\nI0528 21:51:02.767270 2197 log.go:172] (0xc000bc0fd0) (0xc000a6a460) Stream added, broadcasting: 1\nI0528 21:51:02.771728 2197 log.go:172] (0xc000bc0fd0) Reply frame received for 1\nI0528 21:51:02.771762 2197 log.go:172] (0xc000bc0fd0) (0xc0006b3cc0) Create stream\nI0528 21:51:02.771773 2197 log.go:172] (0xc000bc0fd0) (0xc0006b3cc0) Stream added, broadcasting: 3\nI0528 21:51:02.772755 2197 log.go:172] (0xc000bc0fd0) Reply frame received for 3\nI0528 21:51:02.772783 2197 log.go:172] (0xc000bc0fd0) (0xc0005d28c0) Create stream\nI0528 21:51:02.772792 2197 log.go:172] (0xc000bc0fd0) (0xc0005d28c0) Stream added, broadcasting: 5\nI0528 21:51:02.773616 2197 log.go:172] (0xc000bc0fd0) Reply frame received for 5\nI0528 21:51:02.866803 2197 log.go:172] (0xc000bc0fd0) Data frame received for 5\nI0528 21:51:02.866836 2197 log.go:172] (0xc0005d28c0) (5) Data frame handling\nI0528 21:51:02.866863 2197 log.go:172] (0xc0005d28c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0528 21:51:02.904412 2197 log.go:172] (0xc000bc0fd0) Data frame received for 3\nI0528 21:51:02.904456 2197 log.go:172] (0xc0006b3cc0) (3) Data frame handling\nI0528 21:51:02.904489 2197 log.go:172] (0xc0006b3cc0) (3) Data frame sent\nI0528 
21:51:02.904741 2197 log.go:172] (0xc000bc0fd0) Data frame received for 5\nI0528 21:51:02.904783 2197 log.go:172] (0xc0005d28c0) (5) Data frame handling\nI0528 21:51:02.904838 2197 log.go:172] (0xc000bc0fd0) Data frame received for 3\nI0528 21:51:02.904878 2197 log.go:172] (0xc0006b3cc0) (3) Data frame handling\nI0528 21:51:02.906723 2197 log.go:172] (0xc000bc0fd0) Data frame received for 1\nI0528 21:51:02.906738 2197 log.go:172] (0xc000a6a460) (1) Data frame handling\nI0528 21:51:02.906744 2197 log.go:172] (0xc000a6a460) (1) Data frame sent\nI0528 21:51:02.906752 2197 log.go:172] (0xc000bc0fd0) (0xc000a6a460) Stream removed, broadcasting: 1\nI0528 21:51:02.906945 2197 log.go:172] (0xc000bc0fd0) Go away received\nI0528 21:51:02.907107 2197 log.go:172] (0xc000bc0fd0) (0xc000a6a460) Stream removed, broadcasting: 1\nI0528 21:51:02.907150 2197 log.go:172] (0xc000bc0fd0) (0xc0006b3cc0) Stream removed, broadcasting: 3\nI0528 21:51:02.907166 2197 log.go:172] (0xc000bc0fd0) (0xc0005d28c0) Stream removed, broadcasting: 5\n" May 28 21:51:02.912: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 28 21:51:02.912: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 28 21:51:12.944: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 28 21:51:23.017: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-9398 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 28 21:51:23.228: INFO: stderr: "I0528 21:51:23.155295 2218 log.go:172] (0xc0009ae6e0) (0xc000946140) Create stream\nI0528 21:51:23.155344 2218 log.go:172] (0xc0009ae6e0) (0xc000946140) Stream added, broadcasting: 1\nI0528 21:51:23.157334 2218 log.go:172] (0xc0009ae6e0) Reply frame received for 1\nI0528 21:51:23.157369 2218 log.go:172] (0xc0009ae6e0) (0xc000679ae0) Create stream\nI0528 21:51:23.157377 2218 log.go:172] (0xc0009ae6e0) (0xc000679ae0) Stream added, broadcasting: 3\nI0528 21:51:23.157985 2218 log.go:172] (0xc0009ae6e0) Reply frame received for 3\nI0528 21:51:23.158001 2218 log.go:172] (0xc0009ae6e0) (0xc000946280) Create stream\nI0528 21:51:23.158007 2218 log.go:172] (0xc0009ae6e0) (0xc000946280) Stream added, broadcasting: 5\nI0528 21:51:23.158591 2218 log.go:172] (0xc0009ae6e0) Reply frame received for 5\nI0528 21:51:23.222034 2218 log.go:172] (0xc0009ae6e0) Data frame received for 5\nI0528 21:51:23.222075 2218 log.go:172] (0xc000946280) (5) Data frame handling\nI0528 21:51:23.222088 2218 log.go:172] (0xc000946280) (5) Data frame sent\nI0528 21:51:23.222097 2218 log.go:172] (0xc0009ae6e0) Data frame received for 5\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0528 21:51:23.222104 2218 log.go:172] (0xc000946280) (5) Data frame handling\nI0528 21:51:23.222135 2218 log.go:172] (0xc0009ae6e0) Data frame received for 3\nI0528 21:51:23.222155 2218 log.go:172] (0xc000679ae0) (3) Data frame handling\nI0528 21:51:23.222170 2218 log.go:172] (0xc000679ae0) (3) Data frame sent\nI0528 21:51:23.222180 2218 log.go:172] (0xc0009ae6e0) Data frame received for 3\nI0528 21:51:23.222198 2218 log.go:172] (0xc000679ae0) (3) Data frame handling\nI0528 21:51:23.223223 2218 log.go:172] (0xc0009ae6e0) Data frame received for 1\nI0528 21:51:23.223235 2218 log.go:172] (0xc000946140) (1) Data frame handling\nI0528 21:51:23.223242 2218 log.go:172] (0xc000946140) (1) Data frame sent\nI0528 21:51:23.223252 2218 log.go:172] 
(0xc0009ae6e0) (0xc000946140) Stream removed, broadcasting: 1\nI0528 21:51:23.223341 2218 log.go:172] (0xc0009ae6e0) Go away received\nI0528 21:51:23.223524 2218 log.go:172] (0xc0009ae6e0) (0xc000946140) Stream removed, broadcasting: 1\nI0528 21:51:23.223536 2218 log.go:172] (0xc0009ae6e0) (0xc000679ae0) Stream removed, broadcasting: 3\nI0528 21:51:23.223541 2218 log.go:172] (0xc0009ae6e0) (0xc000946280) Stream removed, broadcasting: 5\n" May 28 21:51:23.228: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 28 21:51:23.228: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 28 21:51:43.273: INFO: Waiting for StatefulSet statefulset-9398/ss2 to complete update May 28 21:51:43.273: INFO: Waiting for Pod statefulset-9398/ss2-0 to have revision ss2-65c7964b94 update revision ss2-84f9d6bf57 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 28 21:51:53.280: INFO: Deleting all statefulset in ns statefulset-9398 May 28 21:51:53.284: INFO: Scaling statefulset ss2 to 0 May 28 21:52:13.318: INFO: Waiting for statefulset status.replicas updated to 0 May 28 21:52:13.319: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:52:13.332: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9398" for this suite. • [SLOW TEST:144.239 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":134,"skipped":2167,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:52:13.340: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 28 21:52:13.459: INFO: Got : MODIFIED 
&ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6659 /api/v1/namespaces/watch-6659/configmaps/e2e-watch-test-resource-version 28fe6cc5-7099-411f-9f3e-8d99b4d1b603 19906582 0 2020-05-28 21:52:13 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 28 21:52:13.460: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-6659 /api/v1/namespaces/watch-6659/configmaps/e2e-watch-test-resource-version 28fe6cc5-7099-411f-9f3e-8d99b4d1b603 19906583 0 2020-05-28 21:52:13 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:52:13.460: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6659" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":135,"skipped":2176,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:52:13.467: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 28 21:52:23.555: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5955 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 21:52:23.555: INFO: >>> kubeConfig: /root/.kube/config I0528 21:52:23.593330 6 log.go:172] (0xc0049e8630) (0xc001657680) Create stream I0528 21:52:23.593371 6 log.go:172] (0xc0049e8630) (0xc001657680) Stream added, broadcasting: 1 I0528 21:52:23.595292 6 log.go:172] (0xc0049e8630) Reply frame received for 1 I0528 21:52:23.595343 6 log.go:172] (0xc0049e8630) (0xc0014981e0) Create stream I0528 21:52:23.595365 6 log.go:172] (0xc0049e8630) (0xc0014981e0) Stream added, broadcasting: 3 I0528 21:52:23.596445 6 log.go:172] (0xc0049e8630) Reply frame received for 3 I0528 21:52:23.596500 6 log.go:172] (0xc0049e8630) (0xc0022fbf40) Create stream I0528 21:52:23.596518 6 log.go:172] (0xc0049e8630) (0xc0022fbf40) Stream added, broadcasting: 5 I0528 21:52:23.598005 6 log.go:172] (0xc0049e8630) Reply frame received for 5 I0528 21:52:23.686770 6 log.go:172] (0xc0049e8630) Data frame received for 3 I0528 21:52:23.686820 6 log.go:172] (0xc0014981e0) (3) Data frame handling I0528 21:52:23.686835 6 log.go:172] (0xc0014981e0) (3) Data frame sent I0528 
21:52:23.686852 6 log.go:172] (0xc0049e8630) Data frame received for 3 I0528 21:52:23.686879 6 log.go:172] (0xc0049e8630) Data frame received for 5 I0528 21:52:23.686920 6 log.go:172] (0xc0022fbf40) (5) Data frame handling I0528 21:52:23.686953 6 log.go:172] (0xc0014981e0) (3) Data frame handling I0528 21:52:23.688331 6 log.go:172] (0xc0049e8630) Data frame received for 1 I0528 21:52:23.688360 6 log.go:172] (0xc001657680) (1) Data frame handling I0528 21:52:23.688384 6 log.go:172] (0xc001657680) (1) Data frame sent I0528 21:52:23.688495 6 log.go:172] (0xc0049e8630) (0xc001657680) Stream removed, broadcasting: 1 I0528 21:52:23.688602 6 log.go:172] (0xc0049e8630) Go away received I0528 21:52:23.688652 6 log.go:172] (0xc0049e8630) (0xc001657680) Stream removed, broadcasting: 1 I0528 21:52:23.688684 6 log.go:172] (0xc0049e8630) (0xc0014981e0) Stream removed, broadcasting: 3 I0528 21:52:23.688701 6 log.go:172] (0xc0049e8630) (0xc0022fbf40) Stream removed, broadcasting: 5 May 28 21:52:23.688: INFO: Exec stderr: "" May 28 21:52:23.688: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5955 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 21:52:23.688: INFO: >>> kubeConfig: /root/.kube/config I0528 21:52:23.716129 6 log.go:172] (0xc0029ba160) (0xc001a8cf00) Create stream I0528 21:52:23.716166 6 log.go:172] (0xc0029ba160) (0xc001a8cf00) Stream added, broadcasting: 1 I0528 21:52:23.718183 6 log.go:172] (0xc0029ba160) Reply frame received for 1 I0528 21:52:23.718223 6 log.go:172] (0xc0029ba160) (0xc001a8d040) Create stream I0528 21:52:23.718237 6 log.go:172] (0xc0029ba160) (0xc001a8d040) Stream added, broadcasting: 3 I0528 21:52:23.719157 6 log.go:172] (0xc0029ba160) Reply frame received for 3 I0528 21:52:23.719202 6 log.go:172] (0xc0029ba160) (0xc0014985a0) Create stream I0528 21:52:23.719232 6 log.go:172] (0xc0029ba160) (0xc0014985a0) Stream added, broadcasting: 5 I0528 21:52:23.720226 6 log.go:172] (0xc0029ba160) Reply frame received for 5 I0528 21:52:23.782546 6 log.go:172] (0xc0029ba160) Data frame received for 5 I0528 21:52:23.782609 6 log.go:172] (0xc0029ba160) Data frame received for 3 I0528 21:52:23.782664 6 log.go:172] (0xc001a8d040) (3) Data frame handling I0528 21:52:23.782696 6 log.go:172] (0xc001a8d040) (3) Data frame sent I0528 21:52:23.782715 6 log.go:172] (0xc0029ba160) Data frame received for 3 I0528 21:52:23.782730 6 log.go:172] (0xc001a8d040) (3) Data frame handling I0528 21:52:23.782756 6 log.go:172] (0xc0014985a0) (5) Data frame handling I0528 21:52:23.784182 6 log.go:172] (0xc0029ba160) Data frame received for 1 I0528 21:52:23.784215 6 log.go:172] (0xc001a8cf00) (1) Data frame handling I0528 21:52:23.784241 6 log.go:172] (0xc001a8cf00) (1) Data frame sent I0528 21:52:23.784266 6 log.go:172] (0xc0029ba160) (0xc001a8cf00) Stream removed, broadcasting: 1 I0528 21:52:23.784296 6 log.go:172] (0xc0029ba160) Go away received I0528 21:52:23.784419 6 log.go:172] (0xc0029ba160) (0xc001a8cf00) Stream removed, broadcasting: 1 I0528 21:52:23.784442 6 log.go:172] (0xc0029ba160) (0xc001a8d040) Stream removed, broadcasting: 3 I0528 21:52:23.784458 6 log.go:172] (0xc0029ba160) (0xc0014985a0) Stream removed, broadcasting: 5 May 28 21:52:23.784: INFO: Exec stderr: "" May 28 21:52:23.784: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5955 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 
21:52:23.784: INFO: >>> kubeConfig: /root/.kube/config I0528 21:52:23.823738 6 log.go:172] (0xc002208420) (0xc001a8d4a0) Create stream I0528 21:52:23.823769 6 log.go:172] (0xc002208420) (0xc001a8d4a0) Stream added, broadcasting: 1 I0528 21:52:23.825391 6 log.go:172] (0xc002208420) Reply frame received for 1 I0528 21:52:23.825411 6 log.go:172] (0xc002208420) (0xc001a8d7c0) Create stream I0528 21:52:23.825421 6 log.go:172] (0xc002208420) (0xc001a8d7c0) Stream added, broadcasting: 3 I0528 21:52:23.826041 6 log.go:172] (0xc002208420) Reply frame received for 3 I0528 21:52:23.826083 6 log.go:172] (0xc002208420) (0xc0029b2fa0) Create stream I0528 21:52:23.826098 6 log.go:172] (0xc002208420) (0xc0029b2fa0) Stream added, broadcasting: 5 I0528 21:52:23.826776 6 log.go:172] (0xc002208420) Reply frame received for 5 I0528 21:52:23.896914 6 log.go:172] (0xc002208420) Data frame received for 5 I0528 21:52:23.896939 6 log.go:172] (0xc0029b2fa0) (5) Data frame handling I0528 21:52:23.896962 6 log.go:172] (0xc002208420) Data frame received for 3 I0528 21:52:23.896996 6 log.go:172] (0xc001a8d7c0) (3) Data frame handling I0528 21:52:23.897029 6 log.go:172] (0xc001a8d7c0) (3) Data frame sent I0528 21:52:23.897409 6 log.go:172] (0xc002208420) Data frame received for 3 I0528 21:52:23.897495 6 log.go:172] (0xc001a8d7c0) (3) Data frame handling I0528 21:52:23.899192 6 log.go:172] (0xc002208420) Data frame received for 1 I0528 21:52:23.899239 6 log.go:172] (0xc001a8d4a0) (1) Data frame handling I0528 21:52:23.899274 6 log.go:172] (0xc001a8d4a0) (1) Data frame sent I0528 21:52:23.899320 6 log.go:172] (0xc002208420) (0xc001a8d4a0) Stream removed, broadcasting: 1 I0528 21:52:23.899353 6 log.go:172] (0xc002208420) Go away received I0528 21:52:23.899394 6 log.go:172] (0xc002208420) (0xc001a8d4a0) Stream removed, broadcasting: 1 I0528 21:52:23.899414 6 log.go:172] (0xc002208420) (0xc001a8d7c0) Stream removed, broadcasting: 3 I0528 21:52:23.899420 6 log.go:172] (0xc002208420) (0xc0029b2fa0) Stream removed, broadcasting: 5 May 28 21:52:23.899: INFO: Exec stderr: "" May 28 21:52:23.899: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5955 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 21:52:23.899: INFO: >>> kubeConfig: /root/.kube/config I0528 21:52:23.933936 6 log.go:172] (0xc002507ef0) (0xc0029b3220) Create stream I0528 21:52:23.933962 6 log.go:172] (0xc002507ef0) (0xc0029b3220) Stream added, broadcasting: 1 I0528 21:52:23.936127 6 log.go:172] (0xc002507ef0) Reply frame received for 1 I0528 21:52:23.936169 6 log.go:172] (0xc002507ef0) (0xc001a8d860) Create stream I0528 21:52:23.936184 6 log.go:172] (0xc002507ef0) (0xc001a8d860) Stream added, broadcasting: 3 I0528 21:52:23.937549 6 log.go:172] (0xc002507ef0) Reply frame received for 3 I0528 21:52:23.937601 6 log.go:172] (0xc002507ef0) (0xc001657720) Create stream I0528 21:52:23.937625 6 log.go:172] (0xc002507ef0) (0xc001657720) Stream added, broadcasting: 5 I0528 21:52:23.938573 6 log.go:172] (0xc002507ef0) Reply frame received for 5 I0528 21:52:23.997014 6 log.go:172] (0xc002507ef0) Data frame received for 5 I0528 21:52:23.997351 6 log.go:172] (0xc001657720) (5) Data frame handling I0528 21:52:23.997403 6 log.go:172] (0xc002507ef0) Data frame received for 3 I0528 21:52:23.997431 6 log.go:172] (0xc001a8d860) (3) Data frame handling I0528 21:52:23.997459 6 log.go:172] (0xc001a8d860) (3) Data frame sent I0528 21:52:23.997479 6 log.go:172] (0xc002507ef0) Data 
frame received for 3 I0528 21:52:23.997497 6 log.go:172] (0xc001a8d860) (3) Data frame handling I0528 21:52:23.999401 6 log.go:172] (0xc002507ef0) Data frame received for 1 I0528 21:52:23.999429 6 log.go:172] (0xc0029b3220) (1) Data frame handling I0528 21:52:23.999454 6 log.go:172] (0xc0029b3220) (1) Data frame sent I0528 21:52:23.999470 6 log.go:172] (0xc002507ef0) (0xc0029b3220) Stream removed, broadcasting: 1 I0528 21:52:23.999486 6 log.go:172] (0xc002507ef0) Go away received I0528 21:52:23.999602 6 log.go:172] (0xc002507ef0) (0xc0029b3220) Stream removed, broadcasting: 1 I0528 21:52:23.999617 6 log.go:172] (0xc002507ef0) (0xc001a8d860) Stream removed, broadcasting: 3 I0528 21:52:23.999624 6 log.go:172] (0xc002507ef0) (0xc001657720) Stream removed, broadcasting: 5 May 28 21:52:23.999: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 28 21:52:23.999: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5955 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 21:52:23.999: INFO: >>> kubeConfig: /root/.kube/config I0528 21:52:24.034712 6 log.go:172] (0xc0049e8c60) (0xc0016579a0) Create stream I0528 21:52:24.034757 6 log.go:172] (0xc0049e8c60) (0xc0016579a0) Stream added, broadcasting: 1 I0528 21:52:24.037672 6 log.go:172] (0xc0049e8c60) Reply frame received for 1 I0528 21:52:24.037718 6 log.go:172] (0xc0049e8c60) (0xc002288000) Create stream I0528 21:52:24.037739 6 log.go:172] (0xc0049e8c60) (0xc002288000) Stream added, broadcasting: 3 I0528 21:52:24.038829 6 log.go:172] (0xc0049e8c60) Reply frame received for 3 I0528 21:52:24.038855 6 log.go:172] (0xc0049e8c60) (0xc0029b3360) Create stream I0528 21:52:24.038864 6 log.go:172] (0xc0049e8c60) (0xc0029b3360) Stream added, broadcasting: 5 I0528 21:52:24.039743 6 log.go:172] (0xc0049e8c60) Reply frame received for 5 I0528 21:52:24.098142 6 log.go:172] (0xc0049e8c60) Data frame received for 5 I0528 21:52:24.098189 6 log.go:172] (0xc0029b3360) (5) Data frame handling I0528 21:52:24.098215 6 log.go:172] (0xc0049e8c60) Data frame received for 3 I0528 21:52:24.098228 6 log.go:172] (0xc002288000) (3) Data frame handling I0528 21:52:24.098241 6 log.go:172] (0xc002288000) (3) Data frame sent I0528 21:52:24.098253 6 log.go:172] (0xc0049e8c60) Data frame received for 3 I0528 21:52:24.098264 6 log.go:172] (0xc002288000) (3) Data frame handling I0528 21:52:24.099396 6 log.go:172] (0xc0049e8c60) Data frame received for 1 I0528 21:52:24.099410 6 log.go:172] (0xc0016579a0) (1) Data frame handling I0528 21:52:24.099420 6 log.go:172] (0xc0016579a0) (1) Data frame sent I0528 21:52:24.099439 6 log.go:172] (0xc0049e8c60) (0xc0016579a0) Stream removed, broadcasting: 1 I0528 21:52:24.099495 6 log.go:172] (0xc0049e8c60) Go away received I0528 21:52:24.099520 6 log.go:172] (0xc0049e8c60) (0xc0016579a0) Stream removed, broadcasting: 1 I0528 21:52:24.099534 6 log.go:172] (0xc0049e8c60) (0xc002288000) Stream removed, broadcasting: 3 I0528 21:52:24.099547 6 log.go:172] (0xc0049e8c60) (0xc0029b3360) Stream removed, broadcasting: 5 May 28 21:52:24.099: INFO: Exec stderr: "" May 28 21:52:24.099: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5955 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 21:52:24.099: INFO: >>> kubeConfig: /root/.kube/config I0528 21:52:24.126822 6 log.go:172] 
(0xc001510840) (0xc0022885a0) Create stream I0528 21:52:24.126858 6 log.go:172] (0xc001510840) (0xc0022885a0) Stream added, broadcasting: 1 I0528 21:52:24.129695 6 log.go:172] (0xc001510840) Reply frame received for 1 I0528 21:52:24.129725 6 log.go:172] (0xc001510840) (0xc0029b34a0) Create stream I0528 21:52:24.129736 6 log.go:172] (0xc001510840) (0xc0029b34a0) Stream added, broadcasting: 3 I0528 21:52:24.130662 6 log.go:172] (0xc001510840) Reply frame received for 3 I0528 21:52:24.130690 6 log.go:172] (0xc001510840) (0xc002288820) Create stream I0528 21:52:24.130697 6 log.go:172] (0xc001510840) (0xc002288820) Stream added, broadcasting: 5 I0528 21:52:24.131564 6 log.go:172] (0xc001510840) Reply frame received for 5 I0528 21:52:24.209952 6 log.go:172] (0xc001510840) Data frame received for 5 I0528 21:52:24.209988 6 log.go:172] (0xc002288820) (5) Data frame handling I0528 21:52:24.210010 6 log.go:172] (0xc001510840) Data frame received for 3 I0528 21:52:24.210023 6 log.go:172] (0xc0029b34a0) (3) Data frame handling I0528 21:52:24.210034 6 log.go:172] (0xc0029b34a0) (3) Data frame sent I0528 21:52:24.210103 6 log.go:172] (0xc001510840) Data frame received for 3 I0528 21:52:24.210120 6 log.go:172] (0xc0029b34a0) (3) Data frame handling I0528 21:52:24.211738 6 log.go:172] (0xc001510840) Data frame received for 1 I0528 21:52:24.211760 6 log.go:172] (0xc0022885a0) (1) Data frame handling I0528 21:52:24.211876 6 log.go:172] (0xc0022885a0) (1) Data frame sent I0528 21:52:24.211891 6 log.go:172] (0xc001510840) (0xc0022885a0) Stream removed, broadcasting: 1 I0528 21:52:24.211906 6 log.go:172] (0xc001510840) Go away received I0528 21:52:24.212015 6 log.go:172] (0xc001510840) (0xc0022885a0) Stream removed, broadcasting: 1 I0528 21:52:24.212051 6 log.go:172] (0xc001510840) (0xc0029b34a0) Stream removed, broadcasting: 3 I0528 21:52:24.212072 6 log.go:172] (0xc001510840) (0xc002288820) Stream removed, broadcasting: 5 May 28 21:52:24.212: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 28 21:52:24.212: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5955 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 21:52:24.212: INFO: >>> kubeConfig: /root/.kube/config I0528 21:52:24.244335 6 log.go:172] (0xc0009dc580) (0xc0029b3680) Create stream I0528 21:52:24.244363 6 log.go:172] (0xc0009dc580) (0xc0029b3680) Stream added, broadcasting: 1 I0528 21:52:24.246472 6 log.go:172] (0xc0009dc580) Reply frame received for 1 I0528 21:52:24.246503 6 log.go:172] (0xc0009dc580) (0xc001657ae0) Create stream I0528 21:52:24.246509 6 log.go:172] (0xc0009dc580) (0xc001657ae0) Stream added, broadcasting: 3 I0528 21:52:24.247831 6 log.go:172] (0xc0009dc580) Reply frame received for 3 I0528 21:52:24.247891 6 log.go:172] (0xc0009dc580) (0xc001a8dae0) Create stream I0528 21:52:24.247928 6 log.go:172] (0xc0009dc580) (0xc001a8dae0) Stream added, broadcasting: 5 I0528 21:52:24.249023 6 log.go:172] (0xc0009dc580) Reply frame received for 5 I0528 21:52:24.320042 6 log.go:172] (0xc0009dc580) Data frame received for 5 I0528 21:52:24.320150 6 log.go:172] (0xc001a8dae0) (5) Data frame handling I0528 21:52:24.320178 6 log.go:172] (0xc0009dc580) Data frame received for 3 I0528 21:52:24.320192 6 log.go:172] (0xc001657ae0) (3) Data frame handling I0528 21:52:24.320210 6 log.go:172] (0xc001657ae0) (3) Data frame sent I0528 21:52:24.320224 6 log.go:172] 
(0xc0009dc580) Data frame received for 3 I0528 21:52:24.320238 6 log.go:172] (0xc001657ae0) (3) Data frame handling I0528 21:52:24.322149 6 log.go:172] (0xc0009dc580) Data frame received for 1 I0528 21:52:24.322181 6 log.go:172] (0xc0029b3680) (1) Data frame handling I0528 21:52:24.322206 6 log.go:172] (0xc0029b3680) (1) Data frame sent I0528 21:52:24.322239 6 log.go:172] (0xc0009dc580) (0xc0029b3680) Stream removed, broadcasting: 1 I0528 21:52:24.322338 6 log.go:172] (0xc0009dc580) Go away received I0528 21:52:24.322370 6 log.go:172] (0xc0009dc580) (0xc0029b3680) Stream removed, broadcasting: 1 I0528 21:52:24.322400 6 log.go:172] (0xc0009dc580) (0xc001657ae0) Stream removed, broadcasting: 3 I0528 21:52:24.322421 6 log.go:172] (0xc0009dc580) (0xc001a8dae0) Stream removed, broadcasting: 5 May 28 21:52:24.322: INFO: Exec stderr: "" May 28 21:52:24.322: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5955 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 21:52:24.322: INFO: >>> kubeConfig: /root/.kube/config I0528 21:52:24.351075 6 log.go:172] (0xc0009dcbb0) (0xc0029b3860) Create stream I0528 21:52:24.351102 6 log.go:172] (0xc0009dcbb0) (0xc0029b3860) Stream added, broadcasting: 1 I0528 21:52:24.352738 6 log.go:172] (0xc0009dcbb0) Reply frame received for 1 I0528 21:52:24.352778 6 log.go:172] (0xc0009dcbb0) (0xc001a8dc20) Create stream I0528 21:52:24.352790 6 log.go:172] (0xc0009dcbb0) (0xc001a8dc20) Stream added, broadcasting: 3 I0528 21:52:24.353853 6 log.go:172] (0xc0009dcbb0) Reply frame received for 3 I0528 21:52:24.353905 6 log.go:172] (0xc0009dcbb0) (0xc0014986e0) Create stream I0528 21:52:24.353932 6 log.go:172] (0xc0009dcbb0) (0xc0014986e0) Stream added, broadcasting: 5 I0528 21:52:24.354592 6 log.go:172] (0xc0009dcbb0) Reply frame received for 5 I0528 21:52:24.406708 6 log.go:172] (0xc0009dcbb0) Data frame received for 5 I0528 21:52:24.406758 6 log.go:172] (0xc0014986e0) (5) Data frame handling I0528 21:52:24.406803 6 log.go:172] (0xc0009dcbb0) Data frame received for 3 I0528 21:52:24.406828 6 log.go:172] (0xc001a8dc20) (3) Data frame handling I0528 21:52:24.406858 6 log.go:172] (0xc001a8dc20) (3) Data frame sent I0528 21:52:24.406872 6 log.go:172] (0xc0009dcbb0) Data frame received for 3 I0528 21:52:24.406882 6 log.go:172] (0xc001a8dc20) (3) Data frame handling I0528 21:52:24.408295 6 log.go:172] (0xc0009dcbb0) Data frame received for 1 I0528 21:52:24.408308 6 log.go:172] (0xc0029b3860) (1) Data frame handling I0528 21:52:24.408321 6 log.go:172] (0xc0029b3860) (1) Data frame sent I0528 21:52:24.408334 6 log.go:172] (0xc0009dcbb0) (0xc0029b3860) Stream removed, broadcasting: 1 I0528 21:52:24.408499 6 log.go:172] (0xc0009dcbb0) (0xc0029b3860) Stream removed, broadcasting: 1 I0528 21:52:24.408543 6 log.go:172] (0xc0009dcbb0) Go away received I0528 21:52:24.408597 6 log.go:172] (0xc0009dcbb0) (0xc001a8dc20) Stream removed, broadcasting: 3 I0528 21:52:24.408640 6 log.go:172] (0xc0009dcbb0) (0xc0014986e0) Stream removed, broadcasting: 5 May 28 21:52:24.408: INFO: Exec stderr: "" May 28 21:52:24.408: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-5955 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 21:52:24.408: INFO: >>> kubeConfig: /root/.kube/config I0528 21:52:24.439855 6 log.go:172] (0xc0020204d0) (0xc001498aa0) Create stream I0528 21:52:24.439884 
6 log.go:172] (0xc0020204d0) (0xc001498aa0) Stream added, broadcasting: 1 I0528 21:52:24.441761 6 log.go:172] (0xc0020204d0) Reply frame received for 1 I0528 21:52:24.441804 6 log.go:172] (0xc0020204d0) (0xc001498b40) Create stream I0528 21:52:24.441818 6 log.go:172] (0xc0020204d0) (0xc001498b40) Stream added, broadcasting: 3 I0528 21:52:24.442519 6 log.go:172] (0xc0020204d0) Reply frame received for 3 I0528 21:52:24.442587 6 log.go:172] (0xc0020204d0) (0xc001657c20) Create stream I0528 21:52:24.442706 6 log.go:172] (0xc0020204d0) (0xc001657c20) Stream added, broadcasting: 5 I0528 21:52:24.443457 6 log.go:172] (0xc0020204d0) Reply frame received for 5 I0528 21:52:24.506159 6 log.go:172] (0xc0020204d0) Data frame received for 5 I0528 21:52:24.506194 6 log.go:172] (0xc001657c20) (5) Data frame handling I0528 21:52:24.506222 6 log.go:172] (0xc0020204d0) Data frame received for 3 I0528 21:52:24.506236 6 log.go:172] (0xc001498b40) (3) Data frame handling I0528 21:52:24.506250 6 log.go:172] (0xc001498b40) (3) Data frame sent I0528 21:52:24.506262 6 log.go:172] (0xc0020204d0) Data frame received for 3 I0528 21:52:24.506274 6 log.go:172] (0xc001498b40) (3) Data frame handling I0528 21:52:24.507769 6 log.go:172] (0xc0020204d0) Data frame received for 1 I0528 21:52:24.507793 6 log.go:172] (0xc001498aa0) (1) Data frame handling I0528 21:52:24.507812 6 log.go:172] (0xc001498aa0) (1) Data frame sent I0528 21:52:24.507834 6 log.go:172] (0xc0020204d0) (0xc001498aa0) Stream removed, broadcasting: 1 I0528 21:52:24.507895 6 log.go:172] (0xc0020204d0) Go away received I0528 21:52:24.507939 6 log.go:172] (0xc0020204d0) (0xc001498aa0) Stream removed, broadcasting: 1 I0528 21:52:24.507959 6 log.go:172] (0xc0020204d0) (0xc001498b40) Stream removed, broadcasting: 3 I0528 21:52:24.507975 6 log.go:172] (0xc0020204d0) (0xc001657c20) Stream removed, broadcasting: 5 May 28 21:52:24.507: INFO: Exec stderr: "" May 28 21:52:24.508: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-5955 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 21:52:24.508: INFO: >>> kubeConfig: /root/.kube/config I0528 21:52:24.539116 6 log.go:172] (0xc0049e91e0) (0xc0016ea0a0) Create stream I0528 21:52:24.539143 6 log.go:172] (0xc0049e91e0) (0xc0016ea0a0) Stream added, broadcasting: 1 I0528 21:52:24.541024 6 log.go:172] (0xc0049e91e0) Reply frame received for 1 I0528 21:52:24.541082 6 log.go:172] (0xc0049e91e0) (0xc001498e60) Create stream I0528 21:52:24.541105 6 log.go:172] (0xc0049e91e0) (0xc001498e60) Stream added, broadcasting: 3 I0528 21:52:24.542112 6 log.go:172] (0xc0049e91e0) Reply frame received for 3 I0528 21:52:24.542143 6 log.go:172] (0xc0049e91e0) (0xc0029b3900) Create stream I0528 21:52:24.542152 6 log.go:172] (0xc0049e91e0) (0xc0029b3900) Stream added, broadcasting: 5 I0528 21:52:24.542811 6 log.go:172] (0xc0049e91e0) Reply frame received for 5 I0528 21:52:24.611275 6 log.go:172] (0xc0049e91e0) Data frame received for 5 I0528 21:52:24.611328 6 log.go:172] (0xc0029b3900) (5) Data frame handling I0528 21:52:24.611364 6 log.go:172] (0xc0049e91e0) Data frame received for 3 I0528 21:52:24.611387 6 log.go:172] (0xc001498e60) (3) Data frame handling I0528 21:52:24.611401 6 log.go:172] (0xc001498e60) (3) Data frame sent I0528 21:52:24.611569 6 log.go:172] (0xc0049e91e0) Data frame received for 3 I0528 21:52:24.611604 6 log.go:172] (0xc001498e60) (3) Data frame handling I0528 21:52:24.613301 6 log.go:172] (0xc0049e91e0) 
Data frame received for 1 I0528 21:52:24.613325 6 log.go:172] (0xc0016ea0a0) (1) Data frame handling I0528 21:52:24.613337 6 log.go:172] (0xc0016ea0a0) (1) Data frame sent I0528 21:52:24.613352 6 log.go:172] (0xc0049e91e0) (0xc0016ea0a0) Stream removed, broadcasting: 1 I0528 21:52:24.613368 6 log.go:172] (0xc0049e91e0) Go away received I0528 21:52:24.613441 6 log.go:172] (0xc0049e91e0) (0xc0016ea0a0) Stream removed, broadcasting: 1 I0528 21:52:24.613456 6 log.go:172] (0xc0049e91e0) (0xc001498e60) Stream removed, broadcasting: 3 I0528 21:52:24.613464 6 log.go:172] (0xc0049e91e0) (0xc0029b3900) Stream removed, broadcasting: 5 May 28 21:52:24.613: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:52:24.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-5955" for this suite. • [SLOW TEST:11.155 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2215,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:52:24.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 28 21:52:24.702: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 28 21:52:24.834: INFO: stderr: "" May 28 21:52:24.834: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:52:24.834: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7088" for this suite. 
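------------------------------
The escape sequences in the stdout above (\x1b[0;32m, \x1b[0;33m, \x1b[0m) are ANSI color codes from kubectl's terminal output; underneath them, the check is simply that 'kubectl cluster-info' reports where the master is running. A minimal stand-alone sketch of the same validation in Go, assuming only a kubectl binary on the PATH (this is not the framework's own code):

package main

import (
    "fmt"
    "os/exec"
    "regexp"
    "strings"
)

// ansi strips the color escape sequences visible in the raw stdout above.
var ansi = regexp.MustCompile(`\x1b\[[0-9;]*m`)

func main() {
    out, err := exec.Command("kubectl", "cluster-info").CombinedOutput()
    if err != nil {
        panic(err)
    }
    plain := ansi.ReplaceAllString(string(out), "")
    if !strings.Contains(plain, "is running at") {
        panic("cluster-info did not report a running control plane")
    }
    fmt.Print(plain)
}
------------------------------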
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":137,"skipped":2245,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:52:24.842: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 28 21:52:25.615: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 28 21:52:27.864: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299545, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299545, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299545, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299545, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 21:52:30.949: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:52:43.168: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3021" for 
this suite. STEP: Destroying namespace "webhook-3021-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.446 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":138,"skipped":2276,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:52:43.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 28 21:52:43.369: INFO: Waiting up to 5m0s for pod "downwardapi-volume-0f83b0df-1765-40a9-bcbe-94ca14192d18" in namespace "downward-api-3035" to be "success or failure" May 28 21:52:43.379: INFO: Pod "downwardapi-volume-0f83b0df-1765-40a9-bcbe-94ca14192d18": Phase="Pending", Reason="", readiness=false. Elapsed: 9.996274ms May 28 21:52:45.510: INFO: Pod "downwardapi-volume-0f83b0df-1765-40a9-bcbe-94ca14192d18": Phase="Pending", Reason="", readiness=false. Elapsed: 2.141226669s May 28 21:52:47.515: INFO: Pod "downwardapi-volume-0f83b0df-1765-40a9-bcbe-94ca14192d18": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.145497762s STEP: Saw pod success May 28 21:52:47.515: INFO: Pod "downwardapi-volume-0f83b0df-1765-40a9-bcbe-94ca14192d18" satisfied condition "success or failure" May 28 21:52:47.518: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-0f83b0df-1765-40a9-bcbe-94ca14192d18 container client-container: STEP: delete the pod May 28 21:52:47.578: INFO: Waiting for pod downwardapi-volume-0f83b0df-1765-40a9-bcbe-94ca14192d18 to disappear May 28 21:52:47.613: INFO: Pod downwardapi-volume-0f83b0df-1765-40a9-bcbe-94ca14192d18 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:52:47.613: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3035" for this suite. 
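------------------------------
The pod under test mounts a downwardAPI volume that projects the container's own requests.memory into a file; the container prints the file and the test compares it against the declared request. A sketch of the shape of such a pod using the corev1 types (the name, image, request size, and mount path here are illustrative, not the framework's exact values):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/apimachinery/pkg/api/resource"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{Name: "downwardapi-volume-example"},
        Spec: corev1.PodSpec{
            RestartPolicy: corev1.RestartPolicyNever,
            Containers: []corev1.Container{{
                Name:    "client-container",
                Image:   "busybox",
                Command: []string{"sh", "-c", "cat /etc/podinfo/memory_request"},
                Resources: corev1.ResourceRequirements{
                    Requests: corev1.ResourceList{corev1.ResourceMemory: resource.MustParse("32Mi")},
                },
                VolumeMounts: []corev1.VolumeMount{{Name: "podinfo", MountPath: "/etc/podinfo"}},
            }},
            Volumes: []corev1.Volume{{
                Name: "podinfo",
                VolumeSource: corev1.VolumeSource{
                    DownwardAPI: &corev1.DownwardAPIVolumeSource{
                        Items: []corev1.DownwardAPIVolumeFile{{
                            Path: "memory_request",
                            // Resolves to the 32Mi request declared on client-container.
                            ResourceFieldRef: &corev1.ResourceFieldSelector{
                                ContainerName: "client-container",
                                Resource:      "requests.memory",
                            },
                        }},
                    },
                },
            }},
        },
    }
    b, _ := json.MarshalIndent(pod, "", "  ")
    fmt.Println(string(b))
}
------------------------------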
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":139,"skipped":2301,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:52:47.620: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 28 21:52:51.708: INFO: &Pod{ObjectMeta:{send-events-70181708-1cc4-4ad5-8752-826d2bbe0eb6 events-3099 /api/v1/namespaces/events-3099/pods/send-events-70181708-1cc4-4ad5-8752-826d2bbe0eb6 5dd2d585-fd92-428b-a108-dee53490cf4e 19906922 0 2020-05-28 21:52:47 +0000 UTC map[name:foo time:686607061] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-b6ht9,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-b6ht9,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-b6ht9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,Tol
erationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 21:52:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 21:52:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 21:52:50 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 21:52:47 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.55,StartTime:2020-05-28 21:52:47 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-28 21:52:49 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://d1a9addb333af0a6da2e0f96b6a6bfa728cc79cfbb113ec8659aa6cacbf9070d,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.55,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 28 21:52:53.714: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 28 21:52:55.717: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:52:55.722: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-3099" for this suite. 
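------------------------------
Behind the "Saw scheduler event" and "Saw kubelet event" lines, the test lists events filtered by field selectors on the involved object and the reporting source. Roughly the query the framework issues, sketched with client-go and assuming a recent client-go where List takes a context (the pod name and namespace are the ones from the run above; swapping "default-scheduler" for "kubelet" finds the kubelet's events instead):

package main

import (
    "context"
    "fmt"

    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/apimachinery/pkg/fields"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    // Events emitted by the scheduler for the pod created above.
    sel := fields.Set{
        "involvedObject.kind": "Pod",
        "involvedObject.name": "send-events-70181708-1cc4-4ad5-8752-826d2bbe0eb6",
        "source":              "default-scheduler",
    }.AsSelector().String()
    evs, err := cs.CoreV1().Events("events-3099").List(context.TODO(), metav1.ListOptions{FieldSelector: sel})
    if err != nil {
        panic(err)
    }
    for _, e := range evs.Items {
        fmt.Printf("%s %s: %s\n", e.Source.Component, e.Reason, e.Message)
    }
}
------------------------------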
• [SLOW TEST:8.173 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":140,"skipped":2314,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:52:55.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-3576d9dc-875c-484a-9325-5cfb0c02e8c3 STEP: Creating a pod to test consume configMaps May 28 21:52:55.869: INFO: Waiting up to 5m0s for pod "pod-configmaps-d00a6ced-9833-494f-ada8-9f8f30cc1317" in namespace "configmap-979" to be "success or failure" May 28 21:52:55.885: INFO: Pod "pod-configmaps-d00a6ced-9833-494f-ada8-9f8f30cc1317": Phase="Pending", Reason="", readiness=false. Elapsed: 16.556174ms May 28 21:52:57.890: INFO: Pod "pod-configmaps-d00a6ced-9833-494f-ada8-9f8f30cc1317": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021452627s May 28 21:52:59.894: INFO: Pod "pod-configmaps-d00a6ced-9833-494f-ada8-9f8f30cc1317": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.025401197s STEP: Saw pod success May 28 21:52:59.894: INFO: Pod "pod-configmaps-d00a6ced-9833-494f-ada8-9f8f30cc1317" satisfied condition "success or failure" May 28 21:52:59.899: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-d00a6ced-9833-494f-ada8-9f8f30cc1317 container configmap-volume-test: STEP: delete the pod May 28 21:52:59.941: INFO: Waiting for pod pod-configmaps-d00a6ced-9833-494f-ada8-9f8f30cc1317 to disappear May 28 21:53:00.055: INFO: Pod pod-configmaps-d00a6ced-9833-494f-ada8-9f8f30cc1317 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:53:00.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-979" for this suite. 
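------------------------------
"As non-root" in this test means the pod runs under a non-zero UID via the pod-level security context while still being able to read the configmap volume the kubelet projects. The two relevant fragments of the pod spec, sketched with the corev1 types (the UID is illustrative; the configmap name is the one created above):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    uid := int64(1000) // any non-zero UID exercises the non-root path
    sc := &corev1.PodSecurityContext{RunAsUser: &uid}
    vol := corev1.Volume{
        Name: "configmap-volume",
        VolumeSource: corev1.VolumeSource{
            ConfigMap: &corev1.ConfigMapVolumeSource{
                LocalObjectReference: corev1.LocalObjectReference{Name: "configmap-test-volume-3576d9dc-875c-484a-9325-5cfb0c02e8c3"},
            },
        },
    }
    for _, v := range []interface{}{sc, vol} {
        b, _ := json.MarshalIndent(v, "", "  ")
        fmt.Println(string(b))
    }
}
------------------------------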
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":141,"skipped":2320,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:53:00.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-7aaa0de8-d7f3-403a-83d8-35fcce015f9c STEP: Creating a pod to test consume secrets May 28 21:53:00.312: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4b085b76-f107-4853-8881-c407273eaf23" in namespace "projected-1791" to be "success or failure" May 28 21:53:00.475: INFO: Pod "pod-projected-secrets-4b085b76-f107-4853-8881-c407273eaf23": Phase="Pending", Reason="", readiness=false. Elapsed: 162.85708ms May 28 21:53:02.529: INFO: Pod "pod-projected-secrets-4b085b76-f107-4853-8881-c407273eaf23": Phase="Pending", Reason="", readiness=false. Elapsed: 2.217450942s May 28 21:53:04.533: INFO: Pod "pod-projected-secrets-4b085b76-f107-4853-8881-c407273eaf23": Phase="Pending", Reason="", readiness=false. Elapsed: 4.221501989s May 28 21:53:06.538: INFO: Pod "pod-projected-secrets-4b085b76-f107-4853-8881-c407273eaf23": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.226407975s STEP: Saw pod success May 28 21:53:06.538: INFO: Pod "pod-projected-secrets-4b085b76-f107-4853-8881-c407273eaf23" satisfied condition "success or failure" May 28 21:53:06.542: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-4b085b76-f107-4853-8881-c407273eaf23 container projected-secret-volume-test: STEP: delete the pod May 28 21:53:06.567: INFO: Waiting for pod pod-projected-secrets-4b085b76-f107-4853-8881-c407273eaf23 to disappear May 28 21:53:06.630: INFO: Pod pod-projected-secrets-4b085b76-f107-4853-8881-c407273eaf23 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:53:06.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1791" for this suite. 
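------------------------------
A projected volume wraps one or more sources (secrets, configmaps, downward API, service account tokens) under a single mount, and defaultMode sets the permission bits applied to every projected file unless an individual item overrides them. The fragment under test, sketched with the corev1 types (0400 is the mode this test conventionally sets; treat the exact value as an assumption):

package main

import (
    "encoding/json"
    "fmt"

    corev1 "k8s.io/api/core/v1"
)

func main() {
    mode := int32(0400) // the defaultMode under test, applied to each projected file
    vol := corev1.Volume{
        Name: "projected-secret-volume",
        VolumeSource: corev1.VolumeSource{
            Projected: &corev1.ProjectedVolumeSource{
                DefaultMode: &mode,
                Sources: []corev1.VolumeProjection{{
                    Secret: &corev1.SecretProjection{
                        LocalObjectReference: corev1.LocalObjectReference{Name: "projected-secret-test-7aaa0de8-d7f3-403a-83d8-35fcce015f9c"},
                    },
                }},
            },
        },
    }
    b, _ := json.MarshalIndent(vol, "", "  ")
    fmt.Println(string(b))
}
------------------------------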
• [SLOW TEST:6.541 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":142,"skipped":2321,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:53:06.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating a pod May 28 21:53:06.726: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-8607 -- logs-generator --log-lines-total 100 --run-duration 20s' May 28 21:53:06.833: INFO: stderr: "" May 28 21:53:06.833: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 28 21:53:06.833: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 28 21:53:06.833: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8607" to be "running and ready, or succeeded" May 28 21:53:06.841: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 7.34658ms May 28 21:53:08.846: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012242697s May 28 21:53:10.849: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.015371526s May 28 21:53:10.849: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 28 21:53:10.849: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for matching strings May 28 21:53:10.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8607' May 28 21:53:10.969: INFO: stderr: "" May 28 21:53:10.969: INFO: stdout: "I0528 21:53:09.430225 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/mncr 499\nI0528 21:53:09.630372 1 logs_generator.go:76] 1 POST /api/v1/namespaces/kube-system/pods/5lr 431\nI0528 21:53:09.830409 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/9hzb 498\nI0528 21:53:10.030528 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/km2 419\nI0528 21:53:10.230423 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/hb6k 482\nI0528 21:53:10.430421 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/tt5v 432\nI0528 21:53:10.630435 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/jhfj 304\nI0528 21:53:10.830474 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/6kjw 492\n" STEP: limiting log lines May 28 21:53:10.969: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8607 --tail=1' May 28 21:53:11.084: INFO: stderr: "" May 28 21:53:11.084: INFO: stdout: "I0528 21:53:11.030363 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/z7pb 425\n" May 28 21:53:11.084: INFO: got output "I0528 21:53:11.030363 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/z7pb 425\n" STEP: limiting log bytes May 28 21:53:11.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8607 --limit-bytes=1' May 28 21:53:11.189: INFO: stderr: "" May 28 21:53:11.189: INFO: stdout: "I" May 28 21:53:11.189: INFO: got output "I" STEP: exposing timestamps May 28 21:53:11.189: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8607 --tail=1 --timestamps' May 28 21:53:11.294: INFO: stderr: "" May 28 21:53:11.294: INFO: stdout: "2020-05-28T21:53:11.230542071Z I0528 21:53:11.230392 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/ssw 311\n" May 28 21:53:11.294: INFO: got output "2020-05-28T21:53:11.230542071Z I0528 21:53:11.230392 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/ssw 311\n" STEP: restricting to a time range May 28 21:53:13.794: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8607 --since=1s' May 28 21:53:13.930: INFO: stderr: "" May 28 21:53:13.930: INFO: stdout: "I0528 21:53:13.030433 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/4ng 206\nI0528 21:53:13.230385 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/2zs 279\nI0528 21:53:13.430416 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/9x84 296\nI0528 21:53:13.630427 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/jbn 244\nI0528 21:53:13.830438 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/wj6 496\n" May 28 21:53:13.930: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8607 --since=24h' May 28 21:53:14.038: INFO: stderr: "" May 28 21:53:14.038: INFO: stdout: "I0528 21:53:09.430225 1 logs_generator.go:76] 0 GET /api/v1/namespaces/default/pods/mncr 499\nI0528 21:53:09.630372 1 logs_generator.go:76] 1 POST 
/api/v1/namespaces/kube-system/pods/5lr 431\nI0528 21:53:09.830409 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/ns/pods/9hzb 498\nI0528 21:53:10.030528 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/default/pods/km2 419\nI0528 21:53:10.230423 1 logs_generator.go:76] 4 GET /api/v1/namespaces/kube-system/pods/hb6k 482\nI0528 21:53:10.430421 1 logs_generator.go:76] 5 PUT /api/v1/namespaces/default/pods/tt5v 432\nI0528 21:53:10.630435 1 logs_generator.go:76] 6 GET /api/v1/namespaces/kube-system/pods/jhfj 304\nI0528 21:53:10.830474 1 logs_generator.go:76] 7 PUT /api/v1/namespaces/ns/pods/6kjw 492\nI0528 21:53:11.030363 1 logs_generator.go:76] 8 POST /api/v1/namespaces/default/pods/z7pb 425\nI0528 21:53:11.230392 1 logs_generator.go:76] 9 POST /api/v1/namespaces/ns/pods/ssw 311\nI0528 21:53:11.430400 1 logs_generator.go:76] 10 POST /api/v1/namespaces/ns/pods/l4c8 517\nI0528 21:53:11.630444 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/cqm8 251\nI0528 21:53:11.830410 1 logs_generator.go:76] 12 PUT /api/v1/namespaces/default/pods/n24x 429\nI0528 21:53:12.030427 1 logs_generator.go:76] 13 PUT /api/v1/namespaces/default/pods/sk78 275\nI0528 21:53:12.230436 1 logs_generator.go:76] 14 PUT /api/v1/namespaces/kube-system/pods/jqj7 532\nI0528 21:53:12.430383 1 logs_generator.go:76] 15 PUT /api/v1/namespaces/kube-system/pods/zs8 570\nI0528 21:53:12.630395 1 logs_generator.go:76] 16 GET /api/v1/namespaces/ns/pods/4dp 486\nI0528 21:53:12.830432 1 logs_generator.go:76] 17 GET /api/v1/namespaces/default/pods/9kp 236\nI0528 21:53:13.030433 1 logs_generator.go:76] 18 GET /api/v1/namespaces/default/pods/4ng 206\nI0528 21:53:13.230385 1 logs_generator.go:76] 19 PUT /api/v1/namespaces/kube-system/pods/2zs 279\nI0528 21:53:13.430416 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/9x84 296\nI0528 21:53:13.630427 1 logs_generator.go:76] 21 GET /api/v1/namespaces/ns/pods/jbn 244\nI0528 21:53:13.830438 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/kube-system/pods/wj6 496\nI0528 21:53:14.030460 1 logs_generator.go:76] 23 POST /api/v1/namespaces/ns/pods/96z7 461\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 28 21:53:14.038: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8607' May 28 21:53:19.244: INFO: stderr: "" May 28 21:53:19.244: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:53:19.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8607" for this suite. 
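------------------------------
Each flag exercised above maps onto a field of the pod log subresource's PodLogOptions: --tail becomes TailLines, --limit-bytes becomes LimitBytes, --timestamps becomes Timestamps, and --since becomes SinceSeconds; kubectl's flags are thin wrappers over this API. A sketch of the --tail=1 --timestamps --since=1s variant through client-go, assuming a recent client-go where Stream takes a context (this is not the framework's test code):

package main

import (
    "context"
    "io"
    "os"

    corev1 "k8s.io/api/core/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
    if err != nil {
        panic(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }
    tail, since := int64(1), int64(1)
    opts := &corev1.PodLogOptions{
        TailLines:    &tail,  // --tail=1
        Timestamps:   true,   // --timestamps
        SinceSeconds: &since, // --since=1s; LimitBytes would mirror --limit-bytes
    }
    rc, err := cs.CoreV1().Pods("kubectl-8607").GetLogs("logs-generator", opts).Stream(context.TODO())
    if err != nil {
        panic(err)
    }
    defer rc.Close()
    io.Copy(os.Stdout, rc)
}
------------------------------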
• [SLOW TEST:12.638 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":143,"skipped":2324,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:53:19.277: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-4116 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule the stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-4116 STEP: Creating statefulset with conflicting port in namespace statefulset-4116 STEP: Waiting until pod test-pod starts running in namespace statefulset-4116 STEP: Waiting until stateful pod ss-0 is recreated and deleted at least once in namespace statefulset-4116 May 28 21:53:23.436: INFO: Observed stateful pod in namespace: statefulset-4116, name: ss-0, uid: 140a1eba-8e54-4478-a9f3-c8bd72d20050, status phase: Pending. Waiting for statefulset controller to delete. May 28 21:53:24.007: INFO: Observed stateful pod in namespace: statefulset-4116, name: ss-0, uid: 140a1eba-8e54-4478-a9f3-c8bd72d20050, status phase: Failed. Waiting for statefulset controller to delete. May 28 21:53:24.035: INFO: Observed stateful pod in namespace: statefulset-4116, name: ss-0, uid: 140a1eba-8e54-4478-a9f3-c8bd72d20050, status phase: Failed. Waiting for statefulset controller to delete.
May 28 21:53:24.046: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-4116 STEP: Removing pod with conflicting port in namespace statefulset-4116 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-4116 and is in the running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 28 21:53:30.219: INFO: Deleting all statefulsets in ns statefulset-4116 May 28 21:53:30.222: INFO: Scaling statefulset ss to 0 May 28 21:53:40.253: INFO: Waiting for statefulset status.replicas updated to 0 May 28 21:53:40.256: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:53:40.272: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-4116" for this suite. • [SLOW TEST:21.002 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":144,"skipped":2336,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:53:40.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 28 21:53:40.362: INFO: Waiting up to 5m0s for pod "downward-api-3cd11817-aef8-4c85-ba78-4528dd114bbb" in namespace "downward-api-5632" to be "success or failure" May 28 21:53:40.384: INFO: Pod "downward-api-3cd11817-aef8-4c85-ba78-4528dd114bbb": Phase="Pending", Reason="", readiness=false. Elapsed: 21.904321ms May 28 21:53:42.637: INFO: Pod "downward-api-3cd11817-aef8-4c85-ba78-4528dd114bbb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.275543745s May 28 21:53:44.642: INFO: Pod "downward-api-3cd11817-aef8-4c85-ba78-4528dd114bbb": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.279990496s STEP: Saw pod success May 28 21:53:44.642: INFO: Pod "downward-api-3cd11817-aef8-4c85-ba78-4528dd114bbb" satisfied condition "success or failure" May 28 21:53:44.644: INFO: Trying to get logs from node jerma-worker2 pod downward-api-3cd11817-aef8-4c85-ba78-4528dd114bbb container dapi-container: STEP: delete the pod May 28 21:53:44.660: INFO: Waiting for pod downward-api-3cd11817-aef8-4c85-ba78-4528dd114bbb to disappear May 28 21:53:44.665: INFO: Pod downward-api-3cd11817-aef8-4c85-ba78-4528dd114bbb no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:53:44.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5632" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":145,"skipped":2369,"failed":0} SSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:53:44.672: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 28 21:53:44.752: INFO: Waiting up to 5m0s for pod "pod-615f5c6d-fab1-4ae2-94fd-7c6e684acd4b" in namespace "emptydir-8320" to be "success or failure" May 28 21:53:44.768: INFO: Pod "pod-615f5c6d-fab1-4ae2-94fd-7c6e684acd4b": Phase="Pending", Reason="", readiness=false. Elapsed: 16.616939ms May 28 21:53:46.780: INFO: Pod "pod-615f5c6d-fab1-4ae2-94fd-7c6e684acd4b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.027951223s May 28 21:53:48.784: INFO: Pod "pod-615f5c6d-fab1-4ae2-94fd-7c6e684acd4b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.032749971s STEP: Saw pod success May 28 21:53:48.784: INFO: Pod "pod-615f5c6d-fab1-4ae2-94fd-7c6e684acd4b" satisfied condition "success or failure" May 28 21:53:48.788: INFO: Trying to get logs from node jerma-worker2 pod pod-615f5c6d-fab1-4ae2-94fd-7c6e684acd4b container test-container: STEP: delete the pod May 28 21:53:48.824: INFO: Waiting for pod pod-615f5c6d-fab1-4ae2-94fd-7c6e684acd4b to disappear May 28 21:53:48.839: INFO: Pod pod-615f5c6d-fab1-4ae2-94fd-7c6e684acd4b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:53:48.839: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-8320" for this suite. 
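The Downward API and EmptyDir cases above share one pattern: create a short-lived pod whose container exercises the feature and exits, wait for phase Succeeded ("Saw pod success"), read the container log, then delete the pod. A minimal hand-written sketch of the emptyDir variant, with hypothetical pod and image names (the e2e fixture itself uses a generated pod name and its own test image):

$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-demo              # hypothetical name
spec:
  restartPolicy: Never             # lets the pod reach phase Succeeded
  securityContext:
    runAsUser: 1000                # the (non-root,...) variant runs without root
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "touch /mnt/volume/f && chmod 0644 /mnt/volume/f && ls -ln /mnt/volume"]
    volumeMounts:
    - name: scratch
      mountPath: /mnt/volume
  volumes:
  - name: scratch
    emptyDir: {}                   # "default" medium, i.e. node-local disk
EOF
$ kubectl get pod emptydir-demo -o jsonpath='{.status.phase}'   # expect Succeeded
$ kubectl logs emptydir-demo                                    # listing should show the 0644 file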
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2376,"failed":0} SSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:53:48.847: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-4345 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-4345 STEP: creating replication controller externalsvc in namespace services-4345 I0528 21:53:49.114654 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-4345, replica count: 2 I0528 21:53:52.165261 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0528 21:53:55.165455 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 28 21:53:55.244: INFO: Creating new exec pod May 28 21:53:59.278: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-4345 execpodfmwhs -- /bin/sh -x -c nslookup clusterip-service' May 28 21:53:59.604: INFO: stderr: "I0528 21:53:59.403787 2433 log.go:172] (0xc0000f5550) (0xc0004c1c20) Create stream\nI0528 21:53:59.403839 2433 log.go:172] (0xc0000f5550) (0xc0004c1c20) Stream added, broadcasting: 1\nI0528 21:53:59.406019 2433 log.go:172] (0xc0000f5550) Reply frame received for 1\nI0528 21:53:59.406068 2433 log.go:172] (0xc0000f5550) (0xc0004da000) Create stream\nI0528 21:53:59.406092 2433 log.go:172] (0xc0000f5550) (0xc0004da000) Stream added, broadcasting: 3\nI0528 21:53:59.407095 2433 log.go:172] (0xc0000f5550) Reply frame received for 3\nI0528 21:53:59.407121 2433 log.go:172] (0xc0000f5550) (0xc0004c1d60) Create stream\nI0528 21:53:59.407130 2433 log.go:172] (0xc0000f5550) (0xc0004c1d60) Stream added, broadcasting: 5\nI0528 21:53:59.408172 2433 log.go:172] (0xc0000f5550) Reply frame received for 5\nI0528 21:53:59.503380 2433 log.go:172] (0xc0000f5550) Data frame received for 5\nI0528 21:53:59.503411 2433 log.go:172] (0xc0004c1d60) (5) Data frame handling\nI0528 21:53:59.503476 2433 log.go:172] (0xc0004c1d60) (5) Data frame sent\n+ nslookup clusterip-service\nI0528 21:53:59.593006 2433 log.go:172] (0xc0000f5550) Data frame received for 3\nI0528 21:53:59.593041 2433 log.go:172] (0xc0004da000) (3) Data frame handling\nI0528 21:53:59.593062 2433 log.go:172] (0xc0004da000) (3) Data frame sent\nI0528 
21:53:59.594258 2433 log.go:172] (0xc0000f5550) Data frame received for 3\nI0528 21:53:59.594301 2433 log.go:172] (0xc0004da000) (3) Data frame handling\nI0528 21:53:59.594334 2433 log.go:172] (0xc0004da000) (3) Data frame sent\nI0528 21:53:59.594936 2433 log.go:172] (0xc0000f5550) Data frame received for 5\nI0528 21:53:59.594994 2433 log.go:172] (0xc0004c1d60) (5) Data frame handling\nI0528 21:53:59.595040 2433 log.go:172] (0xc0000f5550) Data frame received for 3\nI0528 21:53:59.595062 2433 log.go:172] (0xc0004da000) (3) Data frame handling\nI0528 21:53:59.597359 2433 log.go:172] (0xc0000f5550) Data frame received for 1\nI0528 21:53:59.597395 2433 log.go:172] (0xc0004c1c20) (1) Data frame handling\nI0528 21:53:59.597415 2433 log.go:172] (0xc0004c1c20) (1) Data frame sent\nI0528 21:53:59.597435 2433 log.go:172] (0xc0000f5550) (0xc0004c1c20) Stream removed, broadcasting: 1\nI0528 21:53:59.597464 2433 log.go:172] (0xc0000f5550) Go away received\nI0528 21:53:59.597944 2433 log.go:172] (0xc0000f5550) (0xc0004c1c20) Stream removed, broadcasting: 1\nI0528 21:53:59.597984 2433 log.go:172] (0xc0000f5550) (0xc0004da000) Stream removed, broadcasting: 3\nI0528 21:53:59.598010 2433 log.go:172] (0xc0000f5550) (0xc0004c1d60) Stream removed, broadcasting: 5\n" May 28 21:53:59.604: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-4345.svc.cluster.local\tcanonical name = externalsvc.services-4345.svc.cluster.local.\nName:\texternalsvc.services-4345.svc.cluster.local\nAddress: 10.108.229.160\n\n" STEP: deleting ReplicationController externalsvc in namespace services-4345, will wait for the garbage collector to delete the pods May 28 21:53:59.665: INFO: Deleting ReplicationController externalsvc took: 6.966229ms May 28 21:53:59.765: INFO: Terminating ReplicationController externalsvc pods took: 100.273637ms May 28 21:54:04.116: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:54:04.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-4345" for this suite. 
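What this test asserts: once the service's spec.type is flipped to ExternalName, cluster DNS stops answering for clusterip-service with its own ClusterIP and instead returns a CNAME to the external name, which is exactly what the nslookup output above shows. A rough manual equivalent of the type change, sketched as a patch (an assumption about the mechanics; the e2e code updates the service object directly through the API, and clusterIP must be cleared when moving to ExternalName):

# Repoint the service at another service's in-cluster FQDN
$ kubectl patch service clusterip-service --namespace=services-4345 --type=merge \
    -p '{"spec":{"type":"ExternalName","externalName":"externalsvc.services-4345.svc.cluster.local","clusterIP":""}}'
# DNS for the service name should now resolve via a CNAME
$ kubectl exec --namespace=services-4345 execpodfmwhs -- nslookup clusterip-service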
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:15.296 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":147,"skipped":2384,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:54:04.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions May 28 21:54:04.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 28 21:54:04.548: INFO: stderr: "" May 28 21:54:04.548: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:54:04.548: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-101" for this suite. 
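The api-versions check boils down to one line: ask the apiserver which group/version strings it serves and assert that the legacy core group, advertised as the bare string v1, is among them. For example (grep -x requires the whole line to match, so entries like batch/v1 or the v1beta1 variants don't count):

$ kubectl api-versions | grep -x v1 && echo "core v1 is available"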
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":148,"skipped":2387,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:54:04.556: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-bfkj STEP: Creating a pod to test atomic-volume-subpath May 28 21:54:04.694: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-bfkj" in namespace "subpath-125" to be "success or failure" May 28 21:54:04.697: INFO: Pod "pod-subpath-test-configmap-bfkj": Phase="Pending", Reason="", readiness=false. Elapsed: 3.392505ms May 28 21:54:06.701: INFO: Pod "pod-subpath-test-configmap-bfkj": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007490148s May 28 21:54:08.705: INFO: Pod "pod-subpath-test-configmap-bfkj": Phase="Running", Reason="", readiness=true. Elapsed: 4.011584661s May 28 21:54:10.710: INFO: Pod "pod-subpath-test-configmap-bfkj": Phase="Running", Reason="", readiness=true. Elapsed: 6.016330414s May 28 21:54:12.714: INFO: Pod "pod-subpath-test-configmap-bfkj": Phase="Running", Reason="", readiness=true. Elapsed: 8.020760126s May 28 21:54:14.718: INFO: Pod "pod-subpath-test-configmap-bfkj": Phase="Running", Reason="", readiness=true. Elapsed: 10.02484085s May 28 21:54:16.723: INFO: Pod "pod-subpath-test-configmap-bfkj": Phase="Running", Reason="", readiness=true. Elapsed: 12.029357138s May 28 21:54:18.727: INFO: Pod "pod-subpath-test-configmap-bfkj": Phase="Running", Reason="", readiness=true. Elapsed: 14.032993777s May 28 21:54:20.732: INFO: Pod "pod-subpath-test-configmap-bfkj": Phase="Running", Reason="", readiness=true. Elapsed: 16.038141114s May 28 21:54:22.740: INFO: Pod "pod-subpath-test-configmap-bfkj": Phase="Running", Reason="", readiness=true. Elapsed: 18.046153222s May 28 21:54:24.744: INFO: Pod "pod-subpath-test-configmap-bfkj": Phase="Running", Reason="", readiness=true. Elapsed: 20.050794247s May 28 21:54:26.749: INFO: Pod "pod-subpath-test-configmap-bfkj": Phase="Running", Reason="", readiness=true. Elapsed: 22.055227604s May 28 21:54:28.754: INFO: Pod "pod-subpath-test-configmap-bfkj": Phase="Running", Reason="", readiness=true. Elapsed: 24.059930125s May 28 21:54:30.758: INFO: Pod "pod-subpath-test-configmap-bfkj": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.064589966s STEP: Saw pod success May 28 21:54:30.758: INFO: Pod "pod-subpath-test-configmap-bfkj" satisfied condition "success or failure" May 28 21:54:30.761: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-bfkj container test-container-subpath-configmap-bfkj: STEP: delete the pod May 28 21:54:30.789: INFO: Waiting for pod pod-subpath-test-configmap-bfkj to disappear May 28 21:54:30.791: INFO: Pod pod-subpath-test-configmap-bfkj no longer exists STEP: Deleting pod pod-subpath-test-configmap-bfkj May 28 21:54:30.791: INFO: Deleting pod "pod-subpath-test-configmap-bfkj" in namespace "subpath-125" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:54:30.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-125" for this suite. • [SLOW TEST:26.244 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":149,"skipped":2394,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:54:30.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 28 21:54:31.401: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 28 21:54:33.411: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299671, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299671, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299671, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299671, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", 
Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 21:54:36.470: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:54:36.907: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3186" for this suite. STEP: Destroying namespace "webhook-3186-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.239 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":150,"skipped":2399,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:54:37.039: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 28 21:54:38.555: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 28 21:54:40.566: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299678, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299678, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299678, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726299678, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 21:54:43.619: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:54:43.747: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7443" for this suite. STEP: Destroying namespace "webhook-7443-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.826 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":151,"skipped":2399,"failed":0} [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:54:43.865: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-8b7dd725-37a6-4366-9da2-32dcef3dbfb8 STEP: Creating a pod to test consume secrets May 28 21:54:43.964: INFO: Waiting up to 5m0s for pod "pod-secrets-b5878c1e-86a9-4a82-8986-4e5bb914ed8f" in namespace "secrets-6089" to be "success or failure" May 28 21:54:43.991: INFO: Pod "pod-secrets-b5878c1e-86a9-4a82-8986-4e5bb914ed8f": Phase="Pending", Reason="", readiness=false. 
Elapsed: 26.162335ms May 28 21:54:46.031: INFO: Pod "pod-secrets-b5878c1e-86a9-4a82-8986-4e5bb914ed8f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.066754651s May 28 21:54:48.035: INFO: Pod "pod-secrets-b5878c1e-86a9-4a82-8986-4e5bb914ed8f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.070625267s STEP: Saw pod success May 28 21:54:48.035: INFO: Pod "pod-secrets-b5878c1e-86a9-4a82-8986-4e5bb914ed8f" satisfied condition "success or failure" May 28 21:54:48.038: INFO: Trying to get logs from node jerma-worker pod pod-secrets-b5878c1e-86a9-4a82-8986-4e5bb914ed8f container secret-volume-test: STEP: delete the pod May 28 21:54:48.078: INFO: Waiting for pod pod-secrets-b5878c1e-86a9-4a82-8986-4e5bb914ed8f to disappear May 28 21:54:48.356: INFO: Pod pod-secrets-b5878c1e-86a9-4a82-8986-4e5bb914ed8f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:54:48.357: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6089" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":152,"skipped":2399,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:54:48.384: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:54:48.506: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. 
May 28 21:54:48.554: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:54:48.577: INFO: Number of nodes with available pods: 0 May 28 21:54:48.577: INFO: Node jerma-worker is running more than one daemon pod May 28 21:54:49.583: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:54:49.588: INFO: Number of nodes with available pods: 0 May 28 21:54:49.588: INFO: Node jerma-worker is running more than one daemon pod May 28 21:54:50.687: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:54:51.279: INFO: Number of nodes with available pods: 0 May 28 21:54:51.279: INFO: Node jerma-worker is running more than one daemon pod May 28 21:54:51.582: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:54:51.586: INFO: Number of nodes with available pods: 0 May 28 21:54:51.586: INFO: Node jerma-worker is running more than one daemon pod May 28 21:54:52.583: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:54:52.586: INFO: Number of nodes with available pods: 0 May 28 21:54:52.586: INFO: Node jerma-worker is running more than one daemon pod May 28 21:54:53.581: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:54:53.584: INFO: Number of nodes with available pods: 2 May 28 21:54:53.584: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 28 21:54:53.631: INFO: Wrong image for pod: daemon-set-2vw77. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 28 21:54:53.632: INFO: Wrong image for pod: daemon-set-g5fbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 28 21:54:53.668: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:54:54.673: INFO: Wrong image for pod: daemon-set-2vw77. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 28 21:54:54.673: INFO: Wrong image for pod: daemon-set-g5fbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 28 21:54:54.678: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:54:55.672: INFO: Wrong image for pod: daemon-set-2vw77. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 28 21:54:55.672: INFO: Wrong image for pod: daemon-set-g5fbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 28 21:54:55.676: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:54:56.673: INFO: Wrong image for pod: daemon-set-2vw77. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 28 21:54:56.673: INFO: Wrong image for pod: daemon-set-g5fbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 28 21:54:56.677: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:54:57.673: INFO: Wrong image for pod: daemon-set-2vw77. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 28 21:54:57.673: INFO: Pod daemon-set-2vw77 is not available May 28 21:54:57.673: INFO: Wrong image for pod: daemon-set-g5fbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 28 21:54:57.676: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:54:58.673: INFO: Wrong image for pod: daemon-set-2vw77. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 28 21:54:58.673: INFO: Pod daemon-set-2vw77 is not available May 28 21:54:58.673: INFO: Wrong image for pod: daemon-set-g5fbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 28 21:54:58.677: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:54:59.673: INFO: Wrong image for pod: daemon-set-g5fbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 28 21:54:59.673: INFO: Pod daemon-set-jg4c8 is not available May 28 21:54:59.677: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:55:00.673: INFO: Wrong image for pod: daemon-set-g5fbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 28 21:55:00.673: INFO: Pod daemon-set-jg4c8 is not available May 28 21:55:00.677: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:55:01.676: INFO: Wrong image for pod: daemon-set-g5fbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 28 21:55:01.676: INFO: Pod daemon-set-jg4c8 is not available May 28 21:55:01.681: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:55:02.673: INFO: Wrong image for pod: daemon-set-g5fbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. 
May 28 21:55:02.677: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:55:03.692: INFO: Wrong image for pod: daemon-set-g5fbm. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 28 21:55:03.692: INFO: Pod daemon-set-g5fbm is not available May 28 21:55:03.696: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:55:04.673: INFO: Pod daemon-set-r9kwv is not available May 28 21:55:04.678: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. May 28 21:55:04.682: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:55:04.704: INFO: Number of nodes with available pods: 1 May 28 21:55:04.704: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:55:05.713: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:55:05.836: INFO: Number of nodes with available pods: 1 May 28 21:55:05.836: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:55:06.709: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:55:06.712: INFO: Number of nodes with available pods: 1 May 28 21:55:06.712: INFO: Node jerma-worker2 is running more than one daemon pod May 28 21:55:07.710: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:55:07.713: INFO: Number of nodes with available pods: 2 May 28 21:55:07.714: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3599, will wait for the garbage collector to delete the pods May 28 21:55:07.800: INFO: Deleting DaemonSet.extensions daemon-set took: 6.124657ms May 28 21:55:08.101: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.56091ms May 28 21:55:19.548: INFO: Number of nodes with available pods: 0 May 28 21:55:19.548: INFO: Number of running nodes: 0, number of available pods: 0 May 28 21:55:19.551: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3599/daemonsets","resourceVersion":"19908031"},"items":null} May 28 21:55:19.554: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3599/pods","resourceVersion":"19908031"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:55:19.562: INFO: Waiting up to 3m0s for all (but 0) nodes to 
be ready STEP: Destroying namespace "daemonsets-3599" for this suite. • [SLOW TEST:31.183 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":153,"skipped":2403,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:55:19.568: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 28 21:55:19.688: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9824' May 28 21:55:19.794: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 28 21:55:19.794: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 May 28 21:55:21.822: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-9824' May 28 21:55:22.076: INFO: stderr: "" May 28 21:55:22.076: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:55:22.076: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9824" for this suite. 
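As the stderr above notes, kubectl run --generator=deployment/apps.v1 was already deprecated in this release in favor of kubectl create. The same create/verify/delete sequence written out explicitly, using the image and namespace from the run above (the run=<name> selector is the label kubectl run attached to deployments it generated; treat it as an assumption when adapting this):

# Deprecated form exercised by the test (still creates a Deployment here)
$ kubectl run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9824
# Recommended replacement for creating the same Deployment
$ kubectl create deployment e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-9824
# Verify a controlled pod came up, then clean up
$ kubectl get pods --namespace=kubectl-9824 -l run=e2e-test-httpd-deployment
$ kubectl delete deployment e2e-test-httpd-deployment --namespace=kubectl-9824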
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":154,"skipped":2405,"failed":0} SS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:55:22.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:55:22.423: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-3079 I0528 21:55:22.435725 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-3079, replica count: 1 I0528 21:55:23.486142 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0528 21:55:24.486354 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0528 21:55:25.486595 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0528 21:55:26.486819 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 28 21:55:26.621: INFO: Created: latency-svc-v2d6s May 28 21:55:26.640: INFO: Got endpoints: latency-svc-v2d6s [53.557256ms] May 28 21:55:26.688: INFO: Created: latency-svc-w45np May 28 21:55:26.730: INFO: Got endpoints: latency-svc-w45np [89.596478ms] May 28 21:55:26.767: INFO: Created: latency-svc-k6mv4 May 28 21:55:26.836: INFO: Got endpoints: latency-svc-k6mv4 [195.32754ms] May 28 21:55:26.837: INFO: Created: latency-svc-rdvgh May 28 21:55:26.839: INFO: Got endpoints: latency-svc-rdvgh [198.971079ms] May 28 21:55:26.875: INFO: Created: latency-svc-n76n5 May 28 21:55:26.894: INFO: Got endpoints: latency-svc-n76n5 [253.351292ms] May 28 21:55:26.915: INFO: Created: latency-svc-99g2j May 28 21:55:26.980: INFO: Got endpoints: latency-svc-99g2j [339.222261ms] May 28 21:55:27.024: INFO: Created: latency-svc-lj4lm May 28 21:55:27.060: INFO: Got endpoints: latency-svc-lj4lm [419.755333ms] May 28 21:55:27.129: INFO: Created: latency-svc-dhwd8 May 28 21:55:27.134: INFO: Got endpoints: latency-svc-dhwd8 [493.063784ms] May 28 21:55:27.221: INFO: Created: latency-svc-j48qg May 28 21:55:27.273: INFO: Got endpoints: latency-svc-j48qg [632.806616ms] May 28 21:55:27.368: INFO: Created: latency-svc-kt96l May 28 21:55:27.417: INFO: Got endpoints: latency-svc-kt96l [776.261173ms] May 28 21:55:27.467: INFO: Created: latency-svc-wrz4x May 28 21:55:27.475: INFO: Got endpoints: latency-svc-wrz4x [834.765963ms] May 28 21:55:27.503: INFO: Created: latency-svc-xmcnr May 28 21:55:27.511: INFO: Got endpoints: latency-svc-xmcnr [871.004115ms] May 28 21:55:27.576: INFO: Created: latency-svc-tdltr May 28 21:55:27.590: INFO: Got endpoints: latency-svc-tdltr 
[949.775205ms] May 28 21:55:27.619: INFO: Created: latency-svc-lp2pl May 28 21:55:27.633: INFO: Got endpoints: latency-svc-lp2pl [992.544538ms] May 28 21:55:27.660: INFO: Created: latency-svc-9ttj6 May 28 21:55:27.724: INFO: Got endpoints: latency-svc-9ttj6 [1.083352198s] May 28 21:55:27.755: INFO: Created: latency-svc-hd2wn May 28 21:55:27.772: INFO: Got endpoints: latency-svc-hd2wn [1.131020982s] May 28 21:55:27.797: INFO: Created: latency-svc-7j75h May 28 21:55:27.860: INFO: Got endpoints: latency-svc-7j75h [1.130211147s] May 28 21:55:27.895: INFO: Created: latency-svc-lf9hh May 28 21:55:27.916: INFO: Got endpoints: latency-svc-lf9hh [1.080757151s] May 28 21:55:27.960: INFO: Created: latency-svc-w7ltx May 28 21:55:28.040: INFO: Got endpoints: latency-svc-w7ltx [1.200271283s] May 28 21:55:28.097: INFO: Created: latency-svc-dckq5 May 28 21:55:28.109: INFO: Got endpoints: latency-svc-dckq5 [1.215274591s] May 28 21:55:28.183: INFO: Created: latency-svc-r4w2d May 28 21:55:28.193: INFO: Got endpoints: latency-svc-r4w2d [1.213609977s] May 28 21:55:28.230: INFO: Created: latency-svc-4mw9v May 28 21:55:28.327: INFO: Got endpoints: latency-svc-4mw9v [1.267077681s] May 28 21:55:28.379: INFO: Created: latency-svc-vr5mm May 28 21:55:28.398: INFO: Got endpoints: latency-svc-vr5mm [1.264709685s] May 28 21:55:28.471: INFO: Created: latency-svc-lhzcv May 28 21:55:28.476: INFO: Got endpoints: latency-svc-lhzcv [1.202478136s] May 28 21:55:28.511: INFO: Created: latency-svc-qvhsv May 28 21:55:28.553: INFO: Got endpoints: latency-svc-qvhsv [1.135954422s] May 28 21:55:28.615: INFO: Created: latency-svc-k7xsp May 28 21:55:28.621: INFO: Got endpoints: latency-svc-k7xsp [1.145900817s] May 28 21:55:28.649: INFO: Created: latency-svc-hd44x May 28 21:55:28.663: INFO: Got endpoints: latency-svc-hd44x [1.151448398s] May 28 21:55:28.697: INFO: Created: latency-svc-twh5t May 28 21:55:28.752: INFO: Got endpoints: latency-svc-twh5t [1.161960939s] May 28 21:55:28.805: INFO: Created: latency-svc-t7gjf May 28 21:55:28.820: INFO: Got endpoints: latency-svc-t7gjf [1.186887724s] May 28 21:55:28.841: INFO: Created: latency-svc-gxw2k May 28 21:55:28.902: INFO: Got endpoints: latency-svc-gxw2k [1.177827323s] May 28 21:55:28.906: INFO: Created: latency-svc-kbw8g May 28 21:55:28.923: INFO: Got endpoints: latency-svc-kbw8g [1.151316826s] May 28 21:55:28.949: INFO: Created: latency-svc-ht599 May 28 21:55:28.959: INFO: Got endpoints: latency-svc-ht599 [1.098690315s] May 28 21:55:28.992: INFO: Created: latency-svc-skjq6 May 28 21:55:29.057: INFO: Got endpoints: latency-svc-skjq6 [1.140943748s] May 28 21:55:29.062: INFO: Created: latency-svc-624ph May 28 21:55:29.080: INFO: Got endpoints: latency-svc-624ph [1.040445894s] May 28 21:55:29.099: INFO: Created: latency-svc-cgj55 May 28 21:55:29.116: INFO: Got endpoints: latency-svc-cgj55 [1.007447258s] May 28 21:55:29.140: INFO: Created: latency-svc-smjbq May 28 21:55:29.219: INFO: Got endpoints: latency-svc-smjbq [1.02563219s] May 28 21:55:29.230: INFO: Created: latency-svc-2lmdx May 28 21:55:29.255: INFO: Got endpoints: latency-svc-2lmdx [927.801706ms] May 28 21:55:29.285: INFO: Created: latency-svc-jzbgh May 28 21:55:29.351: INFO: Got endpoints: latency-svc-jzbgh [952.454936ms] May 28 21:55:29.374: INFO: Created: latency-svc-l5lj8 May 28 21:55:29.395: INFO: Got endpoints: latency-svc-l5lj8 [918.613814ms] May 28 21:55:29.416: INFO: Created: latency-svc-d4mw5 May 28 21:55:29.440: INFO: Got endpoints: latency-svc-d4mw5 [887.115864ms] May 28 21:55:29.501: INFO: Created: latency-svc-tr8b9 May 
28 21:55:29.508: INFO: Got endpoints: latency-svc-tr8b9 [887.288068ms] May 28 21:55:29.543: INFO: Created: latency-svc-627bl May 28 21:55:29.557: INFO: Got endpoints: latency-svc-627bl [894.039412ms] May 28 21:55:29.657: INFO: Created: latency-svc-dtc2x May 28 21:55:29.666: INFO: Got endpoints: latency-svc-dtc2x [913.589622ms] May 28 21:55:29.716: INFO: Created: latency-svc-jklwf May 28 21:55:29.732: INFO: Got endpoints: latency-svc-jklwf [911.444612ms] May 28 21:55:29.807: INFO: Created: latency-svc-z7w27 May 28 21:55:29.836: INFO: Got endpoints: latency-svc-z7w27 [934.534519ms] May 28 21:55:29.837: INFO: Created: latency-svc-vfxkh May 28 21:55:29.866: INFO: Got endpoints: latency-svc-vfxkh [943.480601ms] May 28 21:55:29.904: INFO: Created: latency-svc-wmvsg May 28 21:55:29.950: INFO: Got endpoints: latency-svc-wmvsg [990.615859ms] May 28 21:55:29.992: INFO: Created: latency-svc-wrk7t May 28 21:55:30.010: INFO: Got endpoints: latency-svc-wrk7t [952.60264ms] May 28 21:55:30.047: INFO: Created: latency-svc-2c7hc May 28 21:55:30.094: INFO: Got endpoints: latency-svc-2c7hc [1.013795207s] May 28 21:55:30.113: INFO: Created: latency-svc-ldwfp May 28 21:55:30.127: INFO: Got endpoints: latency-svc-ldwfp [1.011013442s] May 28 21:55:30.154: INFO: Created: latency-svc-mfbww May 28 21:55:30.163: INFO: Got endpoints: latency-svc-mfbww [944.41717ms] May 28 21:55:30.249: INFO: Created: latency-svc-rcp2d May 28 21:55:30.252: INFO: Got endpoints: latency-svc-rcp2d [996.617326ms] May 28 21:55:30.292: INFO: Created: latency-svc-ph5g5 May 28 21:55:30.309: INFO: Got endpoints: latency-svc-ph5g5 [958.125208ms] May 28 21:55:30.389: INFO: Created: latency-svc-5kgjk May 28 21:55:30.395: INFO: Got endpoints: latency-svc-5kgjk [1.000456306s] May 28 21:55:30.454: INFO: Created: latency-svc-gtcsp May 28 21:55:30.466: INFO: Got endpoints: latency-svc-gtcsp [1.025885539s] May 28 21:55:30.531: INFO: Created: latency-svc-kpjwv May 28 21:55:30.534: INFO: Got endpoints: latency-svc-kpjwv [1.025643181s] May 28 21:55:30.586: INFO: Created: latency-svc-fx4db May 28 21:55:30.605: INFO: Got endpoints: latency-svc-fx4db [1.047764413s] May 28 21:55:30.628: INFO: Created: latency-svc-xvzlq May 28 21:55:30.680: INFO: Got endpoints: latency-svc-xvzlq [1.014312176s] May 28 21:55:30.694: INFO: Created: latency-svc-n2pn7 May 28 21:55:30.708: INFO: Got endpoints: latency-svc-n2pn7 [976.158572ms] May 28 21:55:30.729: INFO: Created: latency-svc-b7nl5 May 28 21:55:30.762: INFO: Got endpoints: latency-svc-b7nl5 [925.189001ms] May 28 21:55:30.836: INFO: Created: latency-svc-2m9tj May 28 21:55:30.840: INFO: Got endpoints: latency-svc-2m9tj [973.411101ms] May 28 21:55:30.861: INFO: Created: latency-svc-htjnq May 28 21:55:30.876: INFO: Got endpoints: latency-svc-htjnq [926.77363ms] May 28 21:55:30.904: INFO: Created: latency-svc-ppvxh May 28 21:55:30.928: INFO: Got endpoints: latency-svc-ppvxh [917.828669ms] May 28 21:55:30.986: INFO: Created: latency-svc-6hcjx May 28 21:55:31.017: INFO: Got endpoints: latency-svc-6hcjx [923.370412ms] May 28 21:55:31.018: INFO: Created: latency-svc-2zrk2 May 28 21:55:31.034: INFO: Got endpoints: latency-svc-2zrk2 [906.731466ms] May 28 21:55:31.062: INFO: Created: latency-svc-g7kcx May 28 21:55:31.159: INFO: Got endpoints: latency-svc-g7kcx [995.615277ms] May 28 21:55:31.168: INFO: Created: latency-svc-rw2gz May 28 21:55:31.203: INFO: Got endpoints: latency-svc-rw2gz [950.980963ms] May 28 21:55:31.246: INFO: Created: latency-svc-hrvnl May 28 21:55:31.318: INFO: Got endpoints: latency-svc-hrvnl [1.008540078s] May 
28 21:55:31.341: INFO: Created: latency-svc-dhrsk May 28 21:55:31.359: INFO: Got endpoints: latency-svc-dhrsk [964.372136ms] May 28 21:55:31.383: INFO: Created: latency-svc-4krs6 May 28 21:55:31.402: INFO: Got endpoints: latency-svc-4krs6 [935.830048ms] May 28 21:55:31.470: INFO: Created: latency-svc-z8zml May 28 21:55:31.474: INFO: Got endpoints: latency-svc-z8zml [939.823575ms] May 28 21:55:31.503: INFO: Created: latency-svc-smv67 May 28 21:55:31.517: INFO: Got endpoints: latency-svc-smv67 [911.726537ms] May 28 21:55:31.557: INFO: Created: latency-svc-qlf8x May 28 21:55:31.608: INFO: Got endpoints: latency-svc-qlf8x [927.817317ms] May 28 21:55:31.623: INFO: Created: latency-svc-42zds May 28 21:55:31.637: INFO: Got endpoints: latency-svc-42zds [929.542487ms] May 28 21:55:31.659: INFO: Created: latency-svc-vth6h May 28 21:55:31.675: INFO: Got endpoints: latency-svc-vth6h [912.858566ms] May 28 21:55:31.708: INFO: Created: latency-svc-jvwgs May 28 21:55:31.764: INFO: Got endpoints: latency-svc-jvwgs [924.281403ms] May 28 21:55:31.797: INFO: Created: latency-svc-kbpdr May 28 21:55:31.813: INFO: Got endpoints: latency-svc-kbpdr [936.335742ms] May 28 21:55:31.833: INFO: Created: latency-svc-qdtj2 May 28 21:55:31.851: INFO: Got endpoints: latency-svc-qdtj2 [923.372187ms] May 28 21:55:31.914: INFO: Created: latency-svc-z89qs May 28 21:55:31.922: INFO: Got endpoints: latency-svc-z89qs [904.77955ms] May 28 21:55:31.978: INFO: Created: latency-svc-lhfxl May 28 21:55:31.994: INFO: Got endpoints: latency-svc-lhfxl [960.116181ms] May 28 21:55:32.064: INFO: Created: latency-svc-fdhgx May 28 21:55:32.067: INFO: Got endpoints: latency-svc-fdhgx [907.470825ms] May 28 21:55:32.202: INFO: Created: latency-svc-mjdxd May 28 21:55:32.210: INFO: Got endpoints: latency-svc-mjdxd [1.007536091s] May 28 21:55:32.241: INFO: Created: latency-svc-qjg62 May 28 21:55:32.259: INFO: Got endpoints: latency-svc-qjg62 [941.615901ms] May 28 21:55:32.289: INFO: Created: latency-svc-6bdrv May 28 21:55:32.345: INFO: Got endpoints: latency-svc-6bdrv [985.096193ms] May 28 21:55:32.373: INFO: Created: latency-svc-68jsh May 28 21:55:32.404: INFO: Got endpoints: latency-svc-68jsh [1.001744561s] May 28 21:55:32.439: INFO: Created: latency-svc-p5h9r May 28 21:55:32.500: INFO: Got endpoints: latency-svc-p5h9r [1.026417262s] May 28 21:55:32.502: INFO: Created: latency-svc-dxpwt May 28 21:55:32.513: INFO: Got endpoints: latency-svc-dxpwt [995.828076ms] May 28 21:55:32.565: INFO: Created: latency-svc-6lnqb May 28 21:55:32.598: INFO: Got endpoints: latency-svc-6lnqb [989.714924ms] May 28 21:55:32.704: INFO: Created: latency-svc-p76t7 May 28 21:55:32.721: INFO: Got endpoints: latency-svc-p76t7 [1.083826064s] May 28 21:55:32.752: INFO: Created: latency-svc-z79gh May 28 21:55:32.800: INFO: Got endpoints: latency-svc-z79gh [1.125174364s] May 28 21:55:32.839: INFO: Created: latency-svc-tbj62 May 28 21:55:32.854: INFO: Got endpoints: latency-svc-tbj62 [1.090093358s] May 28 21:55:32.885: INFO: Created: latency-svc-8hkjq May 28 21:55:32.944: INFO: Got endpoints: latency-svc-8hkjq [1.130723472s] May 28 21:55:32.967: INFO: Created: latency-svc-v9c4k May 28 21:55:32.975: INFO: Got endpoints: latency-svc-v9c4k [1.123765265s] May 28 21:55:33.003: INFO: Created: latency-svc-7khdq May 28 21:55:33.011: INFO: Got endpoints: latency-svc-7khdq [1.089103289s] May 28 21:55:33.040: INFO: Created: latency-svc-nqm5g May 28 21:55:33.099: INFO: Got endpoints: latency-svc-nqm5g [1.104777061s] May 28 21:55:33.123: INFO: Created: latency-svc-k9cmf May 28 21:55:33.139: 
INFO: Got endpoints: latency-svc-k9cmf [1.072231938s] May 28 21:55:33.170: INFO: Created: latency-svc-xh9dp May 28 21:55:33.273: INFO: Got endpoints: latency-svc-xh9dp [1.062221583s] May 28 21:55:33.291: INFO: Created: latency-svc-tkdhq May 28 21:55:33.307: INFO: Got endpoints: latency-svc-tkdhq [1.048104092s] May 28 21:55:33.357: INFO: Created: latency-svc-l6kcg May 28 21:55:33.411: INFO: Got endpoints: latency-svc-l6kcg [1.066202133s] May 28 21:55:33.423: INFO: Created: latency-svc-gm77s May 28 21:55:33.440: INFO: Got endpoints: latency-svc-gm77s [1.03627729s] May 28 21:55:33.464: INFO: Created: latency-svc-8pdgc May 28 21:55:33.476: INFO: Got endpoints: latency-svc-8pdgc [975.892491ms] May 28 21:55:33.591: INFO: Created: latency-svc-rjqjc May 28 21:55:33.593: INFO: Got endpoints: latency-svc-rjqjc [1.080317533s] May 28 21:55:33.627: INFO: Created: latency-svc-9wdjp May 28 21:55:33.645: INFO: Got endpoints: latency-svc-9wdjp [1.047129065s] May 28 21:55:33.680: INFO: Created: latency-svc-2vmgs May 28 21:55:33.734: INFO: Got endpoints: latency-svc-2vmgs [1.012553342s] May 28 21:55:33.764: INFO: Created: latency-svc-54xs7 May 28 21:55:33.784: INFO: Got endpoints: latency-svc-54xs7 [984.60851ms] May 28 21:55:33.812: INFO: Created: latency-svc-5xlcx May 28 21:55:33.826: INFO: Got endpoints: latency-svc-5xlcx [971.950191ms] May 28 21:55:33.879: INFO: Created: latency-svc-4vq2z May 28 21:55:33.927: INFO: Got endpoints: latency-svc-4vq2z [983.26464ms] May 28 21:55:33.962: INFO: Created: latency-svc-hzzh2 May 28 21:55:34.023: INFO: Got endpoints: latency-svc-hzzh2 [1.047485352s] May 28 21:55:34.028: INFO: Created: latency-svc-z2q2k May 28 21:55:34.038: INFO: Got endpoints: latency-svc-z2q2k [1.026270577s] May 28 21:55:34.071: INFO: Created: latency-svc-n9bgn May 28 21:55:34.092: INFO: Got endpoints: latency-svc-n9bgn [993.085634ms] May 28 21:55:34.120: INFO: Created: latency-svc-9m2nt May 28 21:55:34.165: INFO: Got endpoints: latency-svc-9m2nt [127.435881ms] May 28 21:55:34.179: INFO: Created: latency-svc-9vpsv May 28 21:55:34.195: INFO: Got endpoints: latency-svc-9vpsv [1.056058823s] May 28 21:55:34.220: INFO: Created: latency-svc-rkmqr May 28 21:55:34.238: INFO: Got endpoints: latency-svc-rkmqr [965.13053ms] May 28 21:55:34.303: INFO: Created: latency-svc-r2l45 May 28 21:55:34.310: INFO: Got endpoints: latency-svc-r2l45 [1.002652084s] May 28 21:55:34.347: INFO: Created: latency-svc-hph6f May 28 21:55:34.371: INFO: Got endpoints: latency-svc-hph6f [959.623987ms] May 28 21:55:34.399: INFO: Created: latency-svc-8dqdx May 28 21:55:34.453: INFO: Got endpoints: latency-svc-8dqdx [1.013288957s] May 28 21:55:34.491: INFO: Created: latency-svc-h9rbj May 28 21:55:34.521: INFO: Got endpoints: latency-svc-h9rbj [1.044652655s] May 28 21:55:34.615: INFO: Created: latency-svc-6n5fr May 28 21:55:34.647: INFO: Created: latency-svc-qqntw May 28 21:55:34.647: INFO: Got endpoints: latency-svc-6n5fr [1.053657678s] May 28 21:55:34.683: INFO: Got endpoints: latency-svc-qqntw [1.037936914s] May 28 21:55:34.712: INFO: Created: latency-svc-bg5p5 May 28 21:55:34.758: INFO: Got endpoints: latency-svc-bg5p5 [1.024551122s] May 28 21:55:34.784: INFO: Created: latency-svc-tctpk May 28 21:55:34.794: INFO: Got endpoints: latency-svc-tctpk [1.009608811s] May 28 21:55:34.824: INFO: Created: latency-svc-82q9g May 28 21:55:34.829: INFO: Got endpoints: latency-svc-82q9g [1.002845779s] May 28 21:55:34.857: INFO: Created: latency-svc-xxn9r May 28 21:55:34.908: INFO: Got endpoints: latency-svc-xxn9r [980.738084ms] May 28 21:55:34.922: 
INFO: Created: latency-svc-7dpd2 May 28 21:55:34.939: INFO: Got endpoints: latency-svc-7dpd2 [916.055477ms] May 28 21:55:34.959: INFO: Created: latency-svc-znc5m May 28 21:55:34.975: INFO: Got endpoints: latency-svc-znc5m [882.322089ms] May 28 21:55:34.994: INFO: Created: latency-svc-tpjrw May 28 21:55:35.005: INFO: Got endpoints: latency-svc-tpjrw [839.985652ms] May 28 21:55:35.064: INFO: Created: latency-svc-mxdfz May 28 21:55:35.071: INFO: Got endpoints: latency-svc-mxdfz [876.215371ms] May 28 21:55:35.096: INFO: Created: latency-svc-tk2pl May 28 21:55:35.114: INFO: Got endpoints: latency-svc-tk2pl [876.042025ms] May 28 21:55:35.139: INFO: Created: latency-svc-4gff2 May 28 21:55:35.201: INFO: Got endpoints: latency-svc-4gff2 [891.042894ms] May 28 21:55:35.235: INFO: Created: latency-svc-nt9t2 May 28 21:55:35.259: INFO: Got endpoints: latency-svc-nt9t2 [888.236377ms] May 28 21:55:35.351: INFO: Created: latency-svc-2wgkq May 28 21:55:35.367: INFO: Got endpoints: latency-svc-2wgkq [913.956379ms] May 28 21:55:35.390: INFO: Created: latency-svc-tcgxw May 28 21:55:35.410: INFO: Got endpoints: latency-svc-tcgxw [888.620862ms] May 28 21:55:35.433: INFO: Created: latency-svc-pnqmn May 28 21:55:35.483: INFO: Got endpoints: latency-svc-pnqmn [835.854286ms] May 28 21:55:35.504: INFO: Created: latency-svc-hr42q May 28 21:55:35.527: INFO: Got endpoints: latency-svc-hr42q [844.003926ms] May 28 21:55:35.565: INFO: Created: latency-svc-zzg9p May 28 21:55:35.573: INFO: Got endpoints: latency-svc-zzg9p [814.21823ms] May 28 21:55:35.627: INFO: Created: latency-svc-mnj7r May 28 21:55:35.633: INFO: Got endpoints: latency-svc-mnj7r [838.981284ms] May 28 21:55:35.672: INFO: Created: latency-svc-ltgfw May 28 21:55:35.782: INFO: Got endpoints: latency-svc-ltgfw [952.802473ms] May 28 21:55:35.785: INFO: Created: latency-svc-l8nw2 May 28 21:55:35.800: INFO: Got endpoints: latency-svc-l8nw2 [892.128975ms] May 28 21:55:35.828: INFO: Created: latency-svc-jz8k9 May 28 21:55:35.842: INFO: Got endpoints: latency-svc-jz8k9 [903.102223ms] May 28 21:55:35.870: INFO: Created: latency-svc-k7j4z May 28 21:55:35.919: INFO: Got endpoints: latency-svc-k7j4z [944.68157ms] May 28 21:55:35.936: INFO: Created: latency-svc-9crrg May 28 21:55:35.944: INFO: Got endpoints: latency-svc-9crrg [939.301224ms] May 28 21:55:35.966: INFO: Created: latency-svc-hq9t5 May 28 21:55:35.975: INFO: Got endpoints: latency-svc-hq9t5 [903.583731ms] May 28 21:55:36.008: INFO: Created: latency-svc-wjrt8 May 28 21:55:36.069: INFO: Got endpoints: latency-svc-wjrt8 [955.115585ms] May 28 21:55:36.073: INFO: Created: latency-svc-mld9k May 28 21:55:36.084: INFO: Got endpoints: latency-svc-mld9k [882.883262ms] May 28 21:55:36.140: INFO: Created: latency-svc-ws7xf May 28 21:55:36.151: INFO: Got endpoints: latency-svc-ws7xf [891.783921ms] May 28 21:55:36.213: INFO: Created: latency-svc-vhbv9 May 28 21:55:36.229: INFO: Got endpoints: latency-svc-vhbv9 [861.962661ms] May 28 21:55:36.270: INFO: Created: latency-svc-wld9z May 28 21:55:36.277: INFO: Got endpoints: latency-svc-wld9z [867.477414ms] May 28 21:55:36.357: INFO: Created: latency-svc-rv89c May 28 21:55:36.360: INFO: Got endpoints: latency-svc-rv89c [877.316449ms] May 28 21:55:36.416: INFO: Created: latency-svc-2r7fk May 28 21:55:36.446: INFO: Got endpoints: latency-svc-2r7fk [919.072517ms] May 28 21:55:36.489: INFO: Created: latency-svc-msj8s May 28 21:55:36.495: INFO: Got endpoints: latency-svc-msj8s [921.76379ms] May 28 21:55:36.518: INFO: Created: latency-svc-6vrd5 May 28 21:55:36.540: INFO: Got endpoints: 
latency-svc-6vrd5 [907.218594ms] May 28 21:55:36.560: INFO: Created: latency-svc-fhllk May 28 21:55:36.574: INFO: Got endpoints: latency-svc-fhllk [791.694281ms] May 28 21:55:36.621: INFO: Created: latency-svc-llrg9 May 28 21:55:36.637: INFO: Got endpoints: latency-svc-llrg9 [837.515605ms] May 28 21:55:36.704: INFO: Created: latency-svc-w6gzr May 28 21:55:36.765: INFO: Got endpoints: latency-svc-w6gzr [922.951423ms] May 28 21:55:36.790: INFO: Created: latency-svc-95kft May 28 21:55:36.803: INFO: Got endpoints: latency-svc-95kft [883.334772ms] May 28 21:55:36.825: INFO: Created: latency-svc-6vnq6 May 28 21:55:36.842: INFO: Got endpoints: latency-svc-6vnq6 [897.72132ms] May 28 21:55:36.908: INFO: Created: latency-svc-tkrxb May 28 21:55:36.913: INFO: Got endpoints: latency-svc-tkrxb [938.374095ms] May 28 21:55:36.956: INFO: Created: latency-svc-p7nc5 May 28 21:55:36.978: INFO: Got endpoints: latency-svc-p7nc5 [909.039188ms] May 28 21:55:36.998: INFO: Created: latency-svc-c28zd May 28 21:55:37.052: INFO: Got endpoints: latency-svc-c28zd [967.478841ms] May 28 21:55:37.064: INFO: Created: latency-svc-l7nfw May 28 21:55:37.075: INFO: Got endpoints: latency-svc-l7nfw [924.13792ms] May 28 21:55:37.100: INFO: Created: latency-svc-vfgv4 May 28 21:55:37.111: INFO: Got endpoints: latency-svc-vfgv4 [881.475067ms] May 28 21:55:37.136: INFO: Created: latency-svc-gw4cd May 28 21:55:37.148: INFO: Got endpoints: latency-svc-gw4cd [871.088922ms] May 28 21:55:37.201: INFO: Created: latency-svc-9m9jq May 28 21:55:37.249: INFO: Got endpoints: latency-svc-9m9jq [889.242053ms] May 28 21:55:37.430: INFO: Created: latency-svc-pllng May 28 21:55:37.435: INFO: Got endpoints: latency-svc-pllng [988.334994ms] May 28 21:55:37.491: INFO: Created: latency-svc-qswzx May 28 21:55:37.526: INFO: Got endpoints: latency-svc-qswzx [1.031710184s] May 28 21:55:37.596: INFO: Created: latency-svc-xs25h May 28 21:55:37.599: INFO: Got endpoints: latency-svc-xs25h [1.058937569s] May 28 21:55:37.664: INFO: Created: latency-svc-kcstc May 28 21:55:37.683: INFO: Got endpoints: latency-svc-kcstc [1.10920507s] May 28 21:55:37.746: INFO: Created: latency-svc-xsdkb May 28 21:55:37.764: INFO: Got endpoints: latency-svc-xsdkb [1.126163994s] May 28 21:55:37.802: INFO: Created: latency-svc-5cqvj May 28 21:55:37.822: INFO: Got endpoints: latency-svc-5cqvj [1.057318016s] May 28 21:55:37.878: INFO: Created: latency-svc-d26w6 May 28 21:55:37.888: INFO: Got endpoints: latency-svc-d26w6 [1.08525504s] May 28 21:55:37.916: INFO: Created: latency-svc-6f24x May 28 21:55:37.931: INFO: Got endpoints: latency-svc-6f24x [1.088377708s] May 28 21:55:37.976: INFO: Created: latency-svc-k6brt May 28 21:55:38.015: INFO: Got endpoints: latency-svc-k6brt [1.102131151s] May 28 21:55:38.030: INFO: Created: latency-svc-fnfzj May 28 21:55:38.040: INFO: Got endpoints: latency-svc-fnfzj [1.061057697s] May 28 21:55:38.077: INFO: Created: latency-svc-vp22r May 28 21:55:38.094: INFO: Got endpoints: latency-svc-vp22r [1.042404131s] May 28 21:55:38.154: INFO: Created: latency-svc-p8292 May 28 21:55:38.156: INFO: Got endpoints: latency-svc-p8292 [1.081029186s] May 28 21:55:38.187: INFO: Created: latency-svc-p294r May 28 21:55:38.196: INFO: Got endpoints: latency-svc-p294r [1.085494461s] May 28 21:55:38.222: INFO: Created: latency-svc-txvwv May 28 21:55:38.233: INFO: Got endpoints: latency-svc-txvwv [1.084999159s] May 28 21:55:38.303: INFO: Created: latency-svc-4tcsh May 28 21:55:38.353: INFO: Got endpoints: latency-svc-4tcsh [1.10359078s] May 28 21:55:38.395: INFO: Created: 
latency-svc-7kxl7 May 28 21:55:38.447: INFO: Got endpoints: latency-svc-7kxl7 [1.011745584s] May 28 21:55:38.462: INFO: Created: latency-svc-lllzd May 28 21:55:38.475: INFO: Got endpoints: latency-svc-lllzd [948.362875ms] May 28 21:55:38.501: INFO: Created: latency-svc-r579r May 28 21:55:38.512: INFO: Got endpoints: latency-svc-r579r [912.239065ms] May 28 21:55:38.533: INFO: Created: latency-svc-tlc5r May 28 21:55:38.615: INFO: Got endpoints: latency-svc-tlc5r [931.355415ms] May 28 21:55:38.617: INFO: Created: latency-svc-22zwk May 28 21:55:38.620: INFO: Got endpoints: latency-svc-22zwk [856.181582ms] May 28 21:55:38.684: INFO: Created: latency-svc-bt52j May 28 21:55:38.704: INFO: Got endpoints: latency-svc-bt52j [881.967664ms] May 28 21:55:38.773: INFO: Created: latency-svc-bzmx7 May 28 21:55:38.789: INFO: Got endpoints: latency-svc-bzmx7 [900.7074ms] May 28 21:55:38.816: INFO: Created: latency-svc-vjlrn May 28 21:55:38.831: INFO: Got endpoints: latency-svc-vjlrn [900.259799ms] May 28 21:55:38.902: INFO: Created: latency-svc-hb5r2 May 28 21:55:38.911: INFO: Got endpoints: latency-svc-hb5r2 [895.919032ms] May 28 21:55:39.381: INFO: Created: latency-svc-wdxjz May 28 21:55:39.836: INFO: Created: latency-svc-wm864 May 28 21:55:39.836: INFO: Got endpoints: latency-svc-wdxjz [1.796822525s] May 28 21:55:39.842: INFO: Got endpoints: latency-svc-wm864 [1.747751197s] May 28 21:55:39.883: INFO: Created: latency-svc-wxvjt May 28 21:55:39.911: INFO: Got endpoints: latency-svc-wxvjt [1.755463647s] May 28 21:55:39.981: INFO: Created: latency-svc-lss2d May 28 21:55:39.987: INFO: Got endpoints: latency-svc-lss2d [1.79011719s] May 28 21:55:40.014: INFO: Created: latency-svc-t6trd May 28 21:55:40.046: INFO: Got endpoints: latency-svc-t6trd [1.812250331s] May 28 21:55:40.075: INFO: Created: latency-svc-zgq88 May 28 21:55:40.123: INFO: Got endpoints: latency-svc-zgq88 [1.77032353s] May 28 21:55:40.147: INFO: Created: latency-svc-wfhr8 May 28 21:55:40.174: INFO: Got endpoints: latency-svc-wfhr8 [1.727817559s] May 28 21:55:40.212: INFO: Created: latency-svc-lh22g May 28 21:55:40.277: INFO: Got endpoints: latency-svc-lh22g [1.802611587s] May 28 21:55:40.320: INFO: Created: latency-svc-dmgml May 28 21:55:40.337: INFO: Got endpoints: latency-svc-dmgml [1.824842287s] May 28 21:55:40.387: INFO: Created: latency-svc-9x7vp May 28 21:55:40.396: INFO: Got endpoints: latency-svc-9x7vp [1.78186301s] May 28 21:55:40.416: INFO: Created: latency-svc-f94rf May 28 21:55:40.433: INFO: Got endpoints: latency-svc-f94rf [1.813433695s] May 28 21:55:40.458: INFO: Created: latency-svc-7g6zd May 28 21:55:40.531: INFO: Got endpoints: latency-svc-7g6zd [1.826089735s] May 28 21:55:40.535: INFO: Created: latency-svc-w72n4 May 28 21:55:40.555: INFO: Got endpoints: latency-svc-w72n4 [1.76565395s] May 28 21:55:40.584: INFO: Created: latency-svc-btt69 May 28 21:55:40.614: INFO: Got endpoints: latency-svc-btt69 [1.783035217s] May 28 21:55:40.614: INFO: Latencies: [89.596478ms 127.435881ms 195.32754ms 198.971079ms 253.351292ms 339.222261ms 419.755333ms 493.063784ms 632.806616ms 776.261173ms 791.694281ms 814.21823ms 834.765963ms 835.854286ms 837.515605ms 838.981284ms 839.985652ms 844.003926ms 856.181582ms 861.962661ms 867.477414ms 871.004115ms 871.088922ms 876.042025ms 876.215371ms 877.316449ms 881.475067ms 881.967664ms 882.322089ms 882.883262ms 883.334772ms 887.115864ms 887.288068ms 888.236377ms 888.620862ms 889.242053ms 891.042894ms 891.783921ms 892.128975ms 894.039412ms 895.919032ms 897.72132ms 900.259799ms 900.7074ms 903.102223ms 
903.583731ms 904.77955ms 906.731466ms 907.218594ms 907.470825ms 909.039188ms 911.444612ms 911.726537ms 912.239065ms 912.858566ms 913.589622ms 913.956379ms 916.055477ms 917.828669ms 918.613814ms 919.072517ms 921.76379ms 922.951423ms 923.370412ms 923.372187ms 924.13792ms 924.281403ms 925.189001ms 926.77363ms 927.801706ms 927.817317ms 929.542487ms 931.355415ms 934.534519ms 935.830048ms 936.335742ms 938.374095ms 939.301224ms 939.823575ms 941.615901ms 943.480601ms 944.41717ms 944.68157ms 948.362875ms 949.775205ms 950.980963ms 952.454936ms 952.60264ms 952.802473ms 955.115585ms 958.125208ms 959.623987ms 960.116181ms 964.372136ms 965.13053ms 967.478841ms 971.950191ms 973.411101ms 975.892491ms 976.158572ms 980.738084ms 983.26464ms 984.60851ms 985.096193ms 988.334994ms 989.714924ms 990.615859ms 992.544538ms 993.085634ms 995.615277ms 995.828076ms 996.617326ms 1.000456306s 1.001744561s 1.002652084s 1.002845779s 1.007447258s 1.007536091s 1.008540078s 1.009608811s 1.011013442s 1.011745584s 1.012553342s 1.013288957s 1.013795207s 1.014312176s 1.024551122s 1.02563219s 1.025643181s 1.025885539s 1.026270577s 1.026417262s 1.031710184s 1.03627729s 1.037936914s 1.040445894s 1.042404131s 1.044652655s 1.047129065s 1.047485352s 1.047764413s 1.048104092s 1.053657678s 1.056058823s 1.057318016s 1.058937569s 1.061057697s 1.062221583s 1.066202133s 1.072231938s 1.080317533s 1.080757151s 1.081029186s 1.083352198s 1.083826064s 1.084999159s 1.08525504s 1.085494461s 1.088377708s 1.089103289s 1.090093358s 1.098690315s 1.102131151s 1.10359078s 1.104777061s 1.10920507s 1.123765265s 1.125174364s 1.126163994s 1.130211147s 1.130723472s 1.131020982s 1.135954422s 1.140943748s 1.145900817s 1.151316826s 1.151448398s 1.161960939s 1.177827323s 1.186887724s 1.200271283s 1.202478136s 1.213609977s 1.215274591s 1.264709685s 1.267077681s 1.727817559s 1.747751197s 1.755463647s 1.76565395s 1.77032353s 1.78186301s 1.783035217s 1.79011719s 1.796822525s 1.802611587s 1.812250331s 1.813433695s 1.824842287s 1.826089735s] May 28 21:55:40.614: INFO: 50 %ile: 980.738084ms May 28 21:55:40.614: INFO: 90 %ile: 1.200271283s May 28 21:55:40.614: INFO: 99 %ile: 1.824842287s May 28 21:55:40.614: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:55:40.614: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-3079" for this suite. 
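For reference, the percentile summary above is just an order statistic over the 200 sorted samples. A minimal shell sketch of the same calculation, assuming one numeric latency value (in seconds) per line in a hypothetical samples.txt:

    # sort the samples, then pick the entries at the 50th/90th/99th positions
    sort -n samples.txt | awk '{ a[NR] = $1 }
      END { print "50 %ile:", a[int(NR*0.50)];
            print "90 %ile:", a[int(NR*0.90)];
            print "99 %ile:", a[int(NR*0.99)] }'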
• [SLOW TEST:18.520 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":155,"skipped":2407,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:55:40.664: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 21:55:40.758: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 28 21:55:40.900: INFO: stderr: "" May 28 21:55:40.900: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:55:40.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4222" for this suite. 
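The check above amounts to running kubectl version and asserting that both the client and server stanzas are printed. An illustrative stand-alone equivalent of that assertion:

    # both stanzas must appear in the output for the test to pass
    out="$(kubectl --kubeconfig=/root/.kube/config version)"
    echo "$out" | grep -q 'Client Version:' || echo "client version missing"
    echo "$out" | grep -q 'Server Version:' || echo "server version missing"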
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":156,"skipped":2410,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:55:40.909: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 28 21:55:40.956: INFO: namespace kubectl-7956 May 28 21:55:40.956: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-7956' May 28 21:55:41.252: INFO: stderr: "" May 28 21:55:41.252: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 28 21:55:42.257: INFO: Selector matched 1 pods for map[app:agnhost] May 28 21:55:42.258: INFO: Found 0 / 1 May 28 21:55:43.437: INFO: Selector matched 1 pods for map[app:agnhost] May 28 21:55:43.437: INFO: Found 0 / 1 May 28 21:55:44.258: INFO: Selector matched 1 pods for map[app:agnhost] May 28 21:55:44.258: INFO: Found 0 / 1 May 28 21:55:45.257: INFO: Selector matched 1 pods for map[app:agnhost] May 28 21:55:45.258: INFO: Found 1 / 1 May 28 21:55:45.258: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 28 21:55:45.262: INFO: Selector matched 1 pods for map[app:agnhost] May 28 21:55:45.262: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 28 21:55:45.262: INFO: wait on agnhost-master startup in kubectl-7956 May 28 21:55:45.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-2b5p9 agnhost-master --namespace=kubectl-7956' May 28 21:55:45.375: INFO: stderr: "" May 28 21:55:45.375: INFO: stdout: "Paused\n" STEP: exposing RC May 28 21:55:45.375: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-7956' May 28 21:55:45.515: INFO: stderr: "" May 28 21:55:45.515: INFO: stdout: "service/rm2 exposed\n" May 28 21:55:45.567: INFO: Service rm2 in namespace kubectl-7956 found. STEP: exposing service May 28 21:55:47.591: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-7956' May 28 21:55:47.776: INFO: stderr: "" May 28 21:55:47.776: INFO: stdout: "service/rm3 exposed\n" May 28 21:55:47.825: INFO: Service rm3 in namespace kubectl-7956 found. 
May 28 21:55:49.914: INFO: Get endpoints failed (interval 2s): endpoints "rm3" not found May 28 21:55:51.936: INFO: Get endpoints failed (interval 2s): endpoints "rm3" not found May 28 21:55:53.828: INFO: Get endpoints failed (interval 2s): endpoints "rm3" not found May 28 21:55:55.902: INFO: Get endpoints failed (interval 2s): endpoints "rm3" not found [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:55:57.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7956" for this suite. • [SLOW TEST:17.012 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":157,"skipped":2441,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:55:57.921: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 28 21:55:58.110: INFO: Waiting up to 5m0s for pod "pod-758822d7-76c9-4eb7-9c91-cbae766e0e6c" in namespace "emptydir-2589" to be "success or failure" May 28 21:55:58.125: INFO: Pod "pod-758822d7-76c9-4eb7-9c91-cbae766e0e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 15.223576ms May 28 21:56:00.298: INFO: Pod "pod-758822d7-76c9-4eb7-9c91-cbae766e0e6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.188081022s May 28 21:56:02.315: INFO: Pod "pod-758822d7-76c9-4eb7-9c91-cbae766e0e6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.204659541s STEP: Saw pod success May 28 21:56:02.315: INFO: Pod "pod-758822d7-76c9-4eb7-9c91-cbae766e0e6c" satisfied condition "success or failure" May 28 21:56:02.343: INFO: Trying to get logs from node jerma-worker2 pod pod-758822d7-76c9-4eb7-9c91-cbae766e0e6c container test-container: STEP: delete the pod May 28 21:56:02.883: INFO: Waiting for pod pod-758822d7-76c9-4eb7-9c91-cbae766e0e6c to disappear May 28 21:56:02.894: INFO: Pod pod-758822d7-76c9-4eb7-9c91-cbae766e0e6c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:56:02.895: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2589" for this suite. 
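The pod under test mounts a tmpfs-backed emptyDir as a non-root user and verifies 0777 permissions on the mount. A minimal sketch of such a pod (the name and image are illustrative; the e2e framework builds its own test container):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1001            # non-root, matching the test variant
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "stat -c '%a' /mnt"]
        volumeMounts:
        - name: scratch
          mountPath: /mnt
      volumes:
      - name: scratch
        emptyDir:
          medium: Memory           # tmpfs-backed emptyDir
    EOF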
• [SLOW TEST:5.090 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":158,"skipped":2442,"failed":0} SSSS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:56:03.011: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 28 21:56:03.927: INFO: created pod pod-service-account-defaultsa May 28 21:56:03.928: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 28 21:56:03.952: INFO: created pod pod-service-account-mountsa May 28 21:56:03.952: INFO: pod pod-service-account-mountsa service account token volume mount: true May 28 21:56:03.970: INFO: created pod pod-service-account-nomountsa May 28 21:56:03.970: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 28 21:56:04.071: INFO: created pod pod-service-account-defaultsa-mountspec May 28 21:56:04.071: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 28 21:56:04.093: INFO: created pod pod-service-account-mountsa-mountspec May 28 21:56:04.093: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 28 21:56:04.292: INFO: created pod pod-service-account-nomountsa-mountspec May 28 21:56:04.292: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 28 21:56:04.509: INFO: created pod pod-service-account-defaultsa-nomountspec May 28 21:56:04.509: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 28 21:56:04.541: INFO: created pod pod-service-account-mountsa-nomountspec May 28 21:56:04.541: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 28 21:56:04.670: INFO: created pod pod-service-account-nomountsa-nomountspec May 28 21:56:04.670: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:56:04.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-16" for this suite. 
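Opting out of automount can be declared on the ServiceAccount or, as several of the pods above do, directly on the pod spec. A minimal illustrative pod-level opt-out (the pod name is a placeholder):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: nomount-demo
    spec:
      automountServiceAccountToken: false   # no token volume is mounted
      containers:
      - name: main
        image: busybox
        command: ["sleep", "3600"]
    EOF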
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":159,"skipped":2446,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:56:04.725: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585 [It] should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 28 21:56:04.952: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5070' May 28 21:56:05.064: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 28 21:56:05.064: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created May 28 21:56:05.127: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0 May 28 21:56:05.226: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0 STEP: rolling-update to same image controller May 28 21:56:05.255: INFO: scanned /root for discovery docs: May 28 21:56:05.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5070' May 28 21:56:31.409: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 28 21:56:31.409: INFO: stdout: "Created e2e-test-httpd-rc-4528022c335230ce4abe2abd0e4f769f\nScaling up e2e-test-httpd-rc-4528022c335230ce4abe2abd0e4f769f from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-4528022c335230ce4abe2abd0e4f769f up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-4528022c335230ce4abe2abd0e4f769f to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n" May 28 21:56:31.409: INFO: stdout: "Created e2e-test-httpd-rc-4528022c335230ce4abe2abd0e4f769f\nScaling up e2e-test-httpd-rc-4528022c335230ce4abe2abd0e4f769f from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-4528022c335230ce4abe2abd0e4f769f up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. 
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up. May 28 21:56:31.409: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5070' May 28 21:56:31.507: INFO: stderr: "" May 28 21:56:31.507: INFO: stdout: "e2e-test-httpd-rc-4528022c335230ce4abe2abd0e4f769f-tf59m " May 28 21:56:31.507: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-4528022c335230ce4abe2abd0e4f769f-tf59m -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5070' May 28 21:56:31.597: INFO: stderr: "" May 28 21:56:31.597: INFO: stdout: "true" May 28 21:56:31.597: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-4528022c335230ce4abe2abd0e4f769f-tf59m -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5070' May 28 21:56:31.695: INFO: stderr: "" May 28 21:56:31.695: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine" May 28 21:56:31.695: INFO: e2e-test-httpd-rc-4528022c335230ce4abe2abd0e4f769f-tf59m is verified up and running [AfterEach] Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591 May 28 21:56:31.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5070' May 28 21:56:31.817: INFO: stderr: "" May 28 21:56:31.817: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:56:31.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5070" for this suite.
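As the stderr above notes, rolling-update is deprecated. A sketch of the command exercised here next to a rough modern Deployment-based equivalent (the deployment name httpd is illustrative and not part of this run):

    # deprecated path, as run by the test
    kubectl rolling-update e2e-test-httpd-rc --update-period=1s \
      --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent

    # rough modern equivalent for a Deployment
    kubectl set image deployment/httpd httpd=docker.io/library/httpd:2.4.38-alpine
    kubectl rollout status deployment/httpd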
• [SLOW TEST:27.109 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl rolling-update /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580 should support rolling-update to same image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":160,"skipped":2449,"failed":0} [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:56:31.834: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0528 21:56:43.767242 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 28 21:56:43.767: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:56:43.767: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-1573" for this suite. 
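The pods that survive the deletion above are the ones carrying a second, still-valid ownerReference. An illustrative way to inspect the owners recorded on a pod (the pod name is a placeholder):

    # list the names of all owners recorded on a pod
    kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[*].name}'
    # deleting one owner leaves a dependent alive as long as another valid owner remains
    kubectl delete rc simpletest-rc-to-be-deleted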
• [SLOW TEST:12.131 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":161,"skipped":2449,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:56:43.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 28 21:56:44.220: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8eaa4da1-e610-41aa-a95a-88ace2eb5195" in namespace "downward-api-8796" to be "success or failure" May 28 21:56:44.231: INFO: Pod "downwardapi-volume-8eaa4da1-e610-41aa-a95a-88ace2eb5195": Phase="Pending", Reason="", readiness=false. Elapsed: 10.763861ms May 28 21:56:46.235: INFO: Pod "downwardapi-volume-8eaa4da1-e610-41aa-a95a-88ace2eb5195": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01510976s May 28 21:56:48.473: INFO: Pod "downwardapi-volume-8eaa4da1-e610-41aa-a95a-88ace2eb5195": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.252447854s STEP: Saw pod success May 28 21:56:48.473: INFO: Pod "downwardapi-volume-8eaa4da1-e610-41aa-a95a-88ace2eb5195" satisfied condition "success or failure" May 28 21:56:48.492: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-8eaa4da1-e610-41aa-a95a-88ace2eb5195 container client-container: STEP: delete the pod May 28 21:56:48.530: INFO: Waiting for pod downwardapi-volume-8eaa4da1-e610-41aa-a95a-88ace2eb5195 to disappear May 28 21:56:48.534: INFO: Pod downwardapi-volume-8eaa4da1-e610-41aa-a95a-88ace2eb5195 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:56:48.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8796" for this suite. 
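The downwardAPI volume in this test projects metadata.name into a file that the container then prints. A minimal sketch of such a pod (names and image are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/podname"]
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
    EOF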
•{"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":162,"skipped":2454,"failed":0} SSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:56:48.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-8135/secret-test-d0138258-4d8c-4eeb-96df-7147656e8164 STEP: Creating a pod to test consume secrets May 28 21:56:48.732: INFO: Waiting up to 5m0s for pod "pod-configmaps-29c8e156-b592-4406-bdcf-b0323b087bb0" in namespace "secrets-8135" to be "success or failure" May 28 21:56:48.744: INFO: Pod "pod-configmaps-29c8e156-b592-4406-bdcf-b0323b087bb0": Phase="Pending", Reason="", readiness=false. Elapsed: 11.362111ms May 28 21:56:50.975: INFO: Pod "pod-configmaps-29c8e156-b592-4406-bdcf-b0323b087bb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.243001448s May 28 21:56:52.980: INFO: Pod "pod-configmaps-29c8e156-b592-4406-bdcf-b0323b087bb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247082829s May 28 21:56:54.985: INFO: Pod "pod-configmaps-29c8e156-b592-4406-bdcf-b0323b087bb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.252087824s STEP: Saw pod success May 28 21:56:54.985: INFO: Pod "pod-configmaps-29c8e156-b592-4406-bdcf-b0323b087bb0" satisfied condition "success or failure" May 28 21:56:54.988: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-29c8e156-b592-4406-bdcf-b0323b087bb0 container env-test: STEP: delete the pod May 28 21:56:55.018: INFO: Waiting for pod pod-configmaps-29c8e156-b592-4406-bdcf-b0323b087bb0 to disappear May 28 21:56:55.065: INFO: Pod pod-configmaps-29c8e156-b592-4406-bdcf-b0323b087bb0 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:56:55.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8135" for this suite. 
• [SLOW TEST:6.539 seconds] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets.go:31 should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2458,"failed":0} [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:56:55.080: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 28 21:56:55.160: INFO: >>> kubeConfig: /root/.kube/config May 28 21:56:58.094: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:57:07.595: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9001" for this suite. • [SLOW TEST:12.520 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":164,"skipped":2458,"failed":0} SSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:57:07.600: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with terminating scopes. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not terminating scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a long running pod STEP: Ensuring resource quota with not terminating scope captures the pod usage STEP: Ensuring resource quota with terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a terminating pod STEP: Ensuring resource quota with terminating scope captures the pod usage STEP: Ensuring resource quota with not terminating scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:57:23.873: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-3773" for this suite. • [SLOW TEST:16.282 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with terminating scopes. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":165,"skipped":2463,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:57:23.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0528 21:57:34.075108 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 28 21:57:34.075: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:57:34.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6206" for this suite. • [SLOW TEST:10.199 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":166,"skipped":2504,"failed":0} [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:57:34.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5531.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5531.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5531.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5531.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 28 21:57:40.230: INFO: DNS probes using dns-test-f791e498-9cae-493e-ac62-d76b2b454318 succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5531.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-5531.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short 
dns-test-service-3.dns-5531.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-5531.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 28 21:57:46.384: INFO: File wheezy_udp@dns-test-service-3.dns-5531.svc.cluster.local from pod dns-5531/dns-test-5067bcea-98aa-4089-9cad-81ef98eaf60e contains 'foo.example.com. ' instead of 'bar.example.com.' May 28 21:57:46.387: INFO: File jessie_udp@dns-test-service-3.dns-5531.svc.cluster.local from pod dns-5531/dns-test-5067bcea-98aa-4089-9cad-81ef98eaf60e contains 'foo.example.com. ' instead of 'bar.example.com.' May 28 21:57:46.387: INFO: Lookups using dns-5531/dns-test-5067bcea-98aa-4089-9cad-81ef98eaf60e failed for: [wheezy_udp@dns-test-service-3.dns-5531.svc.cluster.local jessie_udp@dns-test-service-3.dns-5531.svc.cluster.local] May 28 21:57:51.392: INFO: File wheezy_udp@dns-test-service-3.dns-5531.svc.cluster.local from pod dns-5531/dns-test-5067bcea-98aa-4089-9cad-81ef98eaf60e contains 'foo.example.com. ' instead of 'bar.example.com.' May 28 21:57:51.402: INFO: File jessie_udp@dns-test-service-3.dns-5531.svc.cluster.local from pod dns-5531/dns-test-5067bcea-98aa-4089-9cad-81ef98eaf60e contains 'foo.example.com. ' instead of 'bar.example.com.' May 28 21:57:51.403: INFO: Lookups using dns-5531/dns-test-5067bcea-98aa-4089-9cad-81ef98eaf60e failed for: [wheezy_udp@dns-test-service-3.dns-5531.svc.cluster.local jessie_udp@dns-test-service-3.dns-5531.svc.cluster.local] May 28 21:57:56.402: INFO: File wheezy_udp@dns-test-service-3.dns-5531.svc.cluster.local from pod dns-5531/dns-test-5067bcea-98aa-4089-9cad-81ef98eaf60e contains 'foo.example.com. ' instead of 'bar.example.com.' May 28 21:57:56.405: INFO: File jessie_udp@dns-test-service-3.dns-5531.svc.cluster.local from pod dns-5531/dns-test-5067bcea-98aa-4089-9cad-81ef98eaf60e contains 'foo.example.com. ' instead of 'bar.example.com.' May 28 21:57:56.405: INFO: Lookups using dns-5531/dns-test-5067bcea-98aa-4089-9cad-81ef98eaf60e failed for: [wheezy_udp@dns-test-service-3.dns-5531.svc.cluster.local jessie_udp@dns-test-service-3.dns-5531.svc.cluster.local] May 28 21:58:01.392: INFO: File wheezy_udp@dns-test-service-3.dns-5531.svc.cluster.local from pod dns-5531/dns-test-5067bcea-98aa-4089-9cad-81ef98eaf60e contains 'foo.example.com. ' instead of 'bar.example.com.' May 28 21:58:01.395: INFO: File jessie_udp@dns-test-service-3.dns-5531.svc.cluster.local from pod dns-5531/dns-test-5067bcea-98aa-4089-9cad-81ef98eaf60e contains 'foo.example.com. ' instead of 'bar.example.com.' May 28 21:58:01.395: INFO: Lookups using dns-5531/dns-test-5067bcea-98aa-4089-9cad-81ef98eaf60e failed for: [wheezy_udp@dns-test-service-3.dns-5531.svc.cluster.local jessie_udp@dns-test-service-3.dns-5531.svc.cluster.local] May 28 21:58:06.432: INFO: File wheezy_udp@dns-test-service-3.dns-5531.svc.cluster.local from pod dns-5531/dns-test-5067bcea-98aa-4089-9cad-81ef98eaf60e contains 'foo.example.com. ' instead of 'bar.example.com.' May 28 21:58:06.435: INFO: File jessie_udp@dns-test-service-3.dns-5531.svc.cluster.local from pod dns-5531/dns-test-5067bcea-98aa-4089-9cad-81ef98eaf60e contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 28 21:58:06.435: INFO: Lookups using dns-5531/dns-test-5067bcea-98aa-4089-9cad-81ef98eaf60e failed for: [wheezy_udp@dns-test-service-3.dns-5531.svc.cluster.local jessie_udp@dns-test-service-3.dns-5531.svc.cluster.local] May 28 21:58:11.394: INFO: DNS probes using dns-test-5067bcea-98aa-4089-9cad-81ef98eaf60e succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5531.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-5531.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-5531.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-5531.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 28 21:58:18.102: INFO: DNS probes using dns-test-b0812215-0d04-45c0-8fbd-85f9dd8bf4e0 succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:58:18.182: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5531" for this suite. • [SLOW TEST:44.116 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":167,"skipped":2504,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:58:18.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 28 21:58:18.661: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c8999ccf-dcba-4a3a-9c6d-bb35dee765bc" in namespace "projected-9039" to be "success or failure" May 28 21:58:18.666: INFO: Pod "downwardapi-volume-c8999ccf-dcba-4a3a-9c6d-bb35dee765bc": Phase="Pending", Reason="", readiness=false. Elapsed: 5.585401ms May 28 21:58:20.671: INFO: Pod "downwardapi-volume-c8999ccf-dcba-4a3a-9c6d-bb35dee765bc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009686512s May 28 21:58:22.675: INFO: Pod "downwardapi-volume-c8999ccf-dcba-4a3a-9c6d-bb35dee765bc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.014097637s STEP: Saw pod success May 28 21:58:22.675: INFO: Pod "downwardapi-volume-c8999ccf-dcba-4a3a-9c6d-bb35dee765bc" satisfied condition "success or failure" May 28 21:58:22.678: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c8999ccf-dcba-4a3a-9c6d-bb35dee765bc container client-container: STEP: delete the pod May 28 21:58:22.710: INFO: Waiting for pod downwardapi-volume-c8999ccf-dcba-4a3a-9c6d-bb35dee765bc to disappear May 28 21:58:22.754: INFO: Pod downwardapi-volume-c8999ccf-dcba-4a3a-9c6d-bb35dee765bc no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:58:22.754: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9039" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2538,"failed":0} SSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:58:22.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 28 21:58:27.370: INFO: Successfully updated pod "pod-update-activedeadlineseconds-156d48a7-2537-42a4-830e-4354da0cce97" May 28 21:58:27.370: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-156d48a7-2537-42a4-830e-4354da0cce97" in namespace "pods-1699" to be "terminated due to deadline exceeded" May 28 21:58:27.406: INFO: Pod "pod-update-activedeadlineseconds-156d48a7-2537-42a4-830e-4354da0cce97": Phase="Running", Reason="", readiness=true. Elapsed: 36.152405ms May 28 21:58:29.410: INFO: Pod "pod-update-activedeadlineseconds-156d48a7-2537-42a4-830e-4354da0cce97": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.040417301s May 28 21:58:29.410: INFO: Pod "pod-update-activedeadlineseconds-156d48a7-2537-42a4-830e-4354da0cce97" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:58:29.410: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1699" for this suite. 
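A note on the spec above: spec.activeDeadlineSeconds is one of the few pod fields that may be changed on a running pod (it can be added or shortened, not extended), and once the deadline passes the kubelet fails the pod with reason DeadlineExceeded, which is exactly the condition string the poll loop waits for. A minimal sketch of such a pod using the k8s.io/api Go types; the name, image, and deadline below are illustrative assumptions, not values from this run:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Illustrative deadline: after 5s of Running, the kubelet fails the
	// pod with Phase=Failed, Reason=DeadlineExceeded.
	deadline := int64(5)
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-update-activedeadlineseconds"},
		Spec: corev1.PodSpec{
			ActiveDeadlineSeconds: &deadline,
			RestartPolicy:         corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "main",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}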
• [SLOW TEST:6.637 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2544,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:58:29.417: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 28 21:58:29.498: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:58:46.025: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4904" for this suite. 
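For context: the spec above serves a CRD with multiple versions, renames one version, and checks that the apiserver republishes the OpenAPI document under the new name while dropping the old one. A rough sketch of a two-version CRD using the apiextensions v1 Go types; the group, kind, and version names are illustrative, not the randomized ones the test generates:

package main

import (
	"encoding/json"
	"fmt"

	apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	schema := &apiextv1.CustomResourceValidation{
		OpenAPIV3Schema: &apiextv1.JSONSchemaProps{Type: "object"},
	}
	crd := &apiextv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextv1.NamespaceScoped,
			Names: apiextv1.CustomResourceDefinitionNames{
				Plural: "foos", Singular: "foo", Kind: "Foo", ListKind: "FooList",
			},
			Versions: []apiextv1.CustomResourceDefinitionVersion{
				// Renaming a served version (say v2 to v3) and updating the
				// CRD is what makes the apiserver publish the spec under the
				// new version name, the behavior the checks above verify.
				{Name: "v1", Served: true, Storage: true, Schema: schema},
				{Name: "v2", Served: true, Storage: false, Schema: schema},
			},
		},
	}
	out, _ := json.MarshalIndent(crd, "", "  ")
	fmt.Println(string(out))
}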
• [SLOW TEST:16.616 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":170,"skipped":2565,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:58:46.034: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 28 21:58:46.178: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:58:46.200: INFO: Number of nodes with available pods: 0 May 28 21:58:46.200: INFO: Node jerma-worker is running more than one daemon pod May 28 21:58:47.206: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:58:47.210: INFO: Number of nodes with available pods: 0 May 28 21:58:47.210: INFO: Node jerma-worker is running more than one daemon pod May 28 21:58:48.487: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:58:48.739: INFO: Number of nodes with available pods: 0 May 28 21:58:48.739: INFO: Node jerma-worker is running more than one daemon pod May 28 21:58:49.206: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:58:49.211: INFO: Number of nodes with available pods: 0 May 28 21:58:49.211: INFO: Node jerma-worker is running more than one daemon pod May 28 21:58:50.206: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:58:50.211: INFO: Number of nodes with available pods: 0 May 28 21:58:50.211: INFO: Node jerma-worker is running more than one daemon pod May 28 21:58:51.204: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: 
Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:58:51.208: INFO: Number of nodes with available pods: 2 May 28 21:58:51.208: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that the daemon pod is revived. May 28 21:58:51.299: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 28 21:58:51.318: INFO: Number of nodes with available pods: 2 May 28 21:58:51.318: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-5403, will wait for the garbage collector to delete the pods May 28 21:58:52.389: INFO: Deleting DaemonSet.extensions daemon-set took: 6.179762ms May 28 21:58:52.489: INFO: Terminating DaemonSet.extensions daemon-set pods took: 100.223334ms May 28 21:58:59.593: INFO: Number of nodes with available pods: 0 May 28 21:58:59.593: INFO: Number of running nodes: 0, number of available pods: 0 May 28 21:58:59.596: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-5403/daemonsets","resourceVersion":"19911016"},"items":null} May 28 21:58:59.599: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-5403/pods","resourceVersion":"19911016"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:58:59.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-5403" for this suite. 
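Two things are worth noting in the log above. First, the repeated "Node jerma-worker is running more than one daemon pod" lines are the e2e helper's wording for any node that does not yet have exactly one available daemon pod, so they also appear while the count is still zero. Second, the pod template carries no toleration for the node-role.kubernetes.io/master taint, which is why the control-plane node is skipped on every pass. A minimal sketch of a DaemonSet of this shape (k8s.io/api types; name, labels, and image are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	ds := &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				// No toleration for node-role.kubernetes.io/master here, so
				// the controller skips the tainted control-plane node.
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "nginx",
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(ds, "", "  ")
	fmt.Println(string(out))
}

The "revived" part of the spec then forces one daemon pod's status.phase to Failed and relies on the DaemonSet controller's sync loop to delete and recreate it.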
• [SLOW TEST:13.584 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":171,"skipped":2645,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:58:59.619: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-148.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-148.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-148.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-148.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-148.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-148.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-148.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 28 21:59:05.747: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:05.751: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:05.754: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:05.758: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:05.767: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:05.770: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:05.773: INFO: Unable to read jessie_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:05.776: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:05.782: INFO: Lookups using dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local] May 28 21:59:10.788: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods 
dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:10.792: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:10.796: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:10.798: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:10.806: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:10.809: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:10.812: INFO: Unable to read jessie_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:10.814: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:10.821: INFO: Lookups using dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local] May 28 21:59:15.788: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:15.792: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:15.795: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:15.798: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod 
dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:15.808: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:15.811: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:15.815: INFO: Unable to read jessie_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:15.818: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:15.824: INFO: Lookups using dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local] May 28 21:59:20.787: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:20.791: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:20.795: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:20.799: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:20.810: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:20.812: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 
21:59:20.815: INFO: Unable to read jessie_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:20.818: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:20.824: INFO: Lookups using dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local] May 28 21:59:25.787: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:25.791: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:25.794: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:25.796: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:25.804: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:25.807: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:25.809: INFO: Unable to read jessie_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:25.811: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:25.817: INFO: Lookups using dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local] May 28 21:59:30.787: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:30.791: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:30.794: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:30.798: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:30.807: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:30.809: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:30.812: INFO: Unable to read jessie_udp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:30.815: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local from pod dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea: the server could not find the requested resource (get pods dns-test-93171419-b289-4cbe-86bb-3be45ef52eea) May 28 21:59:30.820: INFO: Lookups using dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local wheezy_udp@dns-test-service-2.dns-148.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-148.svc.cluster.local jessie_udp@dns-test-service-2.dns-148.svc.cluster.local jessie_tcp@dns-test-service-2.dns-148.svc.cluster.local] May 28 21:59:35.874: INFO: DNS probes using dns-148/dns-test-93171419-b289-4cbe-86bb-3be45ef52eea succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 
21:59:36.511: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-148" for this suite. • [SLOW TEST:36.917 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":172,"skipped":2683,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:59:36.536: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:59:43.694: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-501" for this suite. • [SLOW TEST:7.165 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance]","total":278,"completed":173,"skipped":2694,"failed":0} SSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:59:43.702: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command May 28 21:59:43.803: INFO: Waiting up to 5m0s for pod "var-expansion-fa75eb97-48d0-4d00-8cdb-8be827ee4fec" in namespace "var-expansion-5481" to be "success or failure" May 28 21:59:43.823: INFO: Pod "var-expansion-fa75eb97-48d0-4d00-8cdb-8be827ee4fec": Phase="Pending", Reason="", readiness=false. Elapsed: 20.043878ms May 28 21:59:45.827: INFO: Pod "var-expansion-fa75eb97-48d0-4d00-8cdb-8be827ee4fec": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024135113s May 28 21:59:47.831: INFO: Pod "var-expansion-fa75eb97-48d0-4d00-8cdb-8be827ee4fec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028083761s STEP: Saw pod success May 28 21:59:47.831: INFO: Pod "var-expansion-fa75eb97-48d0-4d00-8cdb-8be827ee4fec" satisfied condition "success or failure" May 28 21:59:47.833: INFO: Trying to get logs from node jerma-worker pod var-expansion-fa75eb97-48d0-4d00-8cdb-8be827ee4fec container dapi-container: STEP: delete the pod May 28 21:59:47.987: INFO: Waiting for pod var-expansion-fa75eb97-48d0-4d00-8cdb-8be827ee4fec to disappear May 28 21:59:47.998: INFO: Pod var-expansion-fa75eb97-48d0-4d00-8cdb-8be827ee4fec no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:59:47.998: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-5481" for this suite. 
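The substitution under test here is kubelet-side expansion: $(VAR) references in a container's command and args are resolved from that container's env before the process starts, with no shell involved. A minimal sketch with illustrative names and values:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "var-expansion"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:  "dapi-container",
				Image: "busybox",
				Env: []corev1.EnvVar{
					{Name: "MESSAGE", Value: "test-value"},
				},
				// $(MESSAGE) is expanded by the kubelet from the env above
				// before exec; the container never sees the literal string.
				Command: []string{"echo", "$(MESSAGE)"},
			}},
		},
	}
	out, _ := json.MarshalIndent(pod, "", "  ")
	fmt.Println(string(out))
}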
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":174,"skipped":2702,"failed":0} SSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:59:48.044: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 28 21:59:48.119: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 21:59:59.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7377" for this suite. • [SLOW TEST:11.445 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2707,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 21:59:59.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 28 21:59:59.537: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4729' May 28 21:59:59.902: INFO: stderr: "" May 28 21:59:59.902: INFO: stdout: "pod/pause created\n" May 28 21:59:59.902: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 28 21:59:59.902: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-4729" to be "running and ready" May 28 21:59:59.926: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. 
Elapsed: 24.35209ms May 28 22:00:01.931: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028531474s May 28 22:00:03.935: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.033117303s May 28 22:00:03.935: INFO: Pod "pause" satisfied condition "running and ready" May 28 22:00:03.935: INFO: Wanted all 1 pods to be running and ready. Result: true. Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 28 22:00:03.935: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-4729' May 28 22:00:04.030: INFO: stderr: "" May 28 22:00:04.030: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 28 22:00:04.030: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4729' May 28 22:00:04.116: INFO: stderr: "" May 28 22:00:04.116: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s testing-label-value\n" STEP: removing the label testing-label of a pod May 28 22:00:04.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-4729' May 28 22:00:04.221: INFO: stderr: "" May 28 22:00:04.221: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 28 22:00:04.221: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-4729' May 28 22:00:04.317: INFO: stderr: "" May 28 22:00:04.317: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 5s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 28 22:00:04.317: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4729' May 28 22:00:04.452: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 28 22:00:04.452: INFO: stdout: "pod \"pause\" force deleted\n" May 28 22:00:04.452: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-4729' May 28 22:00:04.559: INFO: stderr: "No resources found in kubectl-4729 namespace.\n" May 28 22:00:04.559: INFO: stdout: "" May 28 22:00:04.559: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-4729 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 28 22:00:04.723: INFO: stderr: "" May 28 22:00:04.723: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:00:04.723: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4729" for this suite. 
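Behind the kubectl invocations above, adding and removing a label both come down to a PATCH against the pod's metadata.labels; under merge-patch semantics, setting a key to null deletes it, which is what the trailing-dash form (testing-label-) requests. A sketch of the equivalent patch bodies; the exact patch flavor kubectl sends can vary by resource type, so treat this as illustrative:

package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/types"
)

func main() {
	// Equivalent of: kubectl label pods pause testing-label=testing-label-value
	addLabel := []byte(`{"metadata":{"labels":{"testing-label":"testing-label-value"}}}`)
	// Equivalent of: kubectl label pods pause testing-label-
	// (a null value deletes the key under JSON merge-patch semantics)
	removeLabel := []byte(`{"metadata":{"labels":{"testing-label":null}}}`)

	fmt.Printf("patch type: %s\n", types.MergePatchType)
	fmt.Printf("add:    %s\n", addLabel)
	fmt.Printf("remove: %s\n", removeLabel)
}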
• [SLOW TEST:5.307 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":176,"skipped":2710,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:00:04.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-c87bffaf-711f-4942-a899-6578fad30bff STEP: Creating configMap with name cm-test-opt-upd-04b88043-2158-4ffc-9848-3ec405c7ee24 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-c87bffaf-711f-4942-a899-6578fad30bff STEP: Updating configmap cm-test-opt-upd-04b88043-2158-4ffc-9848-3ec405c7ee24 STEP: Creating configMap with name cm-test-opt-create-385191f3-159a-4fc4-bee3-b353f26d6343 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:00:15.319: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1564" for this suite. 
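The volume in this spec projects configMaps marked optional, so deleting one source keeps the volume healthy while the kubelet's sync loop folds updates and newly created sources into the mounted files without restarting the pod. A rough sketch of such a volume definition (k8s.io/api types; the names echo, but are not copied from, the generated ones above):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	optional := true
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				Sources: []corev1.VolumeProjection{
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-del"},
						// Optional: the volume stays mountable even after
						// this configMap is deleted, as the test does above.
						Optional: &optional,
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "cm-test-opt-upd"},
						Optional:             &optional,
					}},
				},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}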
• [SLOW TEST:10.530 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":177,"skipped":2722,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:00:15.328: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-f8c2b595-d148-4784-8674-5eecfe48f882 STEP: Creating a pod to test consume configMaps May 28 22:00:15.410: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e7988297-a633-4698-87e5-d3d446bea87f" in namespace "projected-9129" to be "success or failure" May 28 22:00:15.418: INFO: Pod "pod-projected-configmaps-e7988297-a633-4698-87e5-d3d446bea87f": Phase="Pending", Reason="", readiness=false. Elapsed: 8.536618ms May 28 22:00:17.516: INFO: Pod "pod-projected-configmaps-e7988297-a633-4698-87e5-d3d446bea87f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.106549362s May 28 22:00:19.521: INFO: Pod "pod-projected-configmaps-e7988297-a633-4698-87e5-d3d446bea87f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111068739s STEP: Saw pod success May 28 22:00:19.521: INFO: Pod "pod-projected-configmaps-e7988297-a633-4698-87e5-d3d446bea87f" satisfied condition "success or failure" May 28 22:00:19.524: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-e7988297-a633-4698-87e5-d3d446bea87f container projected-configmap-volume-test: STEP: delete the pod May 28 22:00:19.559: INFO: Waiting for pod pod-projected-configmaps-e7988297-a633-4698-87e5-d3d446bea87f to disappear May 28 22:00:19.574: INFO: Pod pod-projected-configmaps-e7988297-a633-4698-87e5-d3d446bea87f no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:00:19.574: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9129" for this suite. 
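defaultMode on a projected volume sets the permission bits applied to the projected files, which is what the container in this spec reads back and verifies. A sketch; the 0400 mode here is an assumed example, not necessarily the value this run used:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Octal 0400: the projected file shows up as read-only for the owner.
	mode := int32(0400)
	vol := corev1.Volume{
		Name: "projected-configmap-volume",
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{{
					ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume"},
					},
				}},
			},
		},
	}
	out, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(out))
}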
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":178,"skipped":2767,"failed":0} SSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:00:19.582: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-zx2v STEP: Creating a pod to test atomic-volume-subpath May 28 22:00:20.023: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-zx2v" in namespace "subpath-2950" to be "success or failure" May 28 22:00:20.048: INFO: Pod "pod-subpath-test-secret-zx2v": Phase="Pending", Reason="", readiness=false. Elapsed: 25.264174ms May 28 22:00:22.206: INFO: Pod "pod-subpath-test-secret-zx2v": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183251185s May 28 22:00:24.211: INFO: Pod "pod-subpath-test-secret-zx2v": Phase="Running", Reason="", readiness=true. Elapsed: 4.187834559s May 28 22:00:26.215: INFO: Pod "pod-subpath-test-secret-zx2v": Phase="Running", Reason="", readiness=true. Elapsed: 6.192445109s May 28 22:00:28.220: INFO: Pod "pod-subpath-test-secret-zx2v": Phase="Running", Reason="", readiness=true. Elapsed: 8.196967253s May 28 22:00:30.225: INFO: Pod "pod-subpath-test-secret-zx2v": Phase="Running", Reason="", readiness=true. Elapsed: 10.202239492s May 28 22:00:32.230: INFO: Pod "pod-subpath-test-secret-zx2v": Phase="Running", Reason="", readiness=true. Elapsed: 12.207071756s May 28 22:00:34.234: INFO: Pod "pod-subpath-test-secret-zx2v": Phase="Running", Reason="", readiness=true. Elapsed: 14.211493249s May 28 22:00:36.239: INFO: Pod "pod-subpath-test-secret-zx2v": Phase="Running", Reason="", readiness=true. Elapsed: 16.216247622s May 28 22:00:38.243: INFO: Pod "pod-subpath-test-secret-zx2v": Phase="Running", Reason="", readiness=true. Elapsed: 18.220325896s May 28 22:00:40.248: INFO: Pod "pod-subpath-test-secret-zx2v": Phase="Running", Reason="", readiness=true. Elapsed: 20.22489955s May 28 22:00:42.252: INFO: Pod "pod-subpath-test-secret-zx2v": Phase="Running", Reason="", readiness=true. Elapsed: 22.229052245s May 28 22:00:44.256: INFO: Pod "pod-subpath-test-secret-zx2v": Phase="Running", Reason="", readiness=true. Elapsed: 24.232959405s May 28 22:00:46.258: INFO: Pod "pod-subpath-test-secret-zx2v": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 26.235791338s STEP: Saw pod success May 28 22:00:46.259: INFO: Pod "pod-subpath-test-secret-zx2v" satisfied condition "success or failure" May 28 22:00:46.261: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-zx2v container test-container-subpath-secret-zx2v: STEP: delete the pod May 28 22:00:46.297: INFO: Waiting for pod pod-subpath-test-secret-zx2v to disappear May 28 22:00:46.323: INFO: Pod pod-subpath-test-secret-zx2v no longer exists STEP: Deleting pod pod-subpath-test-secret-zx2v May 28 22:00:46.323: INFO: Deleting pod "pod-subpath-test-secret-zx2v" in namespace "subpath-2950" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:00:46.326: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2950" for this suite. • [SLOW TEST:26.752 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":179,"skipped":2773,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:00:46.335: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 28 22:00:46.419: INFO: Waiting up to 5m0s for pod "pod-406e4eb4-ac39-424c-9ce6-73da6da21a24" in namespace "emptydir-2524" to be "success or failure" May 28 22:00:46.425: INFO: Pod "pod-406e4eb4-ac39-424c-9ce6-73da6da21a24": Phase="Pending", Reason="", readiness=false. Elapsed: 5.991305ms May 28 22:00:48.491: INFO: Pod "pod-406e4eb4-ac39-424c-9ce6-73da6da21a24": Phase="Pending", Reason="", readiness=false. Elapsed: 2.072547441s May 28 22:00:50.496: INFO: Pod "pod-406e4eb4-ac39-424c-9ce6-73da6da21a24": Phase="Pending", Reason="", readiness=false. Elapsed: 4.077012693s May 28 22:00:52.501: INFO: Pod "pod-406e4eb4-ac39-424c-9ce6-73da6da21a24": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.081969682s STEP: Saw pod success May 28 22:00:52.501: INFO: Pod "pod-406e4eb4-ac39-424c-9ce6-73da6da21a24" satisfied condition "success or failure" May 28 22:00:52.504: INFO: Trying to get logs from node jerma-worker pod pod-406e4eb4-ac39-424c-9ce6-73da6da21a24 container test-container: STEP: delete the pod May 28 22:00:52.522: INFO: Waiting for pod pod-406e4eb4-ac39-424c-9ce6-73da6da21a24 to disappear May 28 22:00:52.526: INFO: Pod pod-406e4eb4-ac39-424c-9ce6-73da6da21a24 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:00:52.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-2524" for this suite. • [SLOW TEST:6.198 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":180,"skipped":2793,"failed":0} SSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:00:52.533: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:00:56.645: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6714" for this suite. 
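The hostAliases test recorded above reduces to scheduling a pod whose spec.hostAliases entries the kubelet must merge into the container's /etc/hosts. A minimal sketch of such a manifest follows; the pod name, image, IP, and hostnames are illustrative assumptions, not values from this run:

apiVersion: v1
kind: Pod
metadata:
  name: busybox-host-aliases   # hypothetical name
spec:
  restartPolicy: Never
  hostAliases:                 # entries the kubelet appends to /etc/hosts
  - ip: "123.45.67.89"
    hostnames:
    - "foo.local"
    - "bar.local"
  containers:
  - name: busybox
    image: busybox
    command: ["cat", "/etc/hosts"]

Reading /etc/hosts from inside the container should show the extra entries appended by the kubelet, which is what the test asserts.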
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":181,"skipped":2797,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:00:56.657: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-e7993d3e-369c-448a-bb80-071d84ec9062 in namespace container-probe-2066 May 28 22:01:00.851: INFO: Started pod liveness-e7993d3e-369c-448a-bb80-071d84ec9062 in namespace container-probe-2066 STEP: checking the pod's current state and verifying that restartCount is present May 28 22:01:00.853: INFO: Initial restart count of pod liveness-e7993d3e-369c-448a-bb80-071d84ec9062 is 0 May 28 22:01:18.894: INFO: Restart count of pod container-probe-2066/liveness-e7993d3e-369c-448a-bb80-071d84ec9062 is now 1 (18.041269835s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:01:18.906: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2066" for this suite. 
• [SLOW TEST:22.277 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":182,"skipped":2818,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:01:18.934: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 28 22:01:20.396: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 28 22:01:22.407: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300080, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300080, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300080, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300080, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 22:01:25.477: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:01:25.541: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying 
namespace "webhook-9532" for this suite. STEP: Destroying namespace "webhook-9532-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.828 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":183,"skipped":2834,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:01:25.763: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 28 22:01:26.890: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 28 22:01:28.901: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300086, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300086, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300086, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300086, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 22:01:30.919: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300086, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300086, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have 
minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300086, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300086, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 22:01:33.961: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:01:34.078: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2323" for this suite. STEP: Destroying namespace "webhook-2323-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.538 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":184,"skipped":2849,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:01:34.302: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-fb3d3b6c-da08-4dff-a450-0590943332b4 in namespace container-probe-3616 May 28 22:01:38.430: INFO: 
Started pod busybox-fb3d3b6c-da08-4dff-a450-0590943332b4 in namespace container-probe-3616 STEP: checking the pod's current state and verifying that restartCount is present May 28 22:01:38.433: INFO: Initial restart count of pod busybox-fb3d3b6c-da08-4dff-a450-0590943332b4 is 0 May 28 22:02:32.720: INFO: Restart count of pod container-probe-3616/busybox-fb3d3b6c-da08-4dff-a450-0590943332b4 is now 1 (54.286707256s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:02:32.760: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3616" for this suite. • [SLOW TEST:58.503 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":2918,"failed":0} [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:02:32.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 28 22:02:33.475: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 28 22:02:35.687: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300153, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300153, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300153, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300153, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 22:02:38.727: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd 
creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 28 22:02:38.749: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:02:38.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7684" for this suite. STEP: Destroying namespace "webhook-7684-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.170 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":186,"skipped":2918,"failed":0} SSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:02:38.975: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 28 22:02:43.676: INFO: Successfully updated pod "adopt-release-7gt2n" STEP: Checking that the Job readopts the Pod May 28 22:02:43.676: INFO: Waiting up to 15m0s for pod "adopt-release-7gt2n" in namespace "job-6342" to be "adopted" May 28 22:02:43.734: INFO: Pod "adopt-release-7gt2n": Phase="Running", Reason="", readiness=true. Elapsed: 58.364932ms May 28 22:02:45.739: INFO: Pod "adopt-release-7gt2n": Phase="Running", Reason="", readiness=true. Elapsed: 2.062932591s May 28 22:02:45.739: INFO: Pod "adopt-release-7gt2n" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 28 22:02:46.256: INFO: Successfully updated pod "adopt-release-7gt2n" STEP: Checking that the Job releases the Pod May 28 22:02:46.256: INFO: Waiting up to 15m0s for pod "adopt-release-7gt2n" in namespace "job-6342" to be "released" May 28 22:02:46.262: INFO: Pod "adopt-release-7gt2n": Phase="Running", Reason="", readiness=true. Elapsed: 6.356008ms May 28 22:02:48.280: INFO: Pod "adopt-release-7gt2n": Phase="Running", Reason="", readiness=true. 
Elapsed: 2.02402593s May 28 22:02:48.280: INFO: Pod "adopt-release-7gt2n" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:02:48.280: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6342" for this suite. • [SLOW TEST:9.311 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":187,"skipped":2922,"failed":0} S ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:02:48.286: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium May 28 22:02:48.407: INFO: Waiting up to 5m0s for pod "pod-4c5be009-b36f-442e-9bdb-a7a06ef0f06d" in namespace "emptydir-5767" to be "success or failure" May 28 22:02:48.412: INFO: Pod "pod-4c5be009-b36f-442e-9bdb-a7a06ef0f06d": Phase="Pending", Reason="", readiness=false. Elapsed: 4.264306ms May 28 22:02:50.508: INFO: Pod "pod-4c5be009-b36f-442e-9bdb-a7a06ef0f06d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.100475836s May 28 22:02:52.513: INFO: Pod "pod-4c5be009-b36f-442e-9bdb-a7a06ef0f06d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.105737786s STEP: Saw pod success May 28 22:02:52.513: INFO: Pod "pod-4c5be009-b36f-442e-9bdb-a7a06ef0f06d" satisfied condition "success or failure" May 28 22:02:52.516: INFO: Trying to get logs from node jerma-worker2 pod pod-4c5be009-b36f-442e-9bdb-a7a06ef0f06d container test-container: STEP: delete the pod May 28 22:02:52.576: INFO: Waiting for pod pod-4c5be009-b36f-442e-9bdb-a7a06ef0f06d to disappear May 28 22:02:52.621: INFO: Pod pod-4c5be009-b36f-442e-9bdb-a7a06ef0f06d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:02:52.621: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5767" for this suite. 
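The two emptyDir tests in this stretch (the (non-root,0666,default) case and the default-medium mode case) both mount an emptyDir volume and have a short-lived container print the resulting file mode, then assert on the pod's log output. A hedged sketch of that pattern, with illustrative names:

apiVersion: v1
kind: Pod
metadata:
  name: pod-emptydir-mode      # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox
    command: ["sh", "-c", "ls -ld /test-volume"]  # prints the mount's mode and owner
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir: {}               # default medium, i.e. backed by node disk

The "success or failure" condition seen in the log corresponds to this pod running to completion (Phase=Succeeded) with the expected mode in its output.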
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":2923,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:02:52.629: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server May 28 22:02:52.673: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:02:52.781: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9264" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":189,"skipped":2935,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:02:52.800: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-10c99e72-ff31-469d-a269-df8065ddaeae STEP: Creating a pod to test consume secrets May 28 22:02:52.879: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-4292601e-ebef-4b7e-b3d8-15273ef19efc" in namespace "projected-5880" to be "success or failure" May 28 22:02:52.950: INFO: Pod "pod-projected-secrets-4292601e-ebef-4b7e-b3d8-15273ef19efc": Phase="Pending", Reason="", readiness=false. Elapsed: 71.160102ms May 28 22:02:54.954: INFO: Pod "pod-projected-secrets-4292601e-ebef-4b7e-b3d8-15273ef19efc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.075173871s May 28 22:02:56.958: INFO: Pod "pod-projected-secrets-4292601e-ebef-4b7e-b3d8-15273ef19efc": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.079064719s STEP: Saw pod success May 28 22:02:56.958: INFO: Pod "pod-projected-secrets-4292601e-ebef-4b7e-b3d8-15273ef19efc" satisfied condition "success or failure" May 28 22:02:56.960: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-4292601e-ebef-4b7e-b3d8-15273ef19efc container projected-secret-volume-test: STEP: delete the pod May 28 22:02:57.024: INFO: Waiting for pod pod-projected-secrets-4292601e-ebef-4b7e-b3d8-15273ef19efc to disappear May 28 22:02:57.045: INFO: Pod pod-projected-secrets-4292601e-ebef-4b7e-b3d8-15273ef19efc no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:02:57.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5880" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":190,"skipped":2950,"failed":0} SS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:02:57.053: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-1211 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-1211 I0528 22:02:57.237740 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-1211, replica count: 2 I0528 22:03:00.288091 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0528 22:03:03.288481 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 28 22:03:03.288: INFO: Creating new exec pod May 28 22:03:08.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1211 execpod7hb58 -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 28 22:03:11.203: INFO: stderr: "I0528 22:03:11.098384 2923 log.go:172] (0xc000bf4000) (0xc000fb2000) Create stream\nI0528 22:03:11.098437 2923 log.go:172] (0xc000bf4000) (0xc000fb2000) Stream added, broadcasting: 1\nI0528 22:03:11.101660 2923 log.go:172] (0xc000bf4000) Reply frame received for 1\nI0528 22:03:11.102217 2923 log.go:172] (0xc000bf4000) (0xc001036000) Create stream\nI0528 22:03:11.102258 2923 log.go:172] (0xc000bf4000) (0xc001036000) Stream added, broadcasting: 3\nI0528 22:03:11.104344 2923 log.go:172] (0xc000bf4000) Reply frame received for 3\nI0528 22:03:11.104421 
2923 log.go:172] (0xc000bf4000) (0xc00039d720) Create stream\nI0528 22:03:11.104443 2923 log.go:172] (0xc000bf4000) (0xc00039d720) Stream added, broadcasting: 5\nI0528 22:03:11.105587 2923 log.go:172] (0xc000bf4000) Reply frame received for 5\nI0528 22:03:11.181642 2923 log.go:172] (0xc000bf4000) Data frame received for 5\nI0528 22:03:11.181682 2923 log.go:172] (0xc00039d720) (5) Data frame handling\nI0528 22:03:11.181705 2923 log.go:172] (0xc00039d720) (5) Data frame sent\n+ nc -zv -t -w 2 externalname-service 80\nI0528 22:03:11.193846 2923 log.go:172] (0xc000bf4000) Data frame received for 5\nI0528 22:03:11.193884 2923 log.go:172] (0xc00039d720) (5) Data frame handling\nI0528 22:03:11.193933 2923 log.go:172] (0xc00039d720) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0528 22:03:11.194061 2923 log.go:172] (0xc000bf4000) Data frame received for 3\nI0528 22:03:11.194092 2923 log.go:172] (0xc001036000) (3) Data frame handling\nI0528 22:03:11.194127 2923 log.go:172] (0xc000bf4000) Data frame received for 5\nI0528 22:03:11.194152 2923 log.go:172] (0xc00039d720) (5) Data frame handling\nI0528 22:03:11.196031 2923 log.go:172] (0xc000bf4000) Data frame received for 1\nI0528 22:03:11.196052 2923 log.go:172] (0xc000fb2000) (1) Data frame handling\nI0528 22:03:11.196063 2923 log.go:172] (0xc000fb2000) (1) Data frame sent\nI0528 22:03:11.196076 2923 log.go:172] (0xc000bf4000) (0xc000fb2000) Stream removed, broadcasting: 1\nI0528 22:03:11.196095 2923 log.go:172] (0xc000bf4000) Go away received\nI0528 22:03:11.196724 2923 log.go:172] (0xc000bf4000) (0xc000fb2000) Stream removed, broadcasting: 1\nI0528 22:03:11.196752 2923 log.go:172] (0xc000bf4000) (0xc001036000) Stream removed, broadcasting: 3\nI0528 22:03:11.196766 2923 log.go:172] (0xc000bf4000) (0xc00039d720) Stream removed, broadcasting: 5\n" May 28 22:03:11.203: INFO: stdout: "" May 28 22:03:11.204: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-1211 execpod7hb58 -- /bin/sh -x -c nc -zv -t -w 2 10.110.12.133 80' May 28 22:03:11.416: INFO: stderr: "I0528 22:03:11.346195 2958 log.go:172] (0xc000022580) (0xc000902000) Create stream\nI0528 22:03:11.346250 2958 log.go:172] (0xc000022580) (0xc000902000) Stream added, broadcasting: 1\nI0528 22:03:11.348964 2958 log.go:172] (0xc000022580) Reply frame received for 1\nI0528 22:03:11.349019 2958 log.go:172] (0xc000022580) (0xc0009020a0) Create stream\nI0528 22:03:11.349050 2958 log.go:172] (0xc000022580) (0xc0009020a0) Stream added, broadcasting: 3\nI0528 22:03:11.350641 2958 log.go:172] (0xc000022580) Reply frame received for 3\nI0528 22:03:11.350689 2958 log.go:172] (0xc000022580) (0xc00069dae0) Create stream\nI0528 22:03:11.350699 2958 log.go:172] (0xc000022580) (0xc00069dae0) Stream added, broadcasting: 5\nI0528 22:03:11.351463 2958 log.go:172] (0xc000022580) Reply frame received for 5\nI0528 22:03:11.409512 2958 log.go:172] (0xc000022580) Data frame received for 3\nI0528 22:03:11.409555 2958 log.go:172] (0xc0009020a0) (3) Data frame handling\nI0528 22:03:11.409581 2958 log.go:172] (0xc000022580) Data frame received for 5\nI0528 22:03:11.409613 2958 log.go:172] (0xc00069dae0) (5) Data frame handling\nI0528 22:03:11.409626 2958 log.go:172] (0xc00069dae0) (5) Data frame sent\nI0528 22:03:11.409634 2958 log.go:172] (0xc000022580) Data frame received for 5\nI0528 22:03:11.409642 2958 log.go:172] (0xc00069dae0) (5) Data frame handling\n+ nc -zv -t -w 2 10.110.12.133 80\nConnection to 10.110.12.133 80 port [tcp/http] 
succeeded!\nI0528 22:03:11.410738 2958 log.go:172] (0xc000022580) Data frame received for 1\nI0528 22:03:11.410764 2958 log.go:172] (0xc000902000) (1) Data frame handling\nI0528 22:03:11.410781 2958 log.go:172] (0xc000902000) (1) Data frame sent\nI0528 22:03:11.410796 2958 log.go:172] (0xc000022580) (0xc000902000) Stream removed, broadcasting: 1\nI0528 22:03:11.410809 2958 log.go:172] (0xc000022580) Go away received\nI0528 22:03:11.411149 2958 log.go:172] (0xc000022580) (0xc000902000) Stream removed, broadcasting: 1\nI0528 22:03:11.411166 2958 log.go:172] (0xc000022580) (0xc0009020a0) Stream removed, broadcasting: 3\nI0528 22:03:11.411175 2958 log.go:172] (0xc000022580) (0xc00069dae0) Stream removed, broadcasting: 5\n" May 28 22:03:11.416: INFO: stdout: "" May 28 22:03:11.416: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:03:11.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1211" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:14.401 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":191,"skipped":2952,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:03:11.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 28 22:03:11.568: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:03:18.916: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-118" for this suite. 
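The InitContainer test just above creates a RestartNever pod and verifies that its init containers run, in order and to completion, before the app container starts. A minimal sketch of such a pod (names and commands are illustrative, not taken from the test):

apiVersion: v1
kind: Pod
metadata:
  name: pod-init-containers    # hypothetical name
spec:
  restartPolicy: Never
  initContainers:              # run sequentially; each must exit 0 before the next starts
  - name: init1
    image: busybox
    command: ["true"]
  - name: init2
    image: busybox
    command: ["true"]
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "echo done"]

With restartPolicy Never, a failing init container permanently fails the pod, so observing the pod reach Succeeded implies every init container was invoked and completed.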
• [SLOW TEST:7.492 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":192,"skipped":2962,"failed":0} SSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:03:18.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 22:03:19.422: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/ pods/ (200; 5.15294ms) May 28 22:03:19.424: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.64362ms) May 28 22:03:19.427: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.951071ms) May 28 22:03:19.430: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.708829ms) May 28 22:03:19.433: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.9734ms) May 28 22:03:19.437: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.703658ms) May 28 22:03:19.440: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.319299ms) May 28 22:03:19.444: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.309778ms) May 28 22:03:19.446: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.786684ms) May 28 22:03:19.449: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.858659ms) May 28 22:03:19.453: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.243361ms) May 28 22:03:19.472: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 19.485988ms) May 28 22:03:19.475: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.401198ms) May 28 22:03:19.478: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.908954ms) May 28 22:03:19.482: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.288362ms) May 28 22:03:19.486: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.021031ms) May 28 22:03:19.494: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 8.232103ms) May 28 22:03:19.498: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.538089ms) May 28 22:03:19.500: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.582379ms) May 28 22:03:19.503: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/
(200; 2.431692ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:03:19.503: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-162" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":193,"skipped":2971,"failed":0} SSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:03:19.509: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 22:03:20.256: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 28 22:03:25.270: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 28 22:03:25.270: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 28 22:03:25.354: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-3562 /apis/apps/v1/namespaces/deployment-3562/deployments/test-cleanup-deployment 2a9aad27-87a4-4add-b3e6-624bc616bad9 19912605 1 2020-05-28 22:03:25 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001ea4118 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%!,(MISSING)MaxSurge:25%!,(MISSING)},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 28 22:03:25.420: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": 
&ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-3562 /apis/apps/v1/namespaces/deployment-3562/replicasets/test-cleanup-deployment-55ffc6b7b6 91c3a4a5-cc58-405b-aa5f-86ffa5237e09 19912612 1 2020-05-28 22:03:25 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 2a9aad27-87a4-4add-b3e6-624bc616bad9 0xc001f42e67 0xc001f42e68}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc001f42ed8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 28 22:03:25.420: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 28 22:03:25.420: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-3562 /apis/apps/v1/namespaces/deployment-3562/replicasets/test-cleanup-controller ca5c466e-7db7-4f6a-ae1a-9d03d74ee5fc 19912606 1 2020-05-28 22:03:20 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 2a9aad27-87a4-4add-b3e6-624bc616bad9 0xc001f42d87 0xc001f42d88}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc001f42df8 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 28 22:03:25.480: INFO: Pod "test-cleanup-controller-78rz6" is available: &Pod{ObjectMeta:{test-cleanup-controller-78rz6 test-cleanup-controller- deployment-3562 /api/v1/namespaces/deployment-3562/pods/test-cleanup-controller-78rz6 5b24166b-5a54-4e50-969c-c3cc39fb8f3f 19912586 0 2020-05-28 22:03:20 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller ca5c466e-7db7-4f6a-ae1a-9d03d74ee5fc 0xc001f43317 0xc001f43318}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ck85p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ck85p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ck85p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:03:20 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:03:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:03:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:03:20 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.133,StartTime:2020-05-28 22:03:20 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-28 22:03:23 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://56d204486bc612e25a781d94b0f2b4ad63dc7b77d8fb5b344bacb68b89e28d30,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.133,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:03:25.480: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-dp668" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-dp668 test-cleanup-deployment-55ffc6b7b6- deployment-3562 /api/v1/namespaces/deployment-3562/pods/test-cleanup-deployment-55ffc6b7b6-dp668 81e46d64-c715-4bc6-931d-fa184f99c7d6 19912611 0 2020-05-28 22:03:25 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 91c3a4a5-cc58-405b-aa5f-86ffa5237e09 0xc001f434e7 0xc001f434e8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-ck85p,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-ck85p,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-ck85p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessName
space:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:03:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:03:25.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-3562" for this suite. • [SLOW TEST:6.042 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":194,"skipped":2977,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:03:25.552: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-9063 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9063 to expose endpoints map[] May 28 22:03:25.699: INFO: successfully validated that service multi-endpoint-test in namespace services-9063 exposes endpoints map[] (12.06018ms elapsed) STEP: Creating pod pod1 in namespace services-9063 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9063 to expose endpoints map[pod1:[100]] May 28 22:03:29.977: INFO: successfully validated that service multi-endpoint-test in namespace services-9063 exposes endpoints map[pod1:[100]] (4.265197387s elapsed) STEP: Creating pod pod2 in namespace services-9063 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9063 to expose endpoints map[pod1:[100] pod2:[101]] May 28 22:03:33.155: INFO: successfully validated that service multi-endpoint-test in namespace services-9063 exposes endpoints map[pod1:[100] pod2:[101]] (3.174274153s elapsed) STEP: Deleting pod pod1 in namespace services-9063 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9063 to expose endpoints map[pod2:[101]] May 28 22:03:34.269: INFO: successfully 
validated that service multi-endpoint-test in namespace services-9063 exposes endpoints map[pod2:[101]] (1.109299817s elapsed) STEP: Deleting pod pod2 in namespace services-9063 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-9063 to expose endpoints map[] May 28 22:03:35.287: INFO: successfully validated that service multi-endpoint-test in namespace services-9063 exposes endpoints map[] (1.012856806s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:03:35.372: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9063" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:9.849 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":195,"skipped":2987,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:03:35.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 28 22:03:35.971: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 28 22:03:37.978: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300215, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300215, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300216, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300215, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 
22:03:41.047: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 22:03:41.051: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:03:42.278: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7153" for this suite. STEP: Destroying namespace "webhook-7153-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.999 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":196,"skipped":2992,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:03:42.400: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 28 22:03:42.466: INFO: Waiting up to 5m0s for pod "downwardapi-volume-27a0c51a-3dbc-42ea-bd21-394ef753cfb1" in namespace "projected-5525" to be "success or failure" May 28 22:03:42.473: INFO: Pod "downwardapi-volume-27a0c51a-3dbc-42ea-bd21-394ef753cfb1": Phase="Pending", Reason="", readiness=false. Elapsed: 7.493801ms May 28 22:03:44.484: INFO: Pod "downwardapi-volume-27a0c51a-3dbc-42ea-bd21-394ef753cfb1": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.018167111s May 28 22:03:46.488: INFO: Pod "downwardapi-volume-27a0c51a-3dbc-42ea-bd21-394ef753cfb1": Phase="Running", Reason="", readiness=true. Elapsed: 4.022514951s May 28 22:03:48.492: INFO: Pod "downwardapi-volume-27a0c51a-3dbc-42ea-bd21-394ef753cfb1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.026129367s STEP: Saw pod success May 28 22:03:48.492: INFO: Pod "downwardapi-volume-27a0c51a-3dbc-42ea-bd21-394ef753cfb1" satisfied condition "success or failure" May 28 22:03:48.494: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-27a0c51a-3dbc-42ea-bd21-394ef753cfb1 container client-container: STEP: delete the pod May 28 22:03:48.529: INFO: Waiting for pod downwardapi-volume-27a0c51a-3dbc-42ea-bd21-394ef753cfb1 to disappear May 28 22:03:48.534: INFO: Pod downwardapi-volume-27a0c51a-3dbc-42ea-bd21-394ef753cfb1 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:03:48.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5525" for this suite. • [SLOW TEST:6.142 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3035,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:03:48.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 28 22:03:48.620: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
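(For reference, "registering" here means creating an APIService object that tells the kube-aggregator to proxy one API group/version to the sample server's Service. A minimal sketch of such an object using the apiregistration.k8s.io/v1 types follows; the group, Service name, and priorities are illustrative, not values taken from this run.)

package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
)

func main() {
	// Hypothetical registration of wardle.example.com/v1alpha1, served by a
	// Service in the test namespace; the e2e test builds a similar object.
	apiService := &apiregistrationv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.example.com",
			Version: "v1alpha1",
			Service: &apiregistrationv1.ServiceReference{
				Namespace: "aggregator-4475", // namespace from this run
				Name:      "sample-api",      // illustrative Service name
			},
			InsecureSkipTLSVerify: true, // a hardened setup would set CABundle instead
			GroupPriorityMinimum:  2000,
			VersionPriority:       200,
		},
	}
	fmt.Printf("would register APIService %s\n", apiService.Name)
}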
May 28 22:03:49.445: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 28 22:03:51.887: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300229, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300229, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300229, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300229, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 22:03:53.930: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300229, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300229, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300229, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300229, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 22:03:56.524: INFO: Waited 626.637788ms for the sample-apiserver to be ready to handle requests. [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:03:56.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-4475" for this suite. 
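(The repeated "deployment status:" dumps above come from a poll loop that re-reads the Deployment until it becomes available. A minimal sketch of that pattern, assuming client-go at the v0.17 level matching this run, where API calls do not yet take a context.Context; the function name is illustrative.)

package sketch

import (
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForDeploymentReady polls every 2s, up to 2m, until the Deployment
// reports at least one ready replica.
func waitForDeploymentReady(c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 2*time.Minute, func() (bool, error) {
		d, err := c.AppsV1().Deployments(ns).Get(name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		fmt.Printf("deployment status: %+v\n", d.Status) // mirrors the log lines above
		return d.Status.ReadyReplicas > 0, nil
	})
}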
• [SLOW TEST:8.647 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":198,"skipped":3057,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:03:57.189: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 28 22:04:01.947: INFO: Successfully updated pod "annotationupdatea9679480-dd9c-461d-85f5-e368fc726ccb" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:04:03.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6253" for this suite. • [SLOW TEST:6.779 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":199,"skipped":3093,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:04:03.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:04:15.117: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-1891" for this suite. • [SLOW TEST:11.158 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":200,"skipped":3116,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:04:15.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 28 22:04:15.226: INFO: Waiting up to 5m0s for pod "pod-c4561e74-5b58-4694-93f0-8f2fa90d65ca" in namespace "emptydir-3215" to be "success or failure" May 28 22:04:15.247: INFO: Pod "pod-c4561e74-5b58-4694-93f0-8f2fa90d65ca": Phase="Pending", Reason="", readiness=false. Elapsed: 21.571822ms May 28 22:04:17.251: INFO: Pod "pod-c4561e74-5b58-4694-93f0-8f2fa90d65ca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025702899s May 28 22:04:19.257: INFO: Pod "pod-c4561e74-5b58-4694-93f0-8f2fa90d65ca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.031688405s STEP: Saw pod success May 28 22:04:19.257: INFO: Pod "pod-c4561e74-5b58-4694-93f0-8f2fa90d65ca" satisfied condition "success or failure" May 28 22:04:19.260: INFO: Trying to get logs from node jerma-worker2 pod pod-c4561e74-5b58-4694-93f0-8f2fa90d65ca container test-container: STEP: delete the pod May 28 22:04:19.279: INFO: Waiting for pod pod-c4561e74-5b58-4694-93f0-8f2fa90d65ca to disappear May 28 22:04:19.284: INFO: Pod pod-c4561e74-5b58-4694-93f0-8f2fa90d65ca no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:04:19.284: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3215" for this suite. 
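(The (root,0666,default) case reduces to a pod with an emptyDir volume on the default medium whose container creates a file with mode 0666 and verifies it before exiting. Roughly the pod the test submits; the image and command here are illustrative stand-ins, since the real test uses its own mounttest image.)

package sketch

import (
	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// emptyDirPod returns a run-to-completion pod exercising an emptyDir
// volume on the default (node disk) medium.
func emptyDirPod() *v1.Pod {
	return &v1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-emptydir-0666"}, // illustrative name
		Spec: v1.PodSpec{
			RestartPolicy: v1.RestartPolicyNever,
			Volumes: []v1.Volume{{
				Name: "test-volume",
				// Medium "" (default) is backed by node storage; Memory would be tmpfs.
				VolumeSource: v1.VolumeSource{EmptyDir: &v1.EmptyDirVolumeSource{Medium: v1.StorageMediumDefault}},
			}},
			Containers: []v1.Container{{
				Name:  "test-container",
				Image: "busybox", // illustrative
				Command: []string{"sh", "-c",
					"touch /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"},
				VolumeMounts: []v1.VolumeMount{{Name: "test-volume", MountPath: "/test-volume"}},
			}},
		},
	}
}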
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":201,"skipped":3119,"failed":0} SSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:04:19.298: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 28 22:04:24.548: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 28 22:04:39.635: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:04:39.638: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5075" for this suite. 
• [SLOW TEST:20.350 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":202,"skipped":3123,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:04:39.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-4007/configmap-test-1994b985-aae6-4548-8654-1e81629f2a8a STEP: Creating a pod to test consume configMaps May 28 22:04:39.731: INFO: Waiting up to 5m0s for pod "pod-configmaps-ed1fcbce-10db-4e85-a325-48d7cf0a1004" in namespace "configmap-4007" to be "success or failure" May 28 22:04:39.734: INFO: Pod "pod-configmaps-ed1fcbce-10db-4e85-a325-48d7cf0a1004": Phase="Pending", Reason="", readiness=false. Elapsed: 3.112451ms May 28 22:04:41.739: INFO: Pod "pod-configmaps-ed1fcbce-10db-4e85-a325-48d7cf0a1004": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008001671s May 28 22:04:43.743: INFO: Pod "pod-configmaps-ed1fcbce-10db-4e85-a325-48d7cf0a1004": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012633298s STEP: Saw pod success May 28 22:04:43.743: INFO: Pod "pod-configmaps-ed1fcbce-10db-4e85-a325-48d7cf0a1004" satisfied condition "success or failure" May 28 22:04:43.747: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-ed1fcbce-10db-4e85-a325-48d7cf0a1004 container env-test: STEP: delete the pod May 28 22:04:43.785: INFO: Waiting for pod pod-configmaps-ed1fcbce-10db-4e85-a325-48d7cf0a1004 to disappear May 28 22:04:43.793: INFO: Pod pod-configmaps-ed1fcbce-10db-4e85-a325-48d7cf0a1004 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:04:43.793: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4007" for this suite. 
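(Consumption "via the environment" means an EnvVar resolved from a ConfigMap key when the container starts, as opposed to a volume mount. Roughly how the env-test container above is wired; the variable name and key are illustrative, while the ConfigMap name is the one from this run.)

package sketch

import v1 "k8s.io/api/core/v1"

// envTestContainer consumes one ConfigMap key as an environment variable
// and dumps the environment so the test can assert on it.
func envTestContainer() v1.Container {
	return v1.Container{
		Name:    "env-test",
		Image:   "busybox", // illustrative
		Command: []string{"sh", "-c", "env"},
		Env: []v1.EnvVar{{
			Name: "CONFIG_DATA_1", // illustrative variable name
			ValueFrom: &v1.EnvVarSource{
				ConfigMapKeyRef: &v1.ConfigMapKeySelector{
					LocalObjectReference: v1.LocalObjectReference{
						Name: "configmap-test-1994b985-aae6-4548-8654-1e81629f2a8a",
					},
					Key: "data-1", // illustrative key
				},
			},
		}},
	}
}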
•{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3156,"failed":0} SS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:04:43.802: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:05:43.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-2673" for this suite. • [SLOW TEST:60.076 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3158,"failed":0} SSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:05:43.879: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1187 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-1187 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-1187 May 28 22:05:43.988: INFO: Found 0 stateful pods, waiting for 
1 May 28 22:05:53.993: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 28 22:05:53.996: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1187 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 28 22:05:54.425: INFO: stderr: "I0528 22:05:54.302229 2996 log.go:172] (0xc000b5cd10) (0xc0007ac3c0) Create stream\nI0528 22:05:54.302844 2996 log.go:172] (0xc000b5cd10) (0xc0007ac3c0) Stream added, broadcasting: 1\nI0528 22:05:54.308353 2996 log.go:172] (0xc000b5cd10) Reply frame received for 1\nI0528 22:05:54.308399 2996 log.go:172] (0xc000b5cd10) (0xc0005e9cc0) Create stream\nI0528 22:05:54.308410 2996 log.go:172] (0xc000b5cd10) (0xc0005e9cc0) Stream added, broadcasting: 3\nI0528 22:05:54.309658 2996 log.go:172] (0xc000b5cd10) Reply frame received for 3\nI0528 22:05:54.309693 2996 log.go:172] (0xc000b5cd10) (0xc0005688c0) Create stream\nI0528 22:05:54.309701 2996 log.go:172] (0xc000b5cd10) (0xc0005688c0) Stream added, broadcasting: 5\nI0528 22:05:54.310556 2996 log.go:172] (0xc000b5cd10) Reply frame received for 5\nI0528 22:05:54.377678 2996 log.go:172] (0xc000b5cd10) Data frame received for 5\nI0528 22:05:54.377714 2996 log.go:172] (0xc0005688c0) (5) Data frame handling\nI0528 22:05:54.377748 2996 log.go:172] (0xc0005688c0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0528 22:05:54.414743 2996 log.go:172] (0xc000b5cd10) Data frame received for 5\nI0528 22:05:54.414792 2996 log.go:172] (0xc0005688c0) (5) Data frame handling\nI0528 22:05:54.414828 2996 log.go:172] (0xc000b5cd10) Data frame received for 3\nI0528 22:05:54.414849 2996 log.go:172] (0xc0005e9cc0) (3) Data frame handling\nI0528 22:05:54.414908 2996 log.go:172] (0xc0005e9cc0) (3) Data frame sent\nI0528 22:05:54.414929 2996 log.go:172] (0xc000b5cd10) Data frame received for 3\nI0528 22:05:54.414945 2996 log.go:172] (0xc0005e9cc0) (3) Data frame handling\nI0528 22:05:54.416642 2996 log.go:172] (0xc000b5cd10) Data frame received for 1\nI0528 22:05:54.416663 2996 log.go:172] (0xc0007ac3c0) (1) Data frame handling\nI0528 22:05:54.416683 2996 log.go:172] (0xc0007ac3c0) (1) Data frame sent\nI0528 22:05:54.416695 2996 log.go:172] (0xc000b5cd10) (0xc0007ac3c0) Stream removed, broadcasting: 1\nI0528 22:05:54.416828 2996 log.go:172] (0xc000b5cd10) Go away received\nI0528 22:05:54.417050 2996 log.go:172] (0xc000b5cd10) (0xc0007ac3c0) Stream removed, broadcasting: 1\nI0528 22:05:54.417070 2996 log.go:172] (0xc000b5cd10) (0xc0005e9cc0) Stream removed, broadcasting: 3\nI0528 22:05:54.417079 2996 log.go:172] (0xc000b5cd10) (0xc0005688c0) Stream removed, broadcasting: 5\n" May 28 22:05:54.425: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 28 22:05:54.425: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 28 22:05:54.428: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 28 22:06:04.445: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 28 22:06:04.445: INFO: Waiting for statefulset status.replicas updated to 0 May 28 22:06:04.503: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999557s May 28 22:06:05.507: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.951647098s 
May 28 22:06:06.511: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.94759685s May 28 22:06:07.517: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.943212885s May 28 22:06:08.522: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.937838644s May 28 22:06:09.526: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.93252056s May 28 22:06:10.545: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.928694412s May 28 22:06:11.549: INFO: Verifying statefulset ss doesn't scale past 1 for another 2.909749458s May 28 22:06:12.553: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.905202084s May 28 22:06:13.558: INFO: Verifying statefulset ss doesn't scale past 1 for another 901.202341ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-1187 May 28 22:06:14.562: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1187 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 28 22:06:14.788: INFO: stderr: "I0528 22:06:14.689803 3018 log.go:172] (0xc00049d760) (0xc0009a6500) Create stream\nI0528 22:06:14.689854 3018 log.go:172] (0xc00049d760) (0xc0009a6500) Stream added, broadcasting: 1\nI0528 22:06:14.693897 3018 log.go:172] (0xc00049d760) Reply frame received for 1\nI0528 22:06:14.693935 3018 log.go:172] (0xc00049d760) (0xc0009a6000) Create stream\nI0528 22:06:14.693944 3018 log.go:172] (0xc00049d760) (0xc0009a6000) Stream added, broadcasting: 3\nI0528 22:06:14.694802 3018 log.go:172] (0xc00049d760) Reply frame received for 3\nI0528 22:06:14.694851 3018 log.go:172] (0xc00049d760) (0xc0006706e0) Create stream\nI0528 22:06:14.694867 3018 log.go:172] (0xc00049d760) (0xc0006706e0) Stream added, broadcasting: 5\nI0528 22:06:14.695612 3018 log.go:172] (0xc00049d760) Reply frame received for 5\nI0528 22:06:14.783050 3018 log.go:172] (0xc00049d760) Data frame received for 5\nI0528 22:06:14.783085 3018 log.go:172] (0xc0006706e0) (5) Data frame handling\nI0528 22:06:14.783098 3018 log.go:172] (0xc0006706e0) (5) Data frame sent\nI0528 22:06:14.783105 3018 log.go:172] (0xc00049d760) Data frame received for 5\nI0528 22:06:14.783111 3018 log.go:172] (0xc0006706e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0528 22:06:14.783133 3018 log.go:172] (0xc00049d760) Data frame received for 3\nI0528 22:06:14.783140 3018 log.go:172] (0xc0009a6000) (3) Data frame handling\nI0528 22:06:14.783148 3018 log.go:172] (0xc0009a6000) (3) Data frame sent\nI0528 22:06:14.783156 3018 log.go:172] (0xc00049d760) Data frame received for 3\nI0528 22:06:14.783170 3018 log.go:172] (0xc0009a6000) (3) Data frame handling\nI0528 22:06:14.784299 3018 log.go:172] (0xc00049d760) Data frame received for 1\nI0528 22:06:14.784318 3018 log.go:172] (0xc0009a6500) (1) Data frame handling\nI0528 22:06:14.784331 3018 log.go:172] (0xc0009a6500) (1) Data frame sent\nI0528 22:06:14.784344 3018 log.go:172] (0xc00049d760) (0xc0009a6500) Stream removed, broadcasting: 1\nI0528 22:06:14.784359 3018 log.go:172] (0xc00049d760) Go away received\nI0528 22:06:14.784661 3018 log.go:172] (0xc00049d760) (0xc0009a6500) Stream removed, broadcasting: 1\nI0528 22:06:14.784686 3018 log.go:172] (0xc00049d760) (0xc0009a6000) Stream removed, broadcasting: 3\nI0528 22:06:14.784696 3018 log.go:172] (0xc00049d760) (0xc0006706e0) Stream removed, broadcasting: 5\n" May 28 22:06:14.788: INFO: stdout: 
"'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 28 22:06:14.788: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 28 22:06:14.791: INFO: Found 1 stateful pods, waiting for 3 May 28 22:06:24.795: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 28 22:06:24.795: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 28 22:06:24.795: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Verifying that stateful set ss was scaled up in order STEP: Scale down will halt with unhealthy stateful pod May 28 22:06:24.800: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1187 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 28 22:06:25.023: INFO: stderr: "I0528 22:06:24.921077 3039 log.go:172] (0xc00057edc0) (0xc0005ddb80) Create stream\nI0528 22:06:24.921322 3039 log.go:172] (0xc00057edc0) (0xc0005ddb80) Stream added, broadcasting: 1\nI0528 22:06:24.924104 3039 log.go:172] (0xc00057edc0) Reply frame received for 1\nI0528 22:06:24.924142 3039 log.go:172] (0xc00057edc0) (0xc0009ce000) Create stream\nI0528 22:06:24.924154 3039 log.go:172] (0xc00057edc0) (0xc0009ce000) Stream added, broadcasting: 3\nI0528 22:06:24.925283 3039 log.go:172] (0xc00057edc0) Reply frame received for 3\nI0528 22:06:24.925329 3039 log.go:172] (0xc00057edc0) (0xc00020c000) Create stream\nI0528 22:06:24.925343 3039 log.go:172] (0xc00057edc0) (0xc00020c000) Stream added, broadcasting: 5\nI0528 22:06:24.926082 3039 log.go:172] (0xc00057edc0) Reply frame received for 5\nI0528 22:06:25.012804 3039 log.go:172] (0xc00057edc0) Data frame received for 5\nI0528 22:06:25.012856 3039 log.go:172] (0xc00020c000) (5) Data frame handling\nI0528 22:06:25.012876 3039 log.go:172] (0xc00020c000) (5) Data frame sent\nI0528 22:06:25.012889 3039 log.go:172] (0xc00057edc0) Data frame received for 5\nI0528 22:06:25.012902 3039 log.go:172] (0xc00020c000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0528 22:06:25.012960 3039 log.go:172] (0xc00057edc0) Data frame received for 3\nI0528 22:06:25.013001 3039 log.go:172] (0xc0009ce000) (3) Data frame handling\nI0528 22:06:25.013031 3039 log.go:172] (0xc0009ce000) (3) Data frame sent\nI0528 22:06:25.013044 3039 log.go:172] (0xc00057edc0) Data frame received for 3\nI0528 22:06:25.013055 3039 log.go:172] (0xc0009ce000) (3) Data frame handling\nI0528 22:06:25.015068 3039 log.go:172] (0xc00057edc0) Data frame received for 1\nI0528 22:06:25.015105 3039 log.go:172] (0xc0005ddb80) (1) Data frame handling\nI0528 22:06:25.015118 3039 log.go:172] (0xc0005ddb80) (1) Data frame sent\nI0528 22:06:25.015140 3039 log.go:172] (0xc00057edc0) (0xc0005ddb80) Stream removed, broadcasting: 1\nI0528 22:06:25.015171 3039 log.go:172] (0xc00057edc0) Go away received\nI0528 22:06:25.015835 3039 log.go:172] (0xc00057edc0) (0xc0005ddb80) Stream removed, broadcasting: 1\nI0528 22:06:25.015868 3039 log.go:172] (0xc00057edc0) (0xc0009ce000) Stream removed, broadcasting: 3\nI0528 22:06:25.015887 3039 log.go:172] (0xc00057edc0) (0xc00020c000) Stream removed, broadcasting: 5\n" May 28 22:06:25.023: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 28 22:06:25.023: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: 
'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 28 22:06:25.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1187 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 28 22:06:25.262: INFO: stderr: "I0528 22:06:25.159941 3061 log.go:172] (0xc000115600) (0xc0005f7d60) Create stream\nI0528 22:06:25.159996 3061 log.go:172] (0xc000115600) (0xc0005f7d60) Stream added, broadcasting: 1\nI0528 22:06:25.162833 3061 log.go:172] (0xc000115600) Reply frame received for 1\nI0528 22:06:25.162902 3061 log.go:172] (0xc000115600) (0xc00025d4a0) Create stream\nI0528 22:06:25.162931 3061 log.go:172] (0xc000115600) (0xc00025d4a0) Stream added, broadcasting: 3\nI0528 22:06:25.164812 3061 log.go:172] (0xc000115600) Reply frame received for 3\nI0528 22:06:25.164862 3061 log.go:172] (0xc000115600) (0xc00025d540) Create stream\nI0528 22:06:25.164892 3061 log.go:172] (0xc000115600) (0xc00025d540) Stream added, broadcasting: 5\nI0528 22:06:25.166535 3061 log.go:172] (0xc000115600) Reply frame received for 5\nI0528 22:06:25.222870 3061 log.go:172] (0xc000115600) Data frame received for 5\nI0528 22:06:25.222900 3061 log.go:172] (0xc00025d540) (5) Data frame handling\nI0528 22:06:25.222932 3061 log.go:172] (0xc00025d540) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0528 22:06:25.254443 3061 log.go:172] (0xc000115600) Data frame received for 3\nI0528 22:06:25.254466 3061 log.go:172] (0xc00025d4a0) (3) Data frame handling\nI0528 22:06:25.254487 3061 log.go:172] (0xc00025d4a0) (3) Data frame sent\nI0528 22:06:25.254738 3061 log.go:172] (0xc000115600) Data frame received for 5\nI0528 22:06:25.254751 3061 log.go:172] (0xc00025d540) (5) Data frame handling\nI0528 22:06:25.254766 3061 log.go:172] (0xc000115600) Data frame received for 3\nI0528 22:06:25.254771 3061 log.go:172] (0xc00025d4a0) (3) Data frame handling\nI0528 22:06:25.256315 3061 log.go:172] (0xc000115600) Data frame received for 1\nI0528 22:06:25.256330 3061 log.go:172] (0xc0005f7d60) (1) Data frame handling\nI0528 22:06:25.256343 3061 log.go:172] (0xc0005f7d60) (1) Data frame sent\nI0528 22:06:25.256411 3061 log.go:172] (0xc000115600) (0xc0005f7d60) Stream removed, broadcasting: 1\nI0528 22:06:25.256526 3061 log.go:172] (0xc000115600) Go away received\nI0528 22:06:25.256667 3061 log.go:172] (0xc000115600) (0xc0005f7d60) Stream removed, broadcasting: 1\nI0528 22:06:25.256679 3061 log.go:172] (0xc000115600) (0xc00025d4a0) Stream removed, broadcasting: 3\nI0528 22:06:25.256685 3061 log.go:172] (0xc000115600) (0xc00025d540) Stream removed, broadcasting: 5\n" May 28 22:06:25.262: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 28 22:06:25.262: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 28 22:06:25.262: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1187 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 28 22:06:25.503: INFO: stderr: "I0528 22:06:25.382425 3083 log.go:172] (0xc000958210) (0xc000a38140) Create stream\nI0528 22:06:25.382814 3083 log.go:172] (0xc000958210) (0xc000a38140) Stream added, broadcasting: 1\nI0528 22:06:25.386367 3083 log.go:172] (0xc000958210) Reply frame received for 1\nI0528 22:06:25.386423 3083 log.go:172] (0xc000958210) (0xc000506640) Create stream\nI0528 22:06:25.386455 3083 log.go:172] 
(0xc000958210) (0xc000506640) Stream added, broadcasting: 3\nI0528 22:06:25.387365 3083 log.go:172] (0xc000958210) Reply frame received for 3\nI0528 22:06:25.387421 3083 log.go:172] (0xc000958210) (0xc00078f0e0) Create stream\nI0528 22:06:25.387447 3083 log.go:172] (0xc000958210) (0xc00078f0e0) Stream added, broadcasting: 5\nI0528 22:06:25.388150 3083 log.go:172] (0xc000958210) Reply frame received for 5\nI0528 22:06:25.463372 3083 log.go:172] (0xc000958210) Data frame received for 5\nI0528 22:06:25.463417 3083 log.go:172] (0xc00078f0e0) (5) Data frame handling\nI0528 22:06:25.463451 3083 log.go:172] (0xc00078f0e0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0528 22:06:25.494321 3083 log.go:172] (0xc000958210) Data frame received for 3\nI0528 22:06:25.494347 3083 log.go:172] (0xc000506640) (3) Data frame handling\nI0528 22:06:25.494490 3083 log.go:172] (0xc000506640) (3) Data frame sent\nI0528 22:06:25.494590 3083 log.go:172] (0xc000958210) Data frame received for 5\nI0528 22:06:25.494624 3083 log.go:172] (0xc00078f0e0) (5) Data frame handling\nI0528 22:06:25.494658 3083 log.go:172] (0xc000958210) Data frame received for 3\nI0528 22:06:25.494670 3083 log.go:172] (0xc000506640) (3) Data frame handling\nI0528 22:06:25.496735 3083 log.go:172] (0xc000958210) Data frame received for 1\nI0528 22:06:25.496772 3083 log.go:172] (0xc000a38140) (1) Data frame handling\nI0528 22:06:25.496800 3083 log.go:172] (0xc000a38140) (1) Data frame sent\nI0528 22:06:25.496828 3083 log.go:172] (0xc000958210) (0xc000a38140) Stream removed, broadcasting: 1\nI0528 22:06:25.496850 3083 log.go:172] (0xc000958210) Go away received\nI0528 22:06:25.497455 3083 log.go:172] (0xc000958210) (0xc000a38140) Stream removed, broadcasting: 1\nI0528 22:06:25.497483 3083 log.go:172] (0xc000958210) (0xc000506640) Stream removed, broadcasting: 3\nI0528 22:06:25.497493 3083 log.go:172] (0xc000958210) (0xc00078f0e0) Stream removed, broadcasting: 5\n" May 28 22:06:25.503: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 28 22:06:25.503: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 28 22:06:25.503: INFO: Waiting for statefulset status.replicas updated to 0 May 28 22:06:25.548: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 28 22:06:35.556: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 28 22:06:35.556: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 28 22:06:35.556: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 28 22:06:35.569: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999353s May 28 22:06:36.575: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.994102669s May 28 22:06:37.580: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.988778074s May 28 22:06:38.587: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.983273511s May 28 22:06:39.592: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.975889911s May 28 22:06:40.598: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.971347077s May 28 22:06:41.603: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.965550397s May 28 22:06:42.609: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.959891891s May 28 22:06:43.614: INFO: 
Verifying statefulset ss doesn't scale past 3 for another 1.954366941s May 28 22:06:44.620: INFO: Verifying statefulset ss doesn't scale past 3 for another 948.9971ms STEP: Scaling down stateful set ss to 0 replicas and waiting until none of the pods will run in namespace statefulset-1187 May 28 22:06:45.626: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1187 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 28 22:06:45.847: INFO: stderr: "I0528 22:06:45.761752 3103 log.go:172] (0xc000868f20) (0xc00083a500) Create stream\nI0528 22:06:45.761836 3103 log.go:172] (0xc000868f20) (0xc00083a500) Stream added, broadcasting: 1\nI0528 22:06:45.767087 3103 log.go:172] (0xc000868f20) Reply frame received for 1\nI0528 22:06:45.767144 3103 log.go:172] (0xc000868f20) (0xc00083a000) Create stream\nI0528 22:06:45.767164 3103 log.go:172] (0xc000868f20) (0xc00083a000) Stream added, broadcasting: 3\nI0528 22:06:45.768212 3103 log.go:172] (0xc000868f20) Reply frame received for 3\nI0528 22:06:45.768265 3103 log.go:172] (0xc000868f20) (0xc00061c6e0) Create stream\nI0528 22:06:45.768285 3103 log.go:172] (0xc000868f20) (0xc00061c6e0) Stream added, broadcasting: 5\nI0528 22:06:45.769347 3103 log.go:172] (0xc000868f20) Reply frame received for 5\nI0528 22:06:45.840362 3103 log.go:172] (0xc000868f20) Data frame received for 5\nI0528 22:06:45.840403 3103 log.go:172] (0xc00061c6e0) (5) Data frame handling\nI0528 22:06:45.840431 3103 log.go:172] (0xc000868f20) Data frame received for 3\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0528 22:06:45.840456 3103 log.go:172] (0xc00083a000) (3) Data frame handling\nI0528 22:06:45.840469 3103 log.go:172] (0xc00083a000) (3) Data frame sent\nI0528 22:06:45.840479 3103 log.go:172] (0xc000868f20) Data frame received for 3\nI0528 22:06:45.840486 3103 log.go:172] (0xc00083a000) (3) Data frame handling\nI0528 22:06:45.840515 3103 log.go:172] (0xc00061c6e0) (5) Data frame sent\nI0528 22:06:45.840538 3103 log.go:172] (0xc000868f20) Data frame received for 5\nI0528 22:06:45.840548 3103 log.go:172] (0xc00061c6e0) (5) Data frame handling\nI0528 22:06:45.842076 3103 log.go:172] (0xc000868f20) Data frame received for 1\nI0528 22:06:45.842096 3103 log.go:172] (0xc00083a500) (1) Data frame handling\nI0528 22:06:45.842119 3103 log.go:172] (0xc00083a500) (1) Data frame sent\nI0528 22:06:45.842133 3103 log.go:172] (0xc000868f20) (0xc00083a500) Stream removed, broadcasting: 1\nI0528 22:06:45.842157 3103 log.go:172] (0xc000868f20) Go away received\nI0528 22:06:45.842465 3103 log.go:172] (0xc000868f20) (0xc00083a500) Stream removed, broadcasting: 1\nI0528 22:06:45.842485 3103 log.go:172] (0xc000868f20) (0xc00083a000) Stream removed, broadcasting: 3\nI0528 22:06:45.842493 3103 log.go:172] (0xc000868f20) (0xc00061c6e0) Stream removed, broadcasting: 5\n" May 28 22:06:45.847: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 28 22:06:45.847: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 28 22:06:45.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1187 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 28 22:06:46.051: INFO: stderr: "I0528 22:06:45.961683 3123 log.go:172] (0xc000a86630) (0xc0009c6000) Create stream\nI0528 22:06:45.961731 3123 log.go:172] (0xc000a86630) (0xc0009c6000) Stream added,
broadcasting: 1\nI0528 22:06:45.964079 3123 log.go:172] (0xc000a86630) Reply frame received for 1\nI0528 22:06:45.964115 3123 log.go:172] (0xc000a86630) (0xc0009a6000) Create stream\nI0528 22:06:45.964127 3123 log.go:172] (0xc000a86630) (0xc0009a6000) Stream added, broadcasting: 3\nI0528 22:06:45.965048 3123 log.go:172] (0xc000a86630) Reply frame received for 3\nI0528 22:06:45.965074 3123 log.go:172] (0xc000a86630) (0xc000701b80) Create stream\nI0528 22:06:45.965083 3123 log.go:172] (0xc000a86630) (0xc000701b80) Stream added, broadcasting: 5\nI0528 22:06:45.966098 3123 log.go:172] (0xc000a86630) Reply frame received for 5\nI0528 22:06:46.042894 3123 log.go:172] (0xc000a86630) Data frame received for 3\nI0528 22:06:46.042939 3123 log.go:172] (0xc0009a6000) (3) Data frame handling\nI0528 22:06:46.042965 3123 log.go:172] (0xc0009a6000) (3) Data frame sent\nI0528 22:06:46.042987 3123 log.go:172] (0xc000a86630) Data frame received for 3\nI0528 22:06:46.043002 3123 log.go:172] (0xc0009a6000) (3) Data frame handling\nI0528 22:06:46.043038 3123 log.go:172] (0xc000a86630) Data frame received for 5\nI0528 22:06:46.043062 3123 log.go:172] (0xc000701b80) (5) Data frame handling\nI0528 22:06:46.043088 3123 log.go:172] (0xc000701b80) (5) Data frame sent\nI0528 22:06:46.043109 3123 log.go:172] (0xc000a86630) Data frame received for 5\nI0528 22:06:46.043149 3123 log.go:172] (0xc000701b80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0528 22:06:46.045069 3123 log.go:172] (0xc000a86630) Data frame received for 1\nI0528 22:06:46.045100 3123 log.go:172] (0xc0009c6000) (1) Data frame handling\nI0528 22:06:46.045297 3123 log.go:172] (0xc0009c6000) (1) Data frame sent\nI0528 22:06:46.045327 3123 log.go:172] (0xc000a86630) (0xc0009c6000) Stream removed, broadcasting: 1\nI0528 22:06:46.045352 3123 log.go:172] (0xc000a86630) Go away received\nI0528 22:06:46.045829 3123 log.go:172] (0xc000a86630) (0xc0009c6000) Stream removed, broadcasting: 1\nI0528 22:06:46.045875 3123 log.go:172] (0xc000a86630) (0xc0009a6000) Stream removed, broadcasting: 3\nI0528 22:06:46.045901 3123 log.go:172] (0xc000a86630) (0xc000701b80) Stream removed, broadcasting: 5\n" May 28 22:06:46.051: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 28 22:06:46.051: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 28 22:06:46.051: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-1187 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 28 22:06:46.258: INFO: stderr: "I0528 22:06:46.172317 3145 log.go:172] (0xc0000f4b00) (0xc00076f4a0) Create stream\nI0528 22:06:46.172381 3145 log.go:172] (0xc0000f4b00) (0xc00076f4a0) Stream added, broadcasting: 1\nI0528 22:06:46.175812 3145 log.go:172] (0xc0000f4b00) Reply frame received for 1\nI0528 22:06:46.175851 3145 log.go:172] (0xc0000f4b00) (0xc0006f7a40) Create stream\nI0528 22:06:46.175862 3145 log.go:172] (0xc0000f4b00) (0xc0006f7a40) Stream added, broadcasting: 3\nI0528 22:06:46.176992 3145 log.go:172] (0xc0000f4b00) Reply frame received for 3\nI0528 22:06:46.177040 3145 log.go:172] (0xc0000f4b00) (0xc000a2a000) Create stream\nI0528 22:06:46.177054 3145 log.go:172] (0xc0000f4b00) (0xc000a2a000) Stream added, broadcasting: 5\nI0528 22:06:46.178587 3145 log.go:172] (0xc0000f4b00) Reply frame received for 5\nI0528 22:06:46.250925 3145 log.go:172] (0xc0000f4b00) Data 
frame received for 5\nI0528 22:06:46.250983 3145 log.go:172] (0xc000a2a000) (5) Data frame handling\nI0528 22:06:46.251004 3145 log.go:172] (0xc000a2a000) (5) Data frame sent\nI0528 22:06:46.251019 3145 log.go:172] (0xc0000f4b00) Data frame received for 5\nI0528 22:06:46.251030 3145 log.go:172] (0xc000a2a000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0528 22:06:46.251081 3145 log.go:172] (0xc0000f4b00) Data frame received for 3\nI0528 22:06:46.251111 3145 log.go:172] (0xc0006f7a40) (3) Data frame handling\nI0528 22:06:46.251159 3145 log.go:172] (0xc0006f7a40) (3) Data frame sent\nI0528 22:06:46.251188 3145 log.go:172] (0xc0000f4b00) Data frame received for 3\nI0528 22:06:46.251279 3145 log.go:172] (0xc0006f7a40) (3) Data frame handling\nI0528 22:06:46.252120 3145 log.go:172] (0xc0000f4b00) Data frame received for 1\nI0528 22:06:46.252143 3145 log.go:172] (0xc00076f4a0) (1) Data frame handling\nI0528 22:06:46.252156 3145 log.go:172] (0xc00076f4a0) (1) Data frame sent\nI0528 22:06:46.252321 3145 log.go:172] (0xc0000f4b00) (0xc00076f4a0) Stream removed, broadcasting: 1\nI0528 22:06:46.252409 3145 log.go:172] (0xc0000f4b00) Go away received\nI0528 22:06:46.252841 3145 log.go:172] (0xc0000f4b00) (0xc00076f4a0) Stream removed, broadcasting: 1\nI0528 22:06:46.252869 3145 log.go:172] (0xc0000f4b00) (0xc0006f7a40) Stream removed, broadcasting: 3\nI0528 22:06:46.252881 3145 log.go:172] (0xc0000f4b00) (0xc000a2a000) Stream removed, broadcasting: 5\n" May 28 22:06:46.258: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 28 22:06:46.258: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 28 22:06:46.258: INFO: Scaling statefulset ss to 0 STEP: Verifying that stateful set ss was scaled down in reverse order [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 28 22:07:06.289: INFO: Deleting all statefulset in ns statefulset-1187 May 28 22:07:06.292: INFO: Scaling statefulset ss to 0 May 28 22:07:06.301: INFO: Waiting for statefulset status.replicas updated to 0 May 28 22:07:06.304: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:07:06.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1187" for this suite. 
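(The mv of index.html out of and back into htdocs is how the test toggles the httpd readiness probe: with the file missing the probe fails, the pod reports Ready=false, and the StatefulSet's OrderedReady pod management, the default, refuses to create or delete further pods until it recovers. A fragment of the relevant spec, with illustrative values and the pod template omitted.)

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// orderedStatefulSet sketches the spec fields that make scaling halt on an
// unhealthy pod; the pod template (httpd plus readiness probe) is omitted.
func orderedStatefulSet() *appsv1.StatefulSet {
	replicas := int32(3)
	return &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "ss", Namespace: "statefulset-1187"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    &replicas,
			ServiceName: "test", // the headless service created in BeforeEach above
			// OrderedReady creates pods one at a time in ordinal order and
			// deletes them in reverse, pausing while any pod is not Ready.
			PodManagementPolicy: appsv1.OrderedReadyPodManagement,
			Selector: &metav1.LabelSelector{
				MatchLabels: map[string]string{"baz": "blah", "foo": "bar"}, // selector from this run
			},
		},
	}
}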
• [SLOW TEST:82.449 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":205,"skipped":3166,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:07:06.329: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:07:10.451: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5631" for this suite. 
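(For a container whose command always fails, the kubelet records the outcome under Status.ContainerStatuses[].State.Terminated, and the test asserts that its Reason is populated. A minimal sketch of reading that field, with the same pre-context client-go assumption as above.)

package sketch

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// terminatedReason returns the termination reason of the first terminated
// container in the pod, or "" if none has terminated yet.
func terminatedReason(c kubernetes.Interface, ns, name string) (string, error) {
	pod, err := c.CoreV1().Pods(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return "", err
	}
	for _, cs := range pod.Status.ContainerStatuses {
		if cs.State.Terminated != nil {
			return cs.State.Terminated.Reason, nil // typically "Error" for a failing command
		}
	}
	return "", nil
}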
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3215,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:07:10.460: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 28 22:07:10.837: INFO: Pod name pod-release: Found 0 pods out of 1 May 28 22:07:15.842: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:07:15.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-9229" for this suite. • [SLOW TEST:5.509 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":207,"skipped":3234,"failed":0} SSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:07:15.969: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-8bea48e3-75ea-4273-b6af-072da01378ce STEP: Creating configMap with name cm-test-opt-upd-134009b4-7efb-4f22-9dd3-99d7a20e5b7d STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-8bea48e3-75ea-4273-b6af-072da01378ce STEP: Updating configmap cm-test-opt-upd-134009b4-7efb-4f22-9dd3-99d7a20e5b7d STEP: Creating configMap with name cm-test-opt-create-8fd2819b-af3a-414e-94dc-a4e7989f2616 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:08:46.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: 
Destroying namespace "configmap-180" for this suite. • [SLOW TEST:90.631 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3239,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:08:46.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0528 22:09:26.956276 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 28 22:09:26.956: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:09:26.956: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3549" for this suite. 
• [SLOW TEST:40.363 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":209,"skipped":3317,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:09:26.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 22:09:27.006: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 28 22:09:29.916: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7052 create -f -' May 28 22:09:33.590: INFO: stderr: "" May 28 22:09:33.590: INFO: stdout: "e2e-test-crd-publish-openapi-5101-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 28 22:09:33.590: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7052 delete e2e-test-crd-publish-openapi-5101-crds test-cr' May 28 22:09:33.772: INFO: stderr: "" May 28 22:09:33.772: INFO: stdout: "e2e-test-crd-publish-openapi-5101-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 28 22:09:33.772: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7052 apply -f -' May 28 22:09:34.035: INFO: stderr: "" May 28 22:09:34.035: INFO: stdout: "e2e-test-crd-publish-openapi-5101-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 28 22:09:34.035: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-7052 delete e2e-test-crd-publish-openapi-5101-crds test-cr' May 28 22:09:34.172: INFO: stderr: "" May 28 22:09:34.172: INFO: stdout: "e2e-test-crd-publish-openapi-5101-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 28 22:09:34.172: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-5101-crds' May 28 22:09:34.968: INFO: stderr: "" May 28 22:09:34.968: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-5101-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:09:39.143: INFO: Waiting up to 3m0s for 
all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-7052" for this suite. • [SLOW TEST:12.183 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":210,"skipped":3328,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:09:39.148: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 28 22:09:39.498: INFO: Waiting up to 5m0s for pod "downwardapi-volume-630df18d-c464-4a22-8a51-c1d100f8072a" in namespace "downward-api-3903" to be "success or failure" May 28 22:09:39.507: INFO: Pod "downwardapi-volume-630df18d-c464-4a22-8a51-c1d100f8072a": Phase="Pending", Reason="", readiness=false. Elapsed: 9.67646ms May 28 22:09:41.527: INFO: Pod "downwardapi-volume-630df18d-c464-4a22-8a51-c1d100f8072a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.029321719s May 28 22:09:43.531: INFO: Pod "downwardapi-volume-630df18d-c464-4a22-8a51-c1d100f8072a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033340358s STEP: Saw pod success May 28 22:09:43.531: INFO: Pod "downwardapi-volume-630df18d-c464-4a22-8a51-c1d100f8072a" satisfied condition "success or failure" May 28 22:09:43.533: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-630df18d-c464-4a22-8a51-c1d100f8072a container client-container: STEP: delete the pod May 28 22:09:43.595: INFO: Waiting for pod downwardapi-volume-630df18d-c464-4a22-8a51-c1d100f8072a to disappear May 28 22:09:43.615: INFO: Pod downwardapi-volume-630df18d-c464-4a22-8a51-c1d100f8072a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:09:43.615: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3903" for this suite. 
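The Downward API volume exercised above exposes the container's own CPU limit as a file. A minimal sketch, assuming any recent cluster; pod and path names are illustrative, not the test's fixtures. With divisor 1m, a 500m limit is rendered as the string 500:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downward-cpu-limit
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/cpu_limit"]
    resources:
      requests:
        cpu: 250m
      limits:
        cpu: 500m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_limit
        resourceFieldRef:
          containerName: client-container
          resource: limits.cpu
          divisor: 1m          # file contents: "500"
EOF
kubectl logs downward-cpu-limit   # expect: 500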
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3351,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:09:43.647: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 22:09:43.699: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 28 22:09:46.588: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5388 create -f -' May 28 22:09:50.458: INFO: stderr: "" May 28 22:09:50.458: INFO: stdout: "e2e-test-crd-publish-openapi-2664-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 28 22:09:50.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5388 delete e2e-test-crd-publish-openapi-2664-crds test-foo' May 28 22:09:50.563: INFO: stderr: "" May 28 22:09:50.563: INFO: stdout: "e2e-test-crd-publish-openapi-2664-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 28 22:09:50.563: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5388 apply -f -' May 28 22:09:50.813: INFO: stderr: "" May 28 22:09:50.813: INFO: stdout: "e2e-test-crd-publish-openapi-2664-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 28 22:09:50.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5388 delete e2e-test-crd-publish-openapi-2664-crds test-foo' May 28 22:09:50.933: INFO: stderr: "" May 28 22:09:50.933: INFO: stdout: "e2e-test-crd-publish-openapi-2664-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 28 22:09:50.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5388 create -f -' May 28 22:09:51.928: INFO: rc: 1 May 28 22:09:51.928: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5388 apply -f -' May 28 22:09:52.200: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 28 22:09:52.200: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5388 create -f -' May 28 22:09:52.458: INFO: rc: 1 May 28 22:09:52.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5388 apply -f -' May 28 22:09:52.715: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 28 22:09:52.716: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2664-crds' May 28 22:09:52.973: INFO: stderr: "" May 28 22:09:52.973: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2664-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 28 22:09:52.974: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2664-crds.metadata' May 28 22:09:53.227: INFO: stderr: "" May 28 22:09:53.227: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2664-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. 
As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. 
A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n" May 28 22:09:53.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2664-crds.spec' May 28 22:09:53.489: INFO: stderr: "" May 28 22:09:53.489: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2664-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n" May 28 22:09:53.489: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2664-crds.spec.bars' May 28 22:09:53.786: INFO: stderr: "" May 28 22:09:53.786: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2664-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n" STEP: kubectl explain works to return error when explain is called on property that doesn't exist May 28 22:09:53.786: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2664-crds.spec.bars2' May 28 22:09:54.120: INFO: rc: 1 [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:09:56.982: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5388" for this suite. • [SLOW TEST:13.343 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":212,"skipped":3356,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:09:56.990: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 pods, got 2 pods STEP: expected 0 rs, got 1 rs STEP: Gathering metrics W0528 22:09:58.126847 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
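(As an aside to the kubectl explain checks verified in the preceding CRD spec: explain walks the OpenAPI schema the apiserver publishes for a CRD, one property path at a time. A sketch, assuming a CRD whose plural resource name is foos has already been created; the name is hypothetical:

kubectl explain foos
kubectl explain foos.spec
kubectl explain foos.spec.bars
# A property missing from the schema yields rc 1, as seen above:
kubectl explain foos.spec.doesnotexist || echo "not found, as expected")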
May 28 22:09:58.126: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:09:58.126: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-300" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":213,"skipped":3360,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:09:58.135: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 28 22:10:04.365: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-180 PodName:pod-sharedvolume-96122f88-4c73-41ed-960c-71048da19538 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 22:10:04.365: INFO: >>> kubeConfig: /root/.kube/config I0528 22:10:04.402353 6 log.go:172] (0xc0064b82c0) (0xc00276b720) Create stream I0528 22:10:04.402388 6 log.go:172] (0xc0064b82c0) (0xc00276b720) Stream added, broadcasting: 1 I0528 22:10:04.404315 6 log.go:172] (0xc0064b82c0) Reply frame received for 1 I0528 22:10:04.404367 6 log.go:172] (0xc0064b82c0) (0xc001714000) Create stream I0528 22:10:04.404385 6 log.go:172] (0xc0064b82c0) (0xc001714000) Stream added, broadcasting: 3 I0528 22:10:04.406043 6 log.go:172] (0xc0064b82c0) Reply frame received for 3 I0528 22:10:04.406089 6 log.go:172] (0xc0064b82c0) (0xc00276b860) Create stream I0528 22:10:04.406108 6 log.go:172] (0xc0064b82c0) (0xc00276b860) Stream added, broadcasting: 5 I0528 22:10:04.407159 6 log.go:172] (0xc0064b82c0) Reply frame received for 5 I0528 22:10:04.493385 6 log.go:172] (0xc0064b82c0) Data frame received for 5 I0528 22:10:04.493425 6 log.go:172] (0xc00276b860) 
(5) Data frame handling I0528 22:10:04.493451 6 log.go:172] (0xc0064b82c0) Data frame received for 3 I0528 22:10:04.493466 6 log.go:172] (0xc001714000) (3) Data frame handling I0528 22:10:04.493484 6 log.go:172] (0xc001714000) (3) Data frame sent I0528 22:10:04.493498 6 log.go:172] (0xc0064b82c0) Data frame received for 3 I0528 22:10:04.493511 6 log.go:172] (0xc001714000) (3) Data frame handling I0528 22:10:04.494671 6 log.go:172] (0xc0064b82c0) Data frame received for 1 I0528 22:10:04.494697 6 log.go:172] (0xc00276b720) (1) Data frame handling I0528 22:10:04.494716 6 log.go:172] (0xc00276b720) (1) Data frame sent I0528 22:10:04.494737 6 log.go:172] (0xc0064b82c0) (0xc00276b720) Stream removed, broadcasting: 1 I0528 22:10:04.494750 6 log.go:172] (0xc0064b82c0) Go away received I0528 22:10:04.494865 6 log.go:172] (0xc0064b82c0) (0xc00276b720) Stream removed, broadcasting: 1 I0528 22:10:04.494887 6 log.go:172] (0xc0064b82c0) (0xc001714000) Stream removed, broadcasting: 3 I0528 22:10:04.494898 6 log.go:172] (0xc0064b82c0) (0xc00276b860) Stream removed, broadcasting: 5 May 28 22:10:04.494: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:10:04.494: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-180" for this suite. • [SLOW TEST:6.367 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":214,"skipped":3382,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:10:04.502: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 28 22:10:04.533: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix565642885/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:10:04.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6507" for this suite. 
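The proxy spec above serves the apiserver API on a Unix domain socket instead of a TCP port. A reproduction sketch; the socket path is arbitrary, and curl's --unix-socket flag handles the client side:

kubectl proxy --unix-socket=/tmp/kubectl-proxy.sock &
sleep 1
curl --silent --unix-socket /tmp/kubectl-proxy.sock http://localhost/api/
kill $!   # stop the background proxy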
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":215,"skipped":3400,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:10:04.611: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:10:20.670: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7992" for this suite. • [SLOW TEST:16.067 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":216,"skipped":3418,"failed":0} SSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:10:20.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 28 22:10:21.814: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 28 22:10:23.823: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300621, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300621, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300621, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300621, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 22:10:25.828: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300621, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300621, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300621, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300621, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 22:10:28.861: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: create a namespace that bypass the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:10:39.027: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3988" for this suite. STEP: Destroying namespace "webhook-3988-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.491 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":217,"skipped":3429,"failed":0} SSSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:10:39.169: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 28 22:10:43.379: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:10:43.472: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-3457" for this suite. 
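FallbackToLogsOnError, asserted above, means the kubelet uses the tail of the container log as the termination message when the container fails without writing /dev/termination-log. A sketch with hypothetical names; the expected message is DONE, matching the check in the log:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox:1.29
    command: ["sh", "-c", "echo DONE; exit 1"]   # log output, then failure
    terminationMessagePolicy: FallbackToLogsOnError
EOF
kubectl get pod termination-demo \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'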
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":218,"skipped":3436,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:10:43.486: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 28 22:10:43.806: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c227a0cc-1ac8-4df8-9ebe-10a84af3ca37" in namespace "projected-1257" to be "success or failure" May 28 22:10:43.862: INFO: Pod "downwardapi-volume-c227a0cc-1ac8-4df8-9ebe-10a84af3ca37": Phase="Pending", Reason="", readiness=false. Elapsed: 56.630633ms May 28 22:10:45.867: INFO: Pod "downwardapi-volume-c227a0cc-1ac8-4df8-9ebe-10a84af3ca37": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060875049s May 28 22:10:47.871: INFO: Pod "downwardapi-volume-c227a0cc-1ac8-4df8-9ebe-10a84af3ca37": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.065219875s STEP: Saw pod success May 28 22:10:47.871: INFO: Pod "downwardapi-volume-c227a0cc-1ac8-4df8-9ebe-10a84af3ca37" satisfied condition "success or failure" May 28 22:10:47.874: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c227a0cc-1ac8-4df8-9ebe-10a84af3ca37 container client-container: STEP: delete the pod May 28 22:10:47.914: INFO: Waiting for pod downwardapi-volume-c227a0cc-1ac8-4df8-9ebe-10a84af3ca37 to disappear May 28 22:10:47.930: INFO: Pod downwardapi-volume-c227a0cc-1ac8-4df8-9ebe-10a84af3ca37 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:10:47.930: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1257" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3446,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:10:47.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-087e3d75-83dd-409e-b858-b7396946cee1 STEP: Creating secret with name s-test-opt-upd-240906e8-a2e2-4fa3-b893-10b331d9c557 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-087e3d75-83dd-409e-b858-b7396946cee1 STEP: Updating secret s-test-opt-upd-240906e8-a2e2-4fa3-b893-10b331d9c557 STEP: Creating secret with name s-test-opt-create-290375a6-0aa0-4cdf-86de-569087fc47eb STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:10:56.240: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7324" for this suite. • [SLOW TEST:8.310 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":220,"skipped":3470,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:10:56.250: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-7851af6b-70be-4a17-82cd-9b393201c54e STEP: Creating a pod to test consume secrets May 28 22:10:56.436: INFO: Waiting up to 5m0s for pod "pod-secrets-4e899e7d-9137-4549-b423-de3df62a3065" in namespace "secrets-3522" to be "success or 
failure" May 28 22:10:56.440: INFO: Pod "pod-secrets-4e899e7d-9137-4549-b423-de3df62a3065": Phase="Pending", Reason="", readiness=false. Elapsed: 3.6474ms May 28 22:10:58.563: INFO: Pod "pod-secrets-4e899e7d-9137-4549-b423-de3df62a3065": Phase="Pending", Reason="", readiness=false. Elapsed: 2.126532259s May 28 22:11:00.567: INFO: Pod "pod-secrets-4e899e7d-9137-4549-b423-de3df62a3065": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.130945732s STEP: Saw pod success May 28 22:11:00.567: INFO: Pod "pod-secrets-4e899e7d-9137-4549-b423-de3df62a3065" satisfied condition "success or failure" May 28 22:11:00.571: INFO: Trying to get logs from node jerma-worker pod pod-secrets-4e899e7d-9137-4549-b423-de3df62a3065 container secret-volume-test: STEP: delete the pod May 28 22:11:00.736: INFO: Waiting for pod pod-secrets-4e899e7d-9137-4549-b423-de3df62a3065 to disappear May 28 22:11:00.752: INFO: Pod pod-secrets-4e899e7d-9137-4549-b423-de3df62a3065 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:11:00.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3522" for this suite. STEP: Destroying namespace "secret-namespace-6640" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":221,"skipped":3539,"failed":0} SSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:11:00.880: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 22:11:01.000: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5258' May 28 22:11:01.341: INFO: stderr: "" May 28 22:11:01.341: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 28 22:11:01.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5258' May 28 22:11:01.728: INFO: stderr: "" May 28 22:11:01.728: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. 
May 28 22:11:02.732: INFO: Selector matched 1 pods for map[app:agnhost] May 28 22:11:02.732: INFO: Found 0 / 1 May 28 22:11:03.805: INFO: Selector matched 1 pods for map[app:agnhost] May 28 22:11:03.805: INFO: Found 0 / 1 May 28 22:11:04.733: INFO: Selector matched 1 pods for map[app:agnhost] May 28 22:11:04.733: INFO: Found 0 / 1 May 28 22:11:05.734: INFO: Selector matched 1 pods for map[app:agnhost] May 28 22:11:05.734: INFO: Found 1 / 1 May 28 22:11:05.734: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 28 22:11:05.737: INFO: Selector matched 1 pods for map[app:agnhost] May 28 22:11:05.737: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 28 22:11:05.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-9hgkq --namespace=kubectl-5258' May 28 22:11:05.845: INFO: stderr: "" May 28 22:11:05.845: INFO: stdout: "Name: agnhost-master-9hgkq\nNamespace: kubectl-5258\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Thu, 28 May 2020 22:11:01 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.121\nIPs:\n IP: 10.244.1.121\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://4707e50e5bbca2d898456de8c47e6444dc64a60e577cab757e60804ff1cc1887\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Thu, 28 May 2020 22:11:04 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-nr9mf (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-nr9mf:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-nr9mf\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-5258/agnhost-master-9hgkq to jerma-worker\n Normal Pulled 3s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker Started container agnhost-master\n" May 28 22:11:05.846: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-5258' May 28 22:11:05.967: INFO: stderr: "" May 28 22:11:05.967: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5258\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 4s replication-controller Created pod: agnhost-master-9hgkq\n" May 28 22:11:05.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service 
agnhost-master --namespace=kubectl-5258' May 28 22:11:06.113: INFO: stderr: "" May 28 22:11:06.113: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-5258\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.100.47.216\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.121:6379\nSession Affinity: None\nEvents: \n" May 28 22:11:06.116: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 28 22:11:06.246: INFO: stderr: "" May 28 22:11:06.246: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Thu, 28 May 2020 22:10:56 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Thu, 28 May 2020 22:07:45 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Thu, 28 May 2020 22:07:45 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Thu, 28 May 2020 22:07:45 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Thu, 28 May 2020 22:07:45 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 74d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 74d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 74d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 74d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 74d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 74d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 74d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 74d\n 
local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 74d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 28 22:11:06.246: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-5258' May 28 22:11:06.361: INFO: stderr: "" May 28 22:11:06.361: INFO: stdout: "Name: kubectl-5258\nLabels: e2e-framework=kubectl\n e2e-run=6158ed5d-5c0a-4e3c-9d21-bdfbad1f01b2\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:11:06.361: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5258" for this suite. • [SLOW TEST:5.489 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":222,"skipped":3547,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:11:06.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 28 22:11:06.518: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5941 /api/v1/namespaces/watch-5941/configmaps/e2e-watch-test-label-changed ed00bde3-f935-405d-82c4-493b1ef4651c 19915345 0 2020-05-28 22:11:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 28 22:11:06.518: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5941 /api/v1/namespaces/watch-5941/configmaps/e2e-watch-test-label-changed ed00bde3-f935-405d-82c4-493b1ef4651c 19915347 0 2020-05-28 22:11:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 
1,},BinaryData:map[string][]byte{},} May 28 22:11:06.518: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5941 /api/v1/namespaces/watch-5941/configmaps/e2e-watch-test-label-changed ed00bde3-f935-405d-82c4-493b1ef4651c 19915349 0 2020-05-28 22:11:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add notification for the watched object when the label value was restored May 28 22:11:16.600: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5941 /api/v1/namespaces/watch-5941/configmaps/e2e-watch-test-label-changed ed00bde3-f935-405d-82c4-493b1ef4651c 19915394 0 2020-05-28 22:11:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 28 22:11:16.600: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5941 /api/v1/namespaces/watch-5941/configmaps/e2e-watch-test-label-changed ed00bde3-f935-405d-82c4-493b1ef4651c 19915395 0 2020-05-28 22:11:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 28 22:11:16.600: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-5941 /api/v1/namespaces/watch-5941/configmaps/e2e-watch-test-label-changed ed00bde3-f935-405d-82c4-493b1ef4651c 19915396 0 2020-05-28 22:11:06 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:11:16.600: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-5941" for this suite. 
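For context on the watch semantics verified above: an object whose label is changed so it no longer matches the watch's selector is delivered to that watcher as a DELETED event, and restoring the label produces a fresh ADDED. A minimal client-go sketch of such a watch, assuming client-go v0.18+ (context-taking signatures), a reachable kubeconfig, and the "default" namespace; the label value is taken from the log above, everything else is illustrative:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; any reachable cluster works.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Watch only configmaps carrying the label the test uses. An object whose
	// label stops matching is delivered here as DELETED; re-adding the label
	// shows up as ADDED, which is exactly the sequence asserted above.
	w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(), metav1.ListOptions{
		LabelSelector: "watch-this-configmap=label-changed-and-restored",
	})
	if err != nil {
		panic(err)
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		fmt.Printf("Got : %s %v\n", ev.Type, ev.Object)
	}
}

The framework's "Got : ADDED/MODIFIED/DELETED" lines in the log are the events read off this same channel.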
• [SLOW TEST:10.238 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":223,"skipped":3591,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:11:16.608: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create secret due to empty secret key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name secret-emptykey-test-67cc8080-808f-4071-bac0-d335ebdd7e59 [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:11:16.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-185" for this suite. •{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":224,"skipped":3608,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:11:16.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:11:32.876: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-444" for this suite. 
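The ResourceQuota lifecycle exercised above (create a quota, see its status capture a ConfigMap, see usage released on deletion) reduces to a few API calls. A sketch under the same client-go assumptions as the previous example; the quota name and the limit of 2 are illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// A quota that caps the number of ConfigMaps in the namespace, as the
	// test above sets up before creating and deleting a ConfigMap.
	quota := &corev1.ResourceQuota{
		ObjectMeta: metav1.ObjectMeta{Name: "test-quota"},
		Spec: corev1.ResourceQuotaSpec{
			Hard: corev1.ResourceList{
				corev1.ResourceConfigMaps: resource.MustParse("2"),
			},
		},
	}
	q, err := cs.CoreV1().ResourceQuotas("default").Create(context.TODO(), quota, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// The quota controller fills Status asynchronously; a real caller polls
	// until Status.Used reflects the ConfigMap count, as the test does with
	// its "Ensuring resource quota status ..." steps.
	fmt.Printf("hard=%v used=%v\n", q.Status.Hard, q.Status.Used)
}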
• [SLOW TEST:16.190 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance]","total":278,"completed":225,"skipped":3628,"failed":0} SSSSSSSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:11:32.886: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 28 22:11:32.961: INFO: Waiting up to 5m0s for pod "downward-api-21785f22-c1e9-45c0-845a-b04129c8708b" in namespace "downward-api-8316" to be "success or failure" May 28 22:11:32.967: INFO: Pod "downward-api-21785f22-c1e9-45c0-845a-b04129c8708b": Phase="Pending", Reason="", readiness=false. Elapsed: 6.385569ms May 28 22:11:34.972: INFO: Pod "downward-api-21785f22-c1e9-45c0-845a-b04129c8708b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010534385s May 28 22:11:36.976: INFO: Pod "downward-api-21785f22-c1e9-45c0-845a-b04129c8708b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.014875471s STEP: Saw pod success May 28 22:11:36.976: INFO: Pod "downward-api-21785f22-c1e9-45c0-845a-b04129c8708b" satisfied condition "success or failure" May 28 22:11:36.979: INFO: Trying to get logs from node jerma-worker pod downward-api-21785f22-c1e9-45c0-845a-b04129c8708b container dapi-container: STEP: delete the pod May 28 22:11:37.044: INFO: Waiting for pod downward-api-21785f22-c1e9-45c0-845a-b04129c8708b to disappear May 28 22:11:37.051: INFO: Pod downward-api-21785f22-c1e9-45c0-845a-b04129c8708b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:11:37.051: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8316" for this suite. 
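The Downward API test above injects pod metadata into the container through env[].valueFrom.fieldRef. A sketch of an equivalent pod, same client-go assumptions; busybox:1.29 (which appears elsewhere in this run) stands in for the e2e image, and the env-var names are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Helper for an env var sourced from a pod field via the downward API.
	fieldEnv := func(name, path string) corev1.EnvVar {
		return corev1.EnvVar{
			Name: name,
			ValueFrom: &corev1.EnvVarSource{
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: path},
			},
		}
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					// The three fields the conformance test asserts on.
					fieldEnv("POD_NAME", "metadata.name"),
					fieldEnv("POD_NAMESPACE", "metadata.namespace"),
					fieldEnv("POD_IP", "status.podIP"),
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}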
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":226,"skipped":3637,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:11:37.064: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 28 22:11:37.138: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3033 /api/v1/namespaces/watch-3033/configmaps/e2e-watch-test-configmap-a 2d6a86d9-8807-4549-9f29-5e7feb40af5d 19915508 0 2020-05-28 22:11:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 28 22:11:37.138: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3033 /api/v1/namespaces/watch-3033/configmaps/e2e-watch-test-configmap-a 2d6a86d9-8807-4549-9f29-5e7feb40af5d 19915508 0 2020-05-28 22:11:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 28 22:11:47.146: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3033 /api/v1/namespaces/watch-3033/configmaps/e2e-watch-test-configmap-a 2d6a86d9-8807-4549-9f29-5e7feb40af5d 19915550 0 2020-05-28 22:11:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 28 22:11:47.146: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3033 /api/v1/namespaces/watch-3033/configmaps/e2e-watch-test-configmap-a 2d6a86d9-8807-4549-9f29-5e7feb40af5d 19915550 0 2020-05-28 22:11:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 28 22:11:57.166: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3033 /api/v1/namespaces/watch-3033/configmaps/e2e-watch-test-configmap-a 2d6a86d9-8807-4549-9f29-5e7feb40af5d 19915581 0 2020-05-28 22:11:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 28 22:11:57.167: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3033 /api/v1/namespaces/watch-3033/configmaps/e2e-watch-test-configmap-a 2d6a86d9-8807-4549-9f29-5e7feb40af5d 19915581 0 2020-05-28 22:11:37 +0000 
UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 28 22:12:07.175: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3033 /api/v1/namespaces/watch-3033/configmaps/e2e-watch-test-configmap-a 2d6a86d9-8807-4549-9f29-5e7feb40af5d 19915609 0 2020-05-28 22:11:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 28 22:12:07.175: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-3033 /api/v1/namespaces/watch-3033/configmaps/e2e-watch-test-configmap-a 2d6a86d9-8807-4549-9f29-5e7feb40af5d 19915609 0 2020-05-28 22:11:37 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 28 22:12:17.182: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3033 /api/v1/namespaces/watch-3033/configmaps/e2e-watch-test-configmap-b 9995789c-6f4a-40ee-81dc-e5d5cb62ec1b 19915639 0 2020-05-28 22:12:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 28 22:12:17.182: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3033 /api/v1/namespaces/watch-3033/configmaps/e2e-watch-test-configmap-b 9995789c-6f4a-40ee-81dc-e5d5cb62ec1b 19915639 0 2020-05-28 22:12:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 28 22:12:27.189: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3033 /api/v1/namespaces/watch-3033/configmaps/e2e-watch-test-configmap-b 9995789c-6f4a-40ee-81dc-e5d5cb62ec1b 19915669 0 2020-05-28 22:12:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 28 22:12:27.190: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-3033 /api/v1/namespaces/watch-3033/configmaps/e2e-watch-test-configmap-b 9995789c-6f4a-40ee-81dc-e5d5cb62ec1b 19915669 0 2020-05-28 22:12:17 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:12:37.190: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-3033" for this suite. 
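The multi-watcher test above opens one watch per label value plus an A-or-B watch; the OR case can be expressed with a set-based label selector. A sketch, same client-go assumptions as before, with the label values taken from the log:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Open a watch for the given selector and print its events.
	watchFor := func(selector string) {
		w, err := cs.CoreV1().ConfigMaps("default").Watch(context.TODO(),
			metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			panic(err)
		}
		go func() {
			for ev := range w.ResultChan() {
				fmt.Printf("[%s] %s %v\n", selector, ev.Type, ev.Object)
			}
		}()
	}

	// One watcher per label value, plus an A-or-B watcher using a set-based
	// selector, mirroring the three watches the test creates. A configmap
	// labeled A is seen twice: once by the A watcher, once by the A-or-B
	// watcher, which is why the log shows each event in duplicate.
	watchFor("watch-this-configmap=multiple-watchers-A")
	watchFor("watch-this-configmap=multiple-watchers-B")
	watchFor("watch-this-configmap in (multiple-watchers-A,multiple-watchers-B)")
	select {} // block forever; events print from the goroutines
}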
• [SLOW TEST:60.135 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":227,"skipped":3646,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:12:37.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:12:37.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-159" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":228,"skipped":3666,"failed":0} ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:12:37.377: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 22:13:01.522: INFO: Container started at 2020-05-28 22:12:39 +0000 UTC, pod became ready at 2020-05-28 22:13:01 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:13:01.522: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7806" for this suite. 
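The readiness-probe test above checks that the pod does not report Ready before the probe's initial delay elapses (container started 22:12:39, pod ready 22:13:01) and that the container is never restarted. A sketch of a pod carrying such a probe, printed as JSON rather than created; this assumes k8s.io/api v0.23+, where the embedded field is named ProbeHandler (it was Handler in older releases), and the timings are illustrative rather than the test's actual values:

package main

import (
	"encoding/json"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	pod := corev1.Pod{
		TypeMeta:   metav1.TypeMeta{APIVersion: "v1", Kind: "Pod"},
		ObjectMeta: metav1.ObjectMeta{Name: "readiness-demo"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "probe-test",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "touch /tmp/ready && sleep 600"},
				ReadinessProbe: &corev1.Probe{
					ProbeHandler: corev1.ProbeHandler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/ready"}},
					},
					// The pod must not report Ready before this delay, even
					// though the probe command would already succeed.
					InitialDelaySeconds: 20,
					PeriodSeconds:       5,
				},
			}},
		},
	}
	enc := json.NewEncoder(os.Stdout)
	enc.SetIndent("", "  ")
	if err := enc.Encode(pod); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}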
• [SLOW TEST:24.152 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":229,"skipped":3666,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:13:01.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 22:13:01.621: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:13:02.192: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-8599" for this suite. 
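Getting/updating/patching a custom resource's status sub-resource, as tested above, requires the CRD to declare the status subresource. A sketch that registers such a CRD through the apiextensions clientset; the group, kind, and names are illustrative, not the test's generated ones:

package main

import (
	"context"

	apiextensionsv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := apiextensionsclient.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	crd := &apiextensionsv1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "demos.example.com"}, // must be <plural>.<group>
		Spec: apiextensionsv1.CustomResourceDefinitionSpec{
			Group: "example.com",
			Scope: apiextensionsv1.NamespaceScoped,
			Names: apiextensionsv1.CustomResourceDefinitionNames{
				Plural: "demos", Singular: "demo", Kind: "Demo", ListKind: "DemoList",
			},
			Versions: []apiextensionsv1.CustomResourceDefinitionVersion{{
				Name: "v1", Served: true, Storage: true,
				Schema: &apiextensionsv1.CustomResourceValidation{
					OpenAPIV3Schema: &apiextensionsv1.JSONSchemaProps{
						Type: "object",
						Properties: map[string]apiextensionsv1.JSONSchemaProps{
							"spec":   {Type: "object"},
							"status": {Type: "object"},
						},
					},
				},
				// Declaring the status subresource is what makes GET/PUT/PATCH
				// on .../status work, which is what the test above exercises;
				// writes through /status then only touch the status stanza.
				Subresources: &apiextensionsv1.CustomResourceSubresources{
					Status: &apiextensionsv1.CustomResourceSubresourceStatus{},
				},
			}},
		},
	}
	if _, err := cs.ApiextensionsV1().CustomResourceDefinitions().Create(context.TODO(), crd, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}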
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":230,"skipped":3668,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:13:02.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 28 22:13:02.315: INFO: PodSpec: initContainers in spec.initContainers May 28 22:13:53.611: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-e1f44c8e-772f-49c0-999a-d198506d8085", GenerateName:"", Namespace:"init-container-478", SelfLink:"/api/v1/namespaces/init-container-478/pods/pod-init-e1f44c8e-772f-49c0-999a-d198506d8085", UID:"db0979fd-68cf-43af-a3c2-28458b6223f6", ResourceVersion:"19916019", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726300782, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"name":"foo", "time":"315456819"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-fb242", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc0021c59c0), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), 
CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fb242", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fb242", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-fb242", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc003d49ef8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker2", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), 
SecurityContext:(*v1.PodSecurityContext)(0xc00299a3c0), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003d49f80)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc003d49fa0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc003d49fa8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc003d49fac), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300782, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300782, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300782, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300782, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.8", PodIP:"10.244.2.155", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.2.155"}}, StartTime:(*v1.Time)(0xc0022dfe20), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0016a5c00)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc0016a5c70)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://603de37c662319062c9fea672f190672386d7bcb675c7221a2ce005164d401c8", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0022dfe80), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", 
Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc0022dfe60), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc0048ee02f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:13:53.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-478" for this suite. • [SLOW TEST:51.410 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":231,"skipped":3683,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:13:53.640: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 22:13:53.719: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:13:57.851: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7080" for this suite. 
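Condensing the v1.Pod dump from the InitContainer test above: init1 runs /bin/false and fails on every attempt (RestartCount climbs under RestartPolicy Always), so init2 never starts and the app container run1 stays Waiting indefinitely, which is exactly what the test asserts. Stripped of the defaulted fields, that spec boils down to the sketch below (images and commands as logged; the resource limits and service-account volume are omitted):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-init-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyAlways,
			// Init containers run sequentially; init1 always exits nonzero,
			// so the kubelet restarts it with backoff and init2 never runs.
			InitContainers: []corev1.Container{
				{Name: "init1", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/false"}},
				{Name: "init2", Image: "docker.io/library/busybox:1.29", Command: []string{"/bin/true"}},
			},
			// The app container stays Waiting (PodInitializing) forever.
			Containers: []corev1.Container{
				{Name: "run1", Image: "k8s.gcr.io/pause:3.1"},
			},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}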
•{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":232,"skipped":3705,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:13:57.859: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 28 22:13:57.918: INFO: >>> kubeConfig: /root/.kube/config May 28 22:14:00.824: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:14:11.291: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-1184" for this suite. • [SLOW TEST:13.440 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":233,"skipped":3713,"failed":0} SSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:14:11.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-4c553587-ee5d-4bb3-a180-b8e724b01cbd STEP: Creating a pod to test consume secrets May 28 22:14:11.365: INFO: Waiting up to 5m0s for pod "pod-secrets-e1610eab-0bad-4902-9238-52a1afb279e8" in namespace "secrets-342" to be "success or failure" May 28 22:14:11.368: INFO: Pod "pod-secrets-e1610eab-0bad-4902-9238-52a1afb279e8": Phase="Pending", Reason="", 
readiness=false. Elapsed: 3.428591ms May 28 22:14:13.415: INFO: Pod "pod-secrets-e1610eab-0bad-4902-9238-52a1afb279e8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.049796662s May 28 22:14:15.420: INFO: Pod "pod-secrets-e1610eab-0bad-4902-9238-52a1afb279e8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.054863393s May 28 22:14:17.424: INFO: Pod "pod-secrets-e1610eab-0bad-4902-9238-52a1afb279e8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.058826904s STEP: Saw pod success May 28 22:14:17.424: INFO: Pod "pod-secrets-e1610eab-0bad-4902-9238-52a1afb279e8" satisfied condition "success or failure" May 28 22:14:17.427: INFO: Trying to get logs from node jerma-worker pod pod-secrets-e1610eab-0bad-4902-9238-52a1afb279e8 container secret-volume-test: STEP: delete the pod May 28 22:14:17.473: INFO: Waiting for pod pod-secrets-e1610eab-0bad-4902-9238-52a1afb279e8 to disappear May 28 22:14:17.484: INFO: Pod pod-secrets-e1610eab-0bad-4902-9238-52a1afb279e8 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:14:17.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-342" for this suite. • [SLOW TEST:6.191 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3720,"failed":0} S ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:14:17.490: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-downwardapi-l9jc STEP: Creating a pod to test atomic-volume-subpath May 28 22:14:17.600: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-l9jc" in namespace "subpath-6108" to be "success or failure" May 28 22:14:17.620: INFO: Pod "pod-subpath-test-downwardapi-l9jc": Phase="Pending", Reason="", readiness=false. Elapsed: 20.072706ms May 28 22:14:19.625: INFO: Pod "pod-subpath-test-downwardapi-l9jc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024461463s May 28 22:14:21.629: INFO: Pod "pod-subpath-test-downwardapi-l9jc": Phase="Running", Reason="", readiness=true. 
Elapsed: 4.028681414s May 28 22:14:23.634: INFO: Pod "pod-subpath-test-downwardapi-l9jc": Phase="Running", Reason="", readiness=true. Elapsed: 6.033177013s May 28 22:14:25.640: INFO: Pod "pod-subpath-test-downwardapi-l9jc": Phase="Running", Reason="", readiness=true. Elapsed: 8.040044525s May 28 22:14:27.644: INFO: Pod "pod-subpath-test-downwardapi-l9jc": Phase="Running", Reason="", readiness=true. Elapsed: 10.043784479s May 28 22:14:29.648: INFO: Pod "pod-subpath-test-downwardapi-l9jc": Phase="Running", Reason="", readiness=true. Elapsed: 12.047889353s May 28 22:14:31.653: INFO: Pod "pod-subpath-test-downwardapi-l9jc": Phase="Running", Reason="", readiness=true. Elapsed: 14.052668366s May 28 22:14:33.657: INFO: Pod "pod-subpath-test-downwardapi-l9jc": Phase="Running", Reason="", readiness=true. Elapsed: 16.056693317s May 28 22:14:35.661: INFO: Pod "pod-subpath-test-downwardapi-l9jc": Phase="Running", Reason="", readiness=true. Elapsed: 18.060532089s May 28 22:14:37.665: INFO: Pod "pod-subpath-test-downwardapi-l9jc": Phase="Running", Reason="", readiness=true. Elapsed: 20.064510525s May 28 22:14:39.669: INFO: Pod "pod-subpath-test-downwardapi-l9jc": Phase="Running", Reason="", readiness=true. Elapsed: 22.068830431s May 28 22:14:41.674: INFO: Pod "pod-subpath-test-downwardapi-l9jc": Phase="Running", Reason="", readiness=true. Elapsed: 24.073617106s May 28 22:14:43.679: INFO: Pod "pod-subpath-test-downwardapi-l9jc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 26.078532168s STEP: Saw pod success May 28 22:14:43.679: INFO: Pod "pod-subpath-test-downwardapi-l9jc" satisfied condition "success or failure" May 28 22:14:43.683: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-l9jc container test-container-subpath-downwardapi-l9jc: STEP: delete the pod May 28 22:14:43.721: INFO: Waiting for pod pod-subpath-test-downwardapi-l9jc to disappear May 28 22:14:43.728: INFO: Pod pod-subpath-test-downwardapi-l9jc no longer exists STEP: Deleting pod pod-subpath-test-downwardapi-l9jc May 28 22:14:43.729: INFO: Deleting pod "pod-subpath-test-downwardapi-l9jc" in namespace "subpath-6108" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:14:43.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6108" for this suite. 
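The atomic-writer subpath test above mounts a single file out of a downward API volume via subPath, then polls the pod through several update cycles of the volume. A sketch of the core spec, same client-go assumptions; mount paths and names are illustrative:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config") // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-subpath-downwardapi-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			// A downward API volume is one of the "atomic writer" volume
			// types: the kubelet rewrites its contents atomically on update.
			Volumes: []corev1.Volume{{
				Name: "downward",
				VolumeSource: corev1.VolumeSource{
					DownwardAPI: &corev1.DownwardAPIVolumeSource{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "podname",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "test-container-subpath",
				Image:   "docker.io/library/busybox:1.29",
				Command: []string{"sh", "-c", "cat /mnt/podname && sleep 30"},
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "downward",
					MountPath: "/mnt/podname",
					// SubPath mounts one file from the volume rather than the
					// whole directory, which is the case the test verifies.
					SubPath: "podname",
				}},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}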
• [SLOW TEST:26.247 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":235,"skipped":3721,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:14:43.739: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 28 22:14:44.621: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 28 22:14:46.633: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300884, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300884, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300884, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300884, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 22:14:48.652: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300884, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300884, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300884, loc:(*time.Location)(0x78ee0c0)}}, 
LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300884, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 22:14:51.669: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:14:51.678: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3149" for this suite. STEP: Destroying namespace "webhook-3149-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.045 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":236,"skipped":3754,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:14:51.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 28 22:14:58.435: INFO: Successfully updated pod "annotationupdateebf83b0c-d0ad-4093-bc59-18b4a4f9f5ec" [AfterEach] [sig-storage] Downward API volume 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:15:02.476: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9876" for this suite. • [SLOW TEST:10.699 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3772,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:15:02.483: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 28 22:15:03.433: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 28 22:15:05.512: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300903, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300903, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300903, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300903, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 28 22:15:07.568: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300903, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300903, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300903, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726300903, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 28 22:15:10.647: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 22:15:10.650: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-7393-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:15:12.098: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2207" for this suite. STEP: Destroying namespace "webhook-2207-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:9.938 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":238,"skipped":3789,"failed":0} SS ------------------------------ [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:15:12.421: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 22:15:12.703: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-9e3399fe-536e-4413-a47b-50afa616288a" in namespace "security-context-test-3267" to be "success or failure" May 28 22:15:12.706: INFO: Pod "alpine-nnp-false-9e3399fe-536e-4413-a47b-50afa616288a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.173164ms May 28 22:15:14.710: INFO: Pod "alpine-nnp-false-9e3399fe-536e-4413-a47b-50afa616288a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.00671871s May 28 22:15:16.780: INFO: Pod "alpine-nnp-false-9e3399fe-536e-4413-a47b-50afa616288a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.076636721s May 28 22:15:18.826: INFO: Pod "alpine-nnp-false-9e3399fe-536e-4413-a47b-50afa616288a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.122693584s May 28 22:15:18.826: INFO: Pod "alpine-nnp-false-9e3399fe-536e-4413-a47b-50afa616288a" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:15:18.877: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-3267" for this suite. • [SLOW TEST:6.497 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3791,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:15:18.918: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-7fhb STEP: Creating a pod to test atomic-volume-subpath May 28 22:15:19.753: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-7fhb" in namespace "subpath-9545" to be "success or failure" May 28 22:15:19.787: INFO: Pod "pod-subpath-test-projected-7fhb": Phase="Pending", Reason="", readiness=false. Elapsed: 33.969082ms May 28 22:15:21.806: INFO: Pod "pod-subpath-test-projected-7fhb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.052452465s May 28 22:15:23.825: INFO: Pod "pod-subpath-test-projected-7fhb": Phase="Running", Reason="", readiness=true. Elapsed: 4.07175569s May 28 22:15:26.124: INFO: Pod "pod-subpath-test-projected-7fhb": Phase="Running", Reason="", readiness=true. Elapsed: 6.371194018s May 28 22:15:28.131: INFO: Pod "pod-subpath-test-projected-7fhb": Phase="Running", Reason="", readiness=true. Elapsed: 8.377706134s May 28 22:15:30.136: INFO: Pod "pod-subpath-test-projected-7fhb": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.382425204s May 28 22:15:32.139: INFO: Pod "pod-subpath-test-projected-7fhb": Phase="Running", Reason="", readiness=true. Elapsed: 12.386217645s May 28 22:15:34.144: INFO: Pod "pod-subpath-test-projected-7fhb": Phase="Running", Reason="", readiness=true. Elapsed: 14.390498429s May 28 22:15:36.151: INFO: Pod "pod-subpath-test-projected-7fhb": Phase="Running", Reason="", readiness=true. Elapsed: 16.397990146s May 28 22:15:38.155: INFO: Pod "pod-subpath-test-projected-7fhb": Phase="Running", Reason="", readiness=true. Elapsed: 18.40214612s May 28 22:15:40.159: INFO: Pod "pod-subpath-test-projected-7fhb": Phase="Running", Reason="", readiness=true. Elapsed: 20.405989508s May 28 22:15:42.163: INFO: Pod "pod-subpath-test-projected-7fhb": Phase="Running", Reason="", readiness=true. Elapsed: 22.40940374s May 28 22:15:44.167: INFO: Pod "pod-subpath-test-projected-7fhb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.413664755s STEP: Saw pod success May 28 22:15:44.167: INFO: Pod "pod-subpath-test-projected-7fhb" satisfied condition "success or failure" May 28 22:15:44.170: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-7fhb container test-container-subpath-projected-7fhb: STEP: delete the pod May 28 22:15:44.194: INFO: Waiting for pod pod-subpath-test-projected-7fhb to disappear May 28 22:15:44.206: INFO: Pod pod-subpath-test-projected-7fhb no longer exists STEP: Deleting pod pod-subpath-test-projected-7fhb May 28 22:15:44.206: INFO: Deleting pod "pod-subpath-test-projected-7fhb" in namespace "subpath-9545" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:15:44.228: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9545" for this suite. 
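For readers reproducing the subpath run above by hand: the test builds its pod in Go, but the spec shape it exercises (an atomic-writer projected volume consumed through subPath) is roughly the following minimal sketch. Names, image, and data are illustrative, not what the framework generates:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: subpath-demo-data        # illustrative name
data:
  file.txt: "hello"
---
apiVersion: v1
kind: Pod
metadata:
  name: subpath-demo             # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: reader
    image: busybox
    command: ["cat", "/mnt/file.txt"]
    volumeMounts:
    - name: data
      mountPath: /mnt/file.txt
      subPath: file.txt          # mount a single projected key through subPath
  volumes:
  - name: data
    projected:
      sources:
      - configMap:
          name: subpath-demo-data
EOF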
• [SLOW TEST:25.317 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":240,"skipped":3798,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:15:44.236: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 28 22:15:44.317: INFO: Waiting up to 5m0s for pod "pod-ca2f750f-d538-48ab-b15c-c3ca8a25fcff" in namespace "emptydir-4246" to be "success or failure" May 28 22:15:44.324: INFO: Pod "pod-ca2f750f-d538-48ab-b15c-c3ca8a25fcff": Phase="Pending", Reason="", readiness=false. Elapsed: 7.067458ms May 28 22:15:46.334: INFO: Pod "pod-ca2f750f-d538-48ab-b15c-c3ca8a25fcff": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01714877s May 28 22:15:48.339: INFO: Pod "pod-ca2f750f-d538-48ab-b15c-c3ca8a25fcff": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.021649568s STEP: Saw pod success May 28 22:15:48.339: INFO: Pod "pod-ca2f750f-d538-48ab-b15c-c3ca8a25fcff" satisfied condition "success or failure" May 28 22:15:48.342: INFO: Trying to get logs from node jerma-worker pod pod-ca2f750f-d538-48ab-b15c-c3ca8a25fcff container test-container: STEP: delete the pod May 28 22:15:48.388: INFO: Waiting for pod pod-ca2f750f-d538-48ab-b15c-c3ca8a25fcff to disappear May 28 22:15:48.391: INFO: Pod pod-ca2f750f-d538-48ab-b15c-c3ca8a25fcff no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:15:48.391: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4246" for this suite. 
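The (root,0644,tmpfs) case above boils down to a pod writing a 0644 file into a memory-backed emptyDir; a minimal hand-written sketch, with busybox standing in for the framework's test image:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo      # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test
    image: busybox
    command: ["sh", "-c", "echo data > /cache/f && chmod 0644 /cache/f && ls -l /cache/f"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory             # Memory medium = tmpfs, the case this test covers
EOF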
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":241,"skipped":3810,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:15:48.398: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 28 22:15:48.473: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7985' May 28 22:15:48.591: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 28 22:15:48.591: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 28 22:15:48.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-7985' May 28 22:15:48.731: INFO: stderr: "" May 28 22:15:48.731: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:15:48.731: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7985" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":242,"skipped":3810,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:15:48.750: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 28 22:16:00.916: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 28 22:16:00.932: INFO: Pod pod-with-poststart-http-hook still exists May 28 22:16:02.932: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 28 22:16:02.969: INFO: Pod pod-with-poststart-http-hook still exists May 28 22:16:04.932: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 28 22:16:04.975: INFO: Pod pod-with-poststart-http-hook still exists May 28 22:16:06.932: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 28 22:16:06.963: INFO: Pod pod-with-poststart-http-hook still exists May 28 22:16:08.932: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 28 22:16:08.935: INFO: Pod pod-with-poststart-http-hook still exists May 28 22:16:10.932: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 28 22:16:10.936: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:16:10.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-961" for this suite. 
• [SLOW TEST:22.195 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":243,"skipped":3816,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:16:10.946: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:16:11.071: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3160" for this suite. 
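The discovery walk in the test above can be replayed verbatim against any cluster; these are the same documents it fetches, and the last one should list customresourcedefinitions under its resources:

kubectl get --raw /apis
kubectl get --raw /apis/apiextensions.k8s.io
kubectl get --raw /apis/apiextensions.k8s.io/v1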
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":244,"skipped":3857,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:16:11.087: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-7936 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-7936 May 28 22:16:11.199: INFO: Found 0 stateful pods, waiting for 1 May 28 22:16:21.204: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 28 22:16:21.251: INFO: Deleting all statefulset in ns statefulset-7936 May 28 22:16:21.260: INFO: Scaling statefulset ss to 0 May 28 22:16:41.316: INFO: Waiting for statefulset status.replicas updated to 0 May 28 22:16:41.319: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:16:41.330: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-7936" for this suite. 
• [SLOW TEST:30.250 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":245,"skipped":3887,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:16:41.337: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-6df3293b-259b-4079-89f8-78cc1d9ad43f STEP: Creating a pod to test consume configMaps May 28 22:16:41.444: INFO: Waiting up to 5m0s for pod "pod-configmaps-3b528a8d-efa6-4957-bf72-53b3c4993e14" in namespace "configmap-208" to be "success or failure" May 28 22:16:41.475: INFO: Pod "pod-configmaps-3b528a8d-efa6-4957-bf72-53b3c4993e14": Phase="Pending", Reason="", readiness=false. Elapsed: 30.927382ms May 28 22:16:43.479: INFO: Pod "pod-configmaps-3b528a8d-efa6-4957-bf72-53b3c4993e14": Phase="Pending", Reason="", readiness=false. Elapsed: 2.034914972s May 28 22:16:45.483: INFO: Pod "pod-configmaps-3b528a8d-efa6-4957-bf72-53b3c4993e14": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.038797808s STEP: Saw pod success May 28 22:16:45.483: INFO: Pod "pod-configmaps-3b528a8d-efa6-4957-bf72-53b3c4993e14" satisfied condition "success or failure" May 28 22:16:45.486: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-3b528a8d-efa6-4957-bf72-53b3c4993e14 container configmap-volume-test: STEP: delete the pod May 28 22:16:45.624: INFO: Waiting for pod pod-configmaps-3b528a8d-efa6-4957-bf72-53b3c4993e14 to disappear May 28 22:16:45.637: INFO: Pod pod-configmaps-3b528a8d-efa6-4957-bf72-53b3c4993e14 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:16:45.637: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-208" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":246,"skipped":3889,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:16:45.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:16:56.904: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4411" for this suite. • [SLOW TEST:11.267 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":247,"skipped":3903,"failed":0} SSSSS ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:16:56.913: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1436 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1436;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1436 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1436;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1436.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1436.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1436.svc A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service.dns-1436.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1436.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1436.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1436.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1436.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1436.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1436.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1436.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1436.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1436.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 124.132.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.132.124_udp@PTR;check="$$(dig +tcp +noall +answer +search 124.132.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.132.124_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1436 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1436;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1436 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1436;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1436.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1436.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1436.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1436.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1436.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1436.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1436.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1436.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1436.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1436.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1436.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1436.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1436.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 124.132.106.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.106.132.124_udp@PTR;check="$$(dig +tcp +noall +answer +search 124.132.106.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.106.132.124_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 28 22:17:03.225: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:03.228: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:03.230: INFO: Unable to read wheezy_udp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:03.233: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:03.235: INFO: Unable to read wheezy_udp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:03.238: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:03.242: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:03.245: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:03.286: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:03.290: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:03.292: INFO: Unable to read jessie_udp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:03.295: INFO: Unable to read jessie_tcp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:03.298: INFO: Unable to read jessie_udp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server 
could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:03.300: INFO: Unable to read jessie_tcp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:03.303: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:03.306: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:03.345: INFO: Lookups using dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1436 wheezy_tcp@dns-test-service.dns-1436 wheezy_udp@dns-test-service.dns-1436.svc wheezy_tcp@dns-test-service.dns-1436.svc wheezy_udp@_http._tcp.dns-test-service.dns-1436.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1436.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1436 jessie_tcp@dns-test-service.dns-1436 jessie_udp@dns-test-service.dns-1436.svc jessie_tcp@dns-test-service.dns-1436.svc jessie_udp@_http._tcp.dns-test-service.dns-1436.svc jessie_tcp@_http._tcp.dns-test-service.dns-1436.svc] May 28 22:17:08.349: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:08.353: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:08.356: INFO: Unable to read wheezy_udp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:08.359: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:08.362: INFO: Unable to read wheezy_udp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:08.365: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:08.450: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:08.454: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods 
dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:08.467: INFO: Unable to read jessie_udp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:08.485: INFO: Unable to read jessie_tcp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:08.489: INFO: Unable to read jessie_udp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:08.492: INFO: Unable to read jessie_tcp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:08.512: INFO: Lookups using dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1436 wheezy_tcp@dns-test-service.dns-1436 wheezy_udp@dns-test-service.dns-1436.svc wheezy_tcp@dns-test-service.dns-1436.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1436 jessie_tcp@dns-test-service.dns-1436 jessie_udp@dns-test-service.dns-1436.svc jessie_tcp@dns-test-service.dns-1436.svc] May 28 22:17:13.350: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:13.354: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:13.357: INFO: Unable to read wheezy_udp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:13.360: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:13.364: INFO: Unable to read wheezy_udp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:13.367: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:13.396: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:13.400: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods 
dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:13.403: INFO: Unable to read jessie_udp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:13.407: INFO: Unable to read jessie_tcp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:13.410: INFO: Unable to read jessie_udp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:13.412: INFO: Unable to read jessie_tcp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:13.480: INFO: Lookups using dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1436 wheezy_tcp@dns-test-service.dns-1436 wheezy_udp@dns-test-service.dns-1436.svc wheezy_tcp@dns-test-service.dns-1436.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1436 jessie_tcp@dns-test-service.dns-1436 jessie_udp@dns-test-service.dns-1436.svc jessie_tcp@dns-test-service.dns-1436.svc] May 28 22:17:18.349: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:18.354: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:18.357: INFO: Unable to read wheezy_udp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:18.360: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:18.363: INFO: Unable to read wheezy_udp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:18.366: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:18.392: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:18.395: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods 
dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:18.398: INFO: Unable to read jessie_udp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:18.402: INFO: Unable to read jessie_tcp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:18.405: INFO: Unable to read jessie_udp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:18.408: INFO: Unable to read jessie_tcp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:18.457: INFO: Lookups using dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1436 wheezy_tcp@dns-test-service.dns-1436 wheezy_udp@dns-test-service.dns-1436.svc wheezy_tcp@dns-test-service.dns-1436.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1436 jessie_tcp@dns-test-service.dns-1436 jessie_udp@dns-test-service.dns-1436.svc jessie_tcp@dns-test-service.dns-1436.svc] May 28 22:17:23.366: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:23.370: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:23.373: INFO: Unable to read wheezy_udp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:23.376: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:23.379: INFO: Unable to read wheezy_udp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:23.382: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:23.411: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:23.415: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods 
dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:23.418: INFO: Unable to read jessie_udp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:23.421: INFO: Unable to read jessie_tcp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:23.424: INFO: Unable to read jessie_udp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:23.428: INFO: Unable to read jessie_tcp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:23.498: INFO: Lookups using dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1436 wheezy_tcp@dns-test-service.dns-1436 wheezy_udp@dns-test-service.dns-1436.svc wheezy_tcp@dns-test-service.dns-1436.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1436 jessie_tcp@dns-test-service.dns-1436 jessie_udp@dns-test-service.dns-1436.svc jessie_tcp@dns-test-service.dns-1436.svc] May 28 22:17:28.366: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:28.369: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:28.372: INFO: Unable to read wheezy_udp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:28.375: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:28.377: INFO: Unable to read wheezy_udp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:28.379: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:28.403: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:28.406: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods 
dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:28.408: INFO: Unable to read jessie_udp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:28.411: INFO: Unable to read jessie_tcp@dns-test-service.dns-1436 from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:28.414: INFO: Unable to read jessie_udp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:28.416: INFO: Unable to read jessie_tcp@dns-test-service.dns-1436.svc from pod dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79: the server could not find the requested resource (get pods dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79) May 28 22:17:28.440: INFO: Lookups using dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79 failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1436 wheezy_tcp@dns-test-service.dns-1436 wheezy_udp@dns-test-service.dns-1436.svc wheezy_tcp@dns-test-service.dns-1436.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1436 jessie_tcp@dns-test-service.dns-1436 jessie_udp@dns-test-service.dns-1436.svc jessie_tcp@dns-test-service.dns-1436.svc] May 28 22:17:33.465: INFO: DNS probes using dns-1436/dns-test-eb1ab66a-0aa2-4d45-8896-93258a604e79 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:17:34.147: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-1436" for this suite. 
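The repeated "Unable to read ..." lines above are the expected polling behavior: the probe pod writes each lookup result to a file named after the query, and the framework keeps re-reading those files until every UDP and TCP lookup of the partially qualified names (dns-test-service, dns-test-service.dns-1436, dns-test-service.dns-1436.svc) succeeds, as it finally does at 22:17:33. Partial names resolve because the pod's resolv.conf search path appends the namespace and cluster suffixes. A minimal sketch of the kind of Service being resolved (the selector label here is an illustrative assumption, not the exact object from this run):

apiVersion: v1
kind: Service
metadata:
  name: dns-test-service   # the name the wheezy/jessie probes look up
  namespace: dns-1436
spec:
  selector:
    dns-test: "true"       # hypothetical label for the backend pods
  ports:
  - name: http
    protocol: TCP
    port: 80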
• [SLOW TEST:37.357 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":248,"skipped":3908,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:17:34.271: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-2b4149f8-56d6-40af-9994-e59281e6c3aa STEP: Creating a pod to test consume secrets May 28 22:17:34.391: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-54278665-f641-4443-9451-a9e0dcc4605e" in namespace "projected-8549" to be "success or failure" May 28 22:17:34.401: INFO: Pod "pod-projected-secrets-54278665-f641-4443-9451-a9e0dcc4605e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.540906ms May 28 22:17:36.413: INFO: Pod "pod-projected-secrets-54278665-f641-4443-9451-a9e0dcc4605e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.021737245s May 28 22:17:38.438: INFO: Pod "pod-projected-secrets-54278665-f641-4443-9451-a9e0dcc4605e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046495935s STEP: Saw pod success May 28 22:17:38.438: INFO: Pod "pod-projected-secrets-54278665-f641-4443-9451-a9e0dcc4605e" satisfied condition "success or failure" May 28 22:17:38.440: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-54278665-f641-4443-9451-a9e0dcc4605e container projected-secret-volume-test: STEP: delete the pod May 28 22:17:38.667: INFO: Waiting for pod pod-projected-secrets-54278665-f641-4443-9451-a9e0dcc4605e to disappear May 28 22:17:38.749: INFO: Pod pod-projected-secrets-54278665-f641-4443-9451-a9e0dcc4605e no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:17:38.749: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8549" for this suite. 
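A pod of the kind this projected-secret test creates mounts the secret through a projected volume and remaps one key to a new path with an explicit per-item file mode; roughly as follows (secret name, key, path, and image are illustrative assumptions):

apiVersion: v1
kind: Secret
metadata:
  name: projected-secret-example        # hypothetical; the real name carries a random suffix
stringData:
  data-1: value-1
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets-example   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: projected-secret-volume-test
    image: busybox                      # the e2e test uses its own mounttest image
    command: ["sh", "-c", "stat -c %a /etc/projected-secret-volume/new-path-data-1"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: projected-secret-example
          items:
          - key: data-1
            path: new-path-data-1
            mode: 0400                  # the "Item Mode" the test verifies

The "success or failure" condition in the log is simply the pod running to completion (phase Succeeded) with the expected file mode and contents in its output.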
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":249,"skipped":3945,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:17:38.758: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-1f24e4a1-cc2b-46ff-96d2-73284edd0ea5 STEP: Creating a pod to test consume configMaps May 28 22:17:39.027: INFO: Waiting up to 5m0s for pod "pod-configmaps-faf54454-bc0e-4857-9500-40b606d4532d" in namespace "configmap-250" to be "success or failure" May 28 22:17:39.067: INFO: Pod "pod-configmaps-faf54454-bc0e-4857-9500-40b606d4532d": Phase="Pending", Reason="", readiness=false. Elapsed: 40.272243ms May 28 22:17:41.071: INFO: Pod "pod-configmaps-faf54454-bc0e-4857-9500-40b606d4532d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044753096s May 28 22:17:43.075: INFO: Pod "pod-configmaps-faf54454-bc0e-4857-9500-40b606d4532d": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.048767023s STEP: Saw pod success May 28 22:17:43.076: INFO: Pod "pod-configmaps-faf54454-bc0e-4857-9500-40b606d4532d" satisfied condition "success or failure" May 28 22:17:43.078: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-faf54454-bc0e-4857-9500-40b606d4532d container configmap-volume-test: STEP: delete the pod May 28 22:17:43.125: INFO: Waiting for pod pod-configmaps-faf54454-bc0e-4857-9500-40b606d4532d to disappear May 28 22:17:43.132: INFO: Pod pod-configmaps-faf54454-bc0e-4857-9500-40b606d4532d no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:17:43.132: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-250" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":3954,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:17:43.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 28 22:17:43.254: INFO: Waiting up to 5m0s for pod "downwardapi-volume-8594143d-57d4-489a-b384-4a699c366d74" in namespace "projected-8775" to be "success or failure" May 28 22:17:43.433: INFO: Pod "downwardapi-volume-8594143d-57d4-489a-b384-4a699c366d74": Phase="Pending", Reason="", readiness=false. Elapsed: 179.506184ms May 28 22:17:45.437: INFO: Pod "downwardapi-volume-8594143d-57d4-489a-b384-4a699c366d74": Phase="Pending", Reason="", readiness=false. Elapsed: 2.183408458s May 28 22:17:47.440: INFO: Pod "downwardapi-volume-8594143d-57d4-489a-b384-4a699c366d74": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.18624013s STEP: Saw pod success May 28 22:17:47.440: INFO: Pod "downwardapi-volume-8594143d-57d4-489a-b384-4a699c366d74" satisfied condition "success or failure" May 28 22:17:47.443: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-8594143d-57d4-489a-b384-4a699c366d74 container client-container: STEP: delete the pod May 28 22:17:47.505: INFO: Waiting for pod downwardapi-volume-8594143d-57d4-489a-b384-4a699c366d74 to disappear May 28 22:17:47.514: INFO: Pod downwardapi-volume-8594143d-57d4-489a-b384-4a699c366d74 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:17:47.514: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8775" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":251,"skipped":3956,"failed":0} S ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:17:47.519: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-1845 STEP: creating a selector STEP: Creating the service pods in kubernetes May 28 22:17:47.598: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 28 22:18:13.715: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.134:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1845 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 22:18:13.715: INFO: >>> kubeConfig: /root/.kube/config I0528 22:18:13.760864 6 log.go:172] (0xc0009dc580) (0xc000c18500) Create stream I0528 22:18:13.760916 6 log.go:172] (0xc0009dc580) (0xc000c18500) Stream added, broadcasting: 1 I0528 22:18:13.763199 6 log.go:172] (0xc0009dc580) Reply frame received for 1 I0528 22:18:13.763236 6 log.go:172] (0xc0009dc580) (0xc000c186e0) Create stream I0528 22:18:13.763243 6 log.go:172] (0xc0009dc580) (0xc000c186e0) Stream added, broadcasting: 3 I0528 22:18:13.764218 6 log.go:172] (0xc0009dc580) Reply frame received for 3 I0528 22:18:13.764274 6 log.go:172] (0xc0009dc580) (0xc000c18dc0) Create stream I0528 22:18:13.764300 6 log.go:172] (0xc0009dc580) (0xc000c18dc0) Stream added, broadcasting: 5 I0528 22:18:13.765202 6 log.go:172] (0xc0009dc580) Reply frame received for 5 I0528 22:18:13.873131 6 log.go:172] (0xc0009dc580) Data frame received for 3 I0528 22:18:13.873177 6 log.go:172] (0xc000c186e0) (3) Data frame handling I0528 22:18:13.873202 6 log.go:172] (0xc000c186e0) (3) Data frame sent I0528 22:18:13.873290 6 log.go:172] (0xc0009dc580) Data frame received for 5 I0528 22:18:13.873321 6 log.go:172] (0xc000c18dc0) (5) Data frame handling I0528 22:18:13.874028 6 log.go:172] (0xc0009dc580) Data frame received for 3 I0528 22:18:13.874065 6 log.go:172] (0xc000c186e0) (3) Data frame handling I0528 22:18:13.876114 6 log.go:172] (0xc0009dc580) Data frame received for 1 I0528 22:18:13.876144 6 log.go:172] (0xc000c18500) (1) Data frame handling I0528 22:18:13.876161 6 log.go:172] (0xc000c18500) (1) Data frame sent I0528 22:18:13.876179 6 log.go:172] (0xc0009dc580) (0xc000c18500) Stream removed, broadcasting: 1 I0528 22:18:13.876201 6 log.go:172] (0xc0009dc580) Go away received I0528 22:18:13.876385 6 log.go:172] (0xc0009dc580) (0xc000c18500) Stream removed, broadcasting: 1 I0528 22:18:13.876425 6 log.go:172] (0xc0009dc580) (0xc000c186e0) Stream removed, broadcasting: 3 I0528 
22:18:13.876451 6 log.go:172] (0xc0009dc580) (0xc000c18dc0) Stream removed, broadcasting: 5 May 28 22:18:13.876: INFO: Found all expected endpoints: [netserver-0] May 28 22:18:13.880: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.163:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-1845 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 22:18:13.880: INFO: >>> kubeConfig: /root/.kube/config I0528 22:18:13.916622 6 log.go:172] (0xc0022082c0) (0xc000abeb40) Create stream I0528 22:18:13.916651 6 log.go:172] (0xc0022082c0) (0xc000abeb40) Stream added, broadcasting: 1 I0528 22:18:13.918837 6 log.go:172] (0xc0022082c0) Reply frame received for 1 I0528 22:18:13.918893 6 log.go:172] (0xc0022082c0) (0xc000c18f00) Create stream I0528 22:18:13.918914 6 log.go:172] (0xc0022082c0) (0xc000c18f00) Stream added, broadcasting: 3 I0528 22:18:13.919897 6 log.go:172] (0xc0022082c0) Reply frame received for 3 I0528 22:18:13.919929 6 log.go:172] (0xc0022082c0) (0xc000c18fa0) Create stream I0528 22:18:13.919938 6 log.go:172] (0xc0022082c0) (0xc000c18fa0) Stream added, broadcasting: 5 I0528 22:18:13.920643 6 log.go:172] (0xc0022082c0) Reply frame received for 5 I0528 22:18:14.003586 6 log.go:172] (0xc0022082c0) Data frame received for 3 I0528 22:18:14.003610 6 log.go:172] (0xc000c18f00) (3) Data frame handling I0528 22:18:14.003624 6 log.go:172] (0xc000c18f00) (3) Data frame sent I0528 22:18:14.003629 6 log.go:172] (0xc0022082c0) Data frame received for 3 I0528 22:18:14.003635 6 log.go:172] (0xc000c18f00) (3) Data frame handling I0528 22:18:14.003673 6 log.go:172] (0xc0022082c0) Data frame received for 5 I0528 22:18:14.003686 6 log.go:172] (0xc000c18fa0) (5) Data frame handling I0528 22:18:14.006364 6 log.go:172] (0xc0022082c0) Data frame received for 1 I0528 22:18:14.006415 6 log.go:172] (0xc000abeb40) (1) Data frame handling I0528 22:18:14.006452 6 log.go:172] (0xc000abeb40) (1) Data frame sent I0528 22:18:14.006482 6 log.go:172] (0xc0022082c0) (0xc000abeb40) Stream removed, broadcasting: 1 I0528 22:18:14.006512 6 log.go:172] (0xc0022082c0) Go away received I0528 22:18:14.006586 6 log.go:172] (0xc0022082c0) (0xc000abeb40) Stream removed, broadcasting: 1 I0528 22:18:14.006604 6 log.go:172] (0xc0022082c0) (0xc000c18f00) Stream removed, broadcasting: 3 I0528 22:18:14.006639 6 log.go:172] (0xc0022082c0) (0xc000c18fa0) Stream removed, broadcasting: 5 May 28 22:18:14.006: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:18:14.006: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-1845" for this suite. 
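The ExecWithOptions blocks above are the framework curling each netserver pod's /hostName endpoint from a host-network test pod, which is what "node-pod communication" means here: traffic originates in the node's own network namespace and must reach pod IPs (10.244.x.x). The pods involved look roughly like this (names are illustrative; netexec and pause are agnhost subcommands):

apiVersion: v1
kind: Pod
metadata:
  name: netserver-0                  # one such pod per node in the real test
spec:
  containers:
  - name: webserver
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["netexec", "--http-port=8080"]   # serves /hostName on 8080
    ports:
    - containerPort: 8080
---
apiVersion: v1
kind: Pod
metadata:
  name: host-test-container-pod
spec:
  hostNetwork: true                  # the curls run from the node's network namespace
  containers:
  - name: agnhost
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
    args: ["pause"]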
• [SLOW TEST:26.494 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":252,"skipped":3957,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:18:14.014: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:18:18.104: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-6390" for this suite. 
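This test emits almost no STEP lines because the assertion is simple: a busybox container started with a read-only root filesystem must fail any write to that filesystem. A minimal equivalent (name and command are illustrative guesses at the shape of the pod, not the exact object):

apiVersion: v1
kind: Pod
metadata:
  name: busybox-readonly-example   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo test > /file"]   # expected to fail on a read-only rootfs
    securityContext:
      readOnlyRootFilesystem: true

Writes to mounted volumes (such as emptyDir) remain possible; only the container's root filesystem is locked down.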
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":253,"skipped":3963,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:18:18.111: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7255 STEP: creating a selector STEP: Creating the service pods in kubernetes May 28 22:18:18.175: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 28 22:18:48.355: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.137:8080/dial?request=hostname&protocol=http&host=10.244.1.136&port=8080&tries=1'] Namespace:pod-network-test-7255 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 22:18:48.355: INFO: >>> kubeConfig: /root/.kube/config I0528 22:18:48.390235 6 log.go:172] (0xc000976580) (0xc000dc88c0) Create stream I0528 22:18:48.390276 6 log.go:172] (0xc000976580) (0xc000dc88c0) Stream added, broadcasting: 1 I0528 22:18:48.392282 6 log.go:172] (0xc000976580) Reply frame received for 1 I0528 22:18:48.392324 6 log.go:172] (0xc000976580) (0xc000c19040) Create stream I0528 22:18:48.392338 6 log.go:172] (0xc000976580) (0xc000c19040) Stream added, broadcasting: 3 I0528 22:18:48.393306 6 log.go:172] (0xc000976580) Reply frame received for 3 I0528 22:18:48.393346 6 log.go:172] (0xc000976580) (0xc000dc8960) Create stream I0528 22:18:48.393361 6 log.go:172] (0xc000976580) (0xc000dc8960) Stream added, broadcasting: 5 I0528 22:18:48.394357 6 log.go:172] (0xc000976580) Reply frame received for 5 I0528 22:18:48.474933 6 log.go:172] (0xc000976580) Data frame received for 3 I0528 22:18:48.474980 6 log.go:172] (0xc000c19040) (3) Data frame handling I0528 22:18:48.475028 6 log.go:172] (0xc000c19040) (3) Data frame sent I0528 22:18:48.475776 6 log.go:172] (0xc000976580) Data frame received for 5 I0528 22:18:48.475818 6 log.go:172] (0xc000dc8960) (5) Data frame handling I0528 22:18:48.476381 6 log.go:172] (0xc000976580) Data frame received for 3 I0528 22:18:48.476414 6 log.go:172] (0xc000c19040) (3) Data frame handling I0528 22:18:48.478019 6 log.go:172] (0xc000976580) Data frame received for 1 I0528 22:18:48.478055 6 log.go:172] (0xc000dc88c0) (1) Data frame handling I0528 22:18:48.478086 6 log.go:172] (0xc000dc88c0) (1) Data frame sent I0528 22:18:48.478106 6 log.go:172] (0xc000976580) (0xc000dc88c0) Stream removed, broadcasting: 1 I0528 22:18:48.478135 6 log.go:172] (0xc000976580) Go away received I0528 22:18:48.478247 6 log.go:172] (0xc000976580) (0xc000dc88c0) Stream removed, broadcasting: 1 I0528 22:18:48.478329 6 log.go:172] (0xc000976580) 
(0xc000c19040) Stream removed, broadcasting: 3 I0528 22:18:48.478386 6 log.go:172] (0xc000976580) (0xc000dc8960) Stream removed, broadcasting: 5 May 28 22:18:48.478: INFO: Waiting for responses: map[] May 28 22:18:48.481: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.1.137:8080/dial?request=hostname&protocol=http&host=10.244.2.165&port=8080&tries=1'] Namespace:pod-network-test-7255 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 28 22:18:48.481: INFO: >>> kubeConfig: /root/.kube/config I0528 22:18:48.512799 6 log.go:172] (0xc000976b00) (0xc000dc8be0) Create stream I0528 22:18:48.512826 6 log.go:172] (0xc000976b00) (0xc000dc8be0) Stream added, broadcasting: 1 I0528 22:18:48.515187 6 log.go:172] (0xc000976b00) Reply frame received for 1 I0528 22:18:48.515227 6 log.go:172] (0xc000976b00) (0xc000bc7220) Create stream I0528 22:18:48.515240 6 log.go:172] (0xc000976b00) (0xc000bc7220) Stream added, broadcasting: 3 I0528 22:18:48.516320 6 log.go:172] (0xc000976b00) Reply frame received for 3 I0528 22:18:48.516376 6 log.go:172] (0xc000976b00) (0xc000f706e0) Create stream I0528 22:18:48.516393 6 log.go:172] (0xc000976b00) (0xc000f706e0) Stream added, broadcasting: 5 I0528 22:18:48.517686 6 log.go:172] (0xc000976b00) Reply frame received for 5 I0528 22:18:48.590924 6 log.go:172] (0xc000976b00) Data frame received for 3 I0528 22:18:48.590960 6 log.go:172] (0xc000bc7220) (3) Data frame handling I0528 22:18:48.590985 6 log.go:172] (0xc000bc7220) (3) Data frame sent I0528 22:18:48.591230 6 log.go:172] (0xc000976b00) Data frame received for 5 I0528 22:18:48.591252 6 log.go:172] (0xc000f706e0) (5) Data frame handling I0528 22:18:48.591404 6 log.go:172] (0xc000976b00) Data frame received for 3 I0528 22:18:48.591420 6 log.go:172] (0xc000bc7220) (3) Data frame handling I0528 22:18:48.592887 6 log.go:172] (0xc000976b00) Data frame received for 1 I0528 22:18:48.592950 6 log.go:172] (0xc000dc8be0) (1) Data frame handling I0528 22:18:48.592986 6 log.go:172] (0xc000dc8be0) (1) Data frame sent I0528 22:18:48.593005 6 log.go:172] (0xc000976b00) (0xc000dc8be0) Stream removed, broadcasting: 1 I0528 22:18:48.593021 6 log.go:172] (0xc000976b00) Go away received I0528 22:18:48.593241 6 log.go:172] (0xc000976b00) (0xc000dc8be0) Stream removed, broadcasting: 1 I0528 22:18:48.593271 6 log.go:172] (0xc000976b00) (0xc000bc7220) Stream removed, broadcasting: 3 I0528 22:18:48.593279 6 log.go:172] (0xc000976b00) (0xc000f706e0) Stream removed, broadcasting: 5 May 28 22:18:48.593: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:18:48.593: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7255" for this suite. 
• [SLOW TEST:30.489 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":254,"skipped":3976,"failed":0} S ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:18:48.601: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 28 22:18:48.688: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f581a0d3-a5f8-4bba-b608-eaaec32e0f2a" in namespace "downward-api-1584" to be "success or failure" May 28 22:18:48.715: INFO: Pod "downwardapi-volume-f581a0d3-a5f8-4bba-b608-eaaec32e0f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 27.582518ms May 28 22:18:50.720: INFO: Pod "downwardapi-volume-f581a0d3-a5f8-4bba-b608-eaaec32e0f2a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.031940061s May 28 22:18:52.724: INFO: Pod "downwardapi-volume-f581a0d3-a5f8-4bba-b608-eaaec32e0f2a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.03603391s STEP: Saw pod success May 28 22:18:52.724: INFO: Pod "downwardapi-volume-f581a0d3-a5f8-4bba-b608-eaaec32e0f2a" satisfied condition "success or failure" May 28 22:18:52.727: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f581a0d3-a5f8-4bba-b608-eaaec32e0f2a container client-container: STEP: delete the pod May 28 22:18:52.772: INFO: Waiting for pod downwardapi-volume-f581a0d3-a5f8-4bba-b608-eaaec32e0f2a to disappear May 28 22:18:52.780: INFO: Pod downwardapi-volume-f581a0d3-a5f8-4bba-b608-eaaec32e0f2a no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:18:52.780: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-1584" for this suite. 
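The downward API volume here exposes the container's own CPU request as a file, and the divisor controls the unit: with a divisor of 1m, a request of 250m is rendered as the string 250. A sketch under those assumptions (request value, paths, and names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-cpu-request-example   # hypothetical
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]   # prints 250 with the values below
    resources:
      requests:
        cpu: 250m
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: cpu_request
        resourceFieldRef:
          containerName: client-container
          resource: requests.cpu
          divisor: 1m

The later "node allocatable ... as default ... limit if the limit is not set" cases in this run exercise the same mechanism with limits.cpu/limits.memory on a container that declares no limits, in which case the file falls back to the node's allocatable capacity.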
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":3977,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:18:52.788: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 28 22:18:52.878: INFO: Waiting up to 5m0s for pod "pod-44df7ff0-9e29-40cd-835a-e15d4132b37a" in namespace "emptydir-7871" to be "success or failure" May 28 22:18:52.882: INFO: Pod "pod-44df7ff0-9e29-40cd-835a-e15d4132b37a": Phase="Pending", Reason="", readiness=false. Elapsed: 3.820878ms May 28 22:18:54.960: INFO: Pod "pod-44df7ff0-9e29-40cd-835a-e15d4132b37a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081947237s May 28 22:18:56.963: INFO: Pod "pod-44df7ff0-9e29-40cd-835a-e15d4132b37a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.084866723s May 28 22:18:58.967: INFO: Pod "pod-44df7ff0-9e29-40cd-835a-e15d4132b37a": Phase="Running", Reason="", readiness=true. Elapsed: 6.088790957s May 28 22:19:00.971: INFO: Pod "pod-44df7ff0-9e29-40cd-835a-e15d4132b37a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 8.092727656s STEP: Saw pod success May 28 22:19:00.971: INFO: Pod "pod-44df7ff0-9e29-40cd-835a-e15d4132b37a" satisfied condition "success or failure" May 28 22:19:00.974: INFO: Trying to get logs from node jerma-worker2 pod pod-44df7ff0-9e29-40cd-835a-e15d4132b37a container test-container: STEP: delete the pod May 28 22:19:01.010: INFO: Waiting for pod pod-44df7ff0-9e29-40cd-835a-e15d4132b37a to disappear May 28 22:19:01.020: INFO: Pod pod-44df7ff0-9e29-40cd-835a-e15d4132b37a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:19:01.020: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7871" for this suite. 
• [SLOW TEST:8.239 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":256,"skipped":3997,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:19:01.027: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components
May 28 22:19:01.095: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend
May 28 22:19:01.095: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2661' May 28 22:19:01.479: INFO: stderr: "" May 28 22:19:01.479: INFO: stdout: "service/agnhost-slave created\n"
May 28 22:19:01.479: INFO: apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend
May 28 22:19:01.479: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2661' May 28 22:19:01.819: INFO: stderr: "" May 28 22:19:01.819: INFO: stdout: "service/agnhost-master created\n"
May 28 22:19:01.820: INFO: apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend
May 28 22:19:01.820: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2661' May 28 22:19:02.140: INFO: stderr: "" May 28 22:19:02.140: INFO: stdout: "service/frontend created\n"
May 28 22:19:02.141: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80
May 28 22:19:02.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2661' May 28 22:19:02.353: INFO: stderr: "" May 28 22:19:02.353: INFO: stdout: "deployment.apps/frontend created\n"
May 28 22:19:02.353: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 28 22:19:02.353: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2661' May 28 22:19:02.638: INFO: stderr: "" May 28 22:19:02.638: INFO: stdout: "deployment.apps/agnhost-master created\n"
May 28 22:19:02.638: INFO: apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
May 28 22:19:02.638: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-2661' May 28 22:19:02.939: INFO: stderr: "" May 28 22:19:02.939: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 28 22:19:02.939: INFO: Waiting for all frontend pods to be Running. May 28 22:19:12.990: INFO: Waiting for frontend to serve content. May 28 22:19:13.001: INFO: Trying to add a new entry to the guestbook. May 28 22:19:13.013: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 28 22:19:13.020: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2661' May 28 22:19:13.197: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 28 22:19:13.197: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 28 22:19:13.198: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2661' May 28 22:19:13.454: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" May 28 22:19:13.454: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 28 22:19:13.454: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2661' May 28 22:19:13.608: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 28 22:19:13.608: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 28 22:19:13.608: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2661' May 28 22:19:13.735: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 28 22:19:13.735: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 28 22:19:13.735: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2661' May 28 22:19:13.847: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 28 22:19:13.847: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 28 22:19:13.847: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-2661' May 28 22:19:13.962: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 28 22:19:13.962: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:19:13.962: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2661" for this suite. 
• [SLOW TEST:12.944 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":257,"skipped":4021,"failed":0} [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:19:13.971: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 28 22:19:14.105: INFO: Waiting up to 5m0s for pod "downwardapi-volume-60c59960-42b8-4b1f-be37-75f12c156088" in namespace "downward-api-7746" to be "success or failure" May 28 22:19:14.124: INFO: Pod "downwardapi-volume-60c59960-42b8-4b1f-be37-75f12c156088": Phase="Pending", Reason="", readiness=false. Elapsed: 18.748362ms May 28 22:19:16.290: INFO: Pod "downwardapi-volume-60c59960-42b8-4b1f-be37-75f12c156088": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184621123s May 28 22:19:18.328: INFO: Pod "downwardapi-volume-60c59960-42b8-4b1f-be37-75f12c156088": Phase="Running", Reason="", readiness=true. Elapsed: 4.223389638s May 28 22:19:20.802: INFO: Pod "downwardapi-volume-60c59960-42b8-4b1f-be37-75f12c156088": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.697250859s STEP: Saw pod success May 28 22:19:20.802: INFO: Pod "downwardapi-volume-60c59960-42b8-4b1f-be37-75f12c156088" satisfied condition "success or failure" May 28 22:19:20.850: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-60c59960-42b8-4b1f-be37-75f12c156088 container client-container: STEP: delete the pod May 28 22:19:21.334: INFO: Waiting for pod downwardapi-volume-60c59960-42b8-4b1f-be37-75f12c156088 to disappear May 28 22:19:21.346: INFO: Pod downwardapi-volume-60c59960-42b8-4b1f-be37-75f12c156088 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:19:21.346: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7746" for this suite. 
• [SLOW TEST:7.395 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":258,"skipped":4021,"failed":0} [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:19:21.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 22:19:21.528: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 28 22:19:22.681: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:19:23.961: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-909" for this suite. 
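The quota test pins a ReplicationController against a hard pod limit and watches the controller surface the failure as a status condition instead of silently retrying. Objects of roughly this shape reproduce it (the image is an illustrative stand-in):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: condition-test
spec:
  hard:
    pods: "2"                   # "allows only two pods to run in the current namespace"
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: condition-test
spec:
  replicas: 3                   # asks for more than the quota allows
  selector:
    name: condition-test
  template:
    metadata:
      labels:
        name: condition-test
    spec:
      containers:
      - name: nginx
        image: nginx            # hypothetical; any runnable image works

With replicas: 3 the RC reports a ReplicaFailure condition citing the exceeded quota; scaling it down to 2, as the test does, clears the condition.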
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":259,"skipped":4021,"failed":0} SS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:19:23.976: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 28 22:19:24.667: INFO: Waiting up to 5m0s for pod "downwardapi-volume-d8f7ef1a-47a8-41bb-a360-46d1e70cd3bf" in namespace "projected-1520" to be "success or failure" May 28 22:19:24.690: INFO: Pod "downwardapi-volume-d8f7ef1a-47a8-41bb-a360-46d1e70cd3bf": Phase="Pending", Reason="", readiness=false. Elapsed: 22.956329ms May 28 22:19:26.706: INFO: Pod "downwardapi-volume-d8f7ef1a-47a8-41bb-a360-46d1e70cd3bf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.039766613s May 28 22:19:28.711: INFO: Pod "downwardapi-volume-d8f7ef1a-47a8-41bb-a360-46d1e70cd3bf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044242731s STEP: Saw pod success May 28 22:19:28.711: INFO: Pod "downwardapi-volume-d8f7ef1a-47a8-41bb-a360-46d1e70cd3bf" satisfied condition "success or failure" May 28 22:19:28.714: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-d8f7ef1a-47a8-41bb-a360-46d1e70cd3bf container client-container: STEP: delete the pod May 28 22:19:28.758: INFO: Waiting for pod downwardapi-volume-d8f7ef1a-47a8-41bb-a360-46d1e70cd3bf to disappear May 28 22:19:28.765: INFO: Pod downwardapi-volume-d8f7ef1a-47a8-41bb-a360-46d1e70cd3bf no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:19:28.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1520" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":260,"skipped":4023,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:19:28.773: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 28 22:19:28.833: INFO: Waiting up to 5m0s for pod "downwardapi-volume-823b5cbb-ce45-4367-b036-06b88262c515" in namespace "projected-4334" to be "success or failure" May 28 22:19:28.846: INFO: Pod "downwardapi-volume-823b5cbb-ce45-4367-b036-06b88262c515": Phase="Pending", Reason="", readiness=false. Elapsed: 12.821452ms May 28 22:19:31.954: INFO: Pod "downwardapi-volume-823b5cbb-ce45-4367-b036-06b88262c515": Phase="Pending", Reason="", readiness=false. Elapsed: 3.121046619s May 28 22:19:34.033: INFO: Pod "downwardapi-volume-823b5cbb-ce45-4367-b036-06b88262c515": Phase="Pending", Reason="", readiness=false. Elapsed: 5.199905975s May 28 22:19:36.037: INFO: Pod "downwardapi-volume-823b5cbb-ce45-4367-b036-06b88262c515": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.204066931s STEP: Saw pod success May 28 22:19:36.037: INFO: Pod "downwardapi-volume-823b5cbb-ce45-4367-b036-06b88262c515" satisfied condition "success or failure" May 28 22:19:36.040: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-823b5cbb-ce45-4367-b036-06b88262c515 container client-container: STEP: delete the pod May 28 22:19:36.060: INFO: Waiting for pod downwardapi-volume-823b5cbb-ce45-4367-b036-06b88262c515 to disappear May 28 22:19:36.065: INFO: Pod downwardapi-volume-823b5cbb-ce45-4367-b036-06b88262c515 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:19:36.065: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4334" for this suite. 
• [SLOW TEST:7.299 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4053,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:19:36.073: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-9bfc0f87-d002-4929-984b-e4a3465d1242 STEP: Creating a pod to test consume secrets May 28 22:19:36.219: INFO: Waiting up to 5m0s for pod "pod-secrets-48cd3a5c-67a7-435a-b7a8-cc0bb60f05de" in namespace "secrets-6519" to be "success or failure" May 28 22:19:36.234: INFO: Pod "pod-secrets-48cd3a5c-67a7-435a-b7a8-cc0bb60f05de": Phase="Pending", Reason="", readiness=false. Elapsed: 14.892437ms May 28 22:19:38.238: INFO: Pod "pod-secrets-48cd3a5c-67a7-435a-b7a8-cc0bb60f05de": Phase="Pending", Reason="", readiness=false. Elapsed: 2.019062583s May 28 22:19:40.242: INFO: Pod "pod-secrets-48cd3a5c-67a7-435a-b7a8-cc0bb60f05de": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.023072596s STEP: Saw pod success May 28 22:19:40.242: INFO: Pod "pod-secrets-48cd3a5c-67a7-435a-b7a8-cc0bb60f05de" satisfied condition "success or failure" May 28 22:19:40.245: INFO: Trying to get logs from node jerma-worker pod pod-secrets-48cd3a5c-67a7-435a-b7a8-cc0bb60f05de container secret-volume-test: STEP: delete the pod May 28 22:19:40.677: INFO: Waiting for pod pod-secrets-48cd3a5c-67a7-435a-b7a8-cc0bb60f05de to disappear May 28 22:19:40.688: INFO: Pod pod-secrets-48cd3a5c-67a7-435a-b7a8-cc0bb60f05de no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:19:40.688: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-6519" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":262,"skipped":4119,"failed":0} SSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:19:40.694: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 28 22:19:40.822: INFO: Waiting up to 5m0s for pod "var-expansion-045faea5-42ef-416e-bcc1-f185e411a2ba" in namespace "var-expansion-4800" to be "success or failure" May 28 22:19:40.936: INFO: Pod "var-expansion-045faea5-42ef-416e-bcc1-f185e411a2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 114.709403ms May 28 22:19:42.940: INFO: Pod "var-expansion-045faea5-42ef-416e-bcc1-f185e411a2ba": Phase="Pending", Reason="", readiness=false. Elapsed: 2.118356329s May 28 22:19:44.944: INFO: Pod "var-expansion-045faea5-42ef-416e-bcc1-f185e411a2ba": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.122137472s STEP: Saw pod success May 28 22:19:44.944: INFO: Pod "var-expansion-045faea5-42ef-416e-bcc1-f185e411a2ba" satisfied condition "success or failure" May 28 22:19:44.946: INFO: Trying to get logs from node jerma-worker pod var-expansion-045faea5-42ef-416e-bcc1-f185e411a2ba container dapi-container: STEP: delete the pod May 28 22:19:44.989: INFO: Waiting for pod var-expansion-045faea5-42ef-416e-bcc1-f185e411a2ba to disappear May 28 22:19:45.000: INFO: Pod var-expansion-045faea5-42ef-416e-bcc1-f185e411a2ba no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:19:45.000: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4800" for this suite. 
•{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4126,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:19:45.008: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 22:19:45.098: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:19:51.616: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-9545" for this suite. • [SLOW TEST:6.619 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":264,"skipped":4131,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:19:51.628: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 28 22:19:51.725: INFO: Waiting up to 5m0s for pod "downwardapi-volume-00e7c92b-d282-4704-a029-b889880b5603" in namespace 
"downward-api-8040" to be "success or failure" May 28 22:19:51.737: INFO: Pod "downwardapi-volume-00e7c92b-d282-4704-a029-b889880b5603": Phase="Pending", Reason="", readiness=false. Elapsed: 11.77034ms May 28 22:19:53.740: INFO: Pod "downwardapi-volume-00e7c92b-d282-4704-a029-b889880b5603": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014952535s May 28 22:19:55.789: INFO: Pod "downwardapi-volume-00e7c92b-d282-4704-a029-b889880b5603": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.063896031s STEP: Saw pod success May 28 22:19:55.789: INFO: Pod "downwardapi-volume-00e7c92b-d282-4704-a029-b889880b5603" satisfied condition "success or failure" May 28 22:19:55.797: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-00e7c92b-d282-4704-a029-b889880b5603 container client-container: STEP: delete the pod May 28 22:19:55.832: INFO: Waiting for pod downwardapi-volume-00e7c92b-d282-4704-a029-b889880b5603 to disappear May 28 22:19:55.862: INFO: Pod downwardapi-volume-00e7c92b-d282-4704-a029-b889880b5603 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:19:55.862: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-8040" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4161,"failed":0} SS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:19:55.869: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 22:19:55.960: INFO: Creating ReplicaSet my-hostname-basic-d16082a2-133e-4baf-a494-6b0222823d7a May 28 22:19:55.982: INFO: Pod name my-hostname-basic-d16082a2-133e-4baf-a494-6b0222823d7a: Found 0 pods out of 1 May 28 22:20:00.985: INFO: Pod name my-hostname-basic-d16082a2-133e-4baf-a494-6b0222823d7a: Found 1 pods out of 1 May 28 22:20:00.985: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-d16082a2-133e-4baf-a494-6b0222823d7a" is running May 28 22:20:00.988: INFO: Pod "my-hostname-basic-d16082a2-133e-4baf-a494-6b0222823d7a-bjp9z" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-28 22:19:56 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-28 22:19:59 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-28 22:19:59 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-28 22:19:55 +0000 UTC Reason: Message:}]) May 28 22:20:00.988: INFO: Trying to dial the pod May 28 
22:20:06.028: INFO: Controller my-hostname-basic-d16082a2-133e-4baf-a494-6b0222823d7a: Got expected result from replica 1 [my-hostname-basic-d16082a2-133e-4baf-a494-6b0222823d7a-bjp9z]: "my-hostname-basic-d16082a2-133e-4baf-a494-6b0222823d7a-bjp9z", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:20:06.028: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-2741" for this suite. • [SLOW TEST:10.166 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":266,"skipped":4163,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:20:06.036: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 28 22:20:06.816: INFO: Pod name wrapped-volume-race-1118396b-5670-4bdf-8b47-28338599381f: Found 0 pods out of 5 May 28 22:20:11.825: INFO: Pod name wrapped-volume-race-1118396b-5670-4bdf-8b47-28338599381f: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-1118396b-5670-4bdf-8b47-28338599381f in namespace emptydir-wrapper-3589, will wait for the garbage collector to delete the pods May 28 22:20:23.929: INFO: Deleting ReplicationController wrapped-volume-race-1118396b-5670-4bdf-8b47-28338599381f took: 7.933807ms May 28 22:20:24.029: INFO: Terminating ReplicationController wrapped-volume-race-1118396b-5670-4bdf-8b47-28338599381f pods took: 100.237399ms STEP: Creating RC which spawns configmap-volume pods May 28 22:20:39.595: INFO: Pod name wrapped-volume-race-12cbbe27-ac18-4be6-bdf3-8be9c13e221b: Found 0 pods out of 5 May 28 22:20:44.603: INFO: Pod name wrapped-volume-race-12cbbe27-ac18-4be6-bdf3-8be9c13e221b: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-12cbbe27-ac18-4be6-bdf3-8be9c13e221b in namespace emptydir-wrapper-3589, will wait for the garbage collector to delete the pods May 28 22:21:00.752: INFO: Deleting ReplicationController wrapped-volume-race-12cbbe27-ac18-4be6-bdf3-8be9c13e221b took: 14.43431ms May 28 22:21:01.154: INFO: Terminating ReplicationController wrapped-volume-race-12cbbe27-ac18-4be6-bdf3-8be9c13e221b pods took: 401.104007ms STEP: Creating RC which spawns 
configmap-volume pods May 28 22:21:09.602: INFO: Pod name wrapped-volume-race-172848dd-c66a-45df-a882-7ff5a01509a4: Found 0 pods out of 5 May 28 22:21:14.611: INFO: Pod name wrapped-volume-race-172848dd-c66a-45df-a882-7ff5a01509a4: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-172848dd-c66a-45df-a882-7ff5a01509a4 in namespace emptydir-wrapper-3589, will wait for the garbage collector to delete the pods May 28 22:21:28.691: INFO: Deleting ReplicationController wrapped-volume-race-172848dd-c66a-45df-a882-7ff5a01509a4 took: 6.786186ms May 28 22:21:29.091: INFO: Terminating ReplicationController wrapped-volume-race-172848dd-c66a-45df-a882-7ff5a01509a4 pods took: 400.315369ms STEP: Cleaning up the configMaps [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:21:39.992: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-3589" for this suite. • [SLOW TEST:94.010 seconds] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":267,"skipped":4190,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:21:40.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-7d3b9fdb-a093-4c83-afc0-7f8e834b081e STEP: Creating the pod STEP: Updating configmap configmap-test-upd-7d3b9fdb-a093-4c83-afc0-7f8e834b081e STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:21:46.222: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-653" for this suite. 
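The propagation checked above ("waiting to observe update in volume") can be reproduced by hand. A minimal sketch, assuming hypothetical names; note the kubelet re-projects mounted keys on its periodic sync, so the new value appears after a delay rather than instantly:

    kubectl create configmap test-upd --from-literal=data-1=value-1
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: cfg-watch                 # hypothetical name
    spec:
      containers:
      - name: watcher
        image: busybox:1.29
        command: ["sh", "-c", "while true; do cat /etc/cfg/data-1; echo; sleep 5; done"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/cfg
      volumes:
      - name: cfg
        configMap:
          name: test-upd
    EOF
    kubectl patch configmap test-upd -p '{"data":{"data-1":"value-2"}}'
    kubectl logs -f cfg-watch          # eventually switches from value-1 to value-2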
• [SLOW TEST:6.206 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":268,"skipped":4262,"failed":0} SSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:21:46.254: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-f2nb STEP: Creating a pod to test atomic-volume-subpath May 28 22:21:46.391: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-f2nb" in namespace "subpath-9231" to be "success or failure" May 28 22:21:46.427: INFO: Pod "pod-subpath-test-configmap-f2nb": Phase="Pending", Reason="", readiness=false. Elapsed: 35.765696ms May 28 22:21:48.435: INFO: Pod "pod-subpath-test-configmap-f2nb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044241666s May 28 22:21:50.440: INFO: Pod "pod-subpath-test-configmap-f2nb": Phase="Running", Reason="", readiness=true. Elapsed: 4.048998466s May 28 22:21:52.450: INFO: Pod "pod-subpath-test-configmap-f2nb": Phase="Running", Reason="", readiness=true. Elapsed: 6.059208587s May 28 22:21:54.455: INFO: Pod "pod-subpath-test-configmap-f2nb": Phase="Running", Reason="", readiness=true. Elapsed: 8.063803896s May 28 22:21:56.459: INFO: Pod "pod-subpath-test-configmap-f2nb": Phase="Running", Reason="", readiness=true. Elapsed: 10.068118644s May 28 22:21:58.464: INFO: Pod "pod-subpath-test-configmap-f2nb": Phase="Running", Reason="", readiness=true. Elapsed: 12.072643622s May 28 22:22:00.468: INFO: Pod "pod-subpath-test-configmap-f2nb": Phase="Running", Reason="", readiness=true. Elapsed: 14.077224452s May 28 22:22:02.473: INFO: Pod "pod-subpath-test-configmap-f2nb": Phase="Running", Reason="", readiness=true. Elapsed: 16.081402726s May 28 22:22:04.477: INFO: Pod "pod-subpath-test-configmap-f2nb": Phase="Running", Reason="", readiness=true. Elapsed: 18.085660018s May 28 22:22:06.484: INFO: Pod "pod-subpath-test-configmap-f2nb": Phase="Running", Reason="", readiness=true. Elapsed: 20.092517667s May 28 22:22:08.488: INFO: Pod "pod-subpath-test-configmap-f2nb": Phase="Running", Reason="", readiness=true. Elapsed: 22.096665414s May 28 22:22:10.492: INFO: Pod "pod-subpath-test-configmap-f2nb": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.101143536s STEP: Saw pod success May 28 22:22:10.492: INFO: Pod "pod-subpath-test-configmap-f2nb" satisfied condition "success or failure" May 28 22:22:10.496: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-f2nb container test-container-subpath-configmap-f2nb: STEP: delete the pod May 28 22:22:10.520: INFO: Waiting for pod pod-subpath-test-configmap-f2nb to disappear May 28 22:22:10.525: INFO: Pod pod-subpath-test-configmap-f2nb no longer exists STEP: Deleting pod pod-subpath-test-configmap-f2nb May 28 22:22:10.525: INFO: Deleting pod "pod-subpath-test-configmap-f2nb" in namespace "subpath-9231" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:22:10.527: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-9231" for this suite. • [SLOW TEST:24.281 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":269,"skipped":4265,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:22:10.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:22:21.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8133" for this suite. • [SLOW TEST:11.163 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":270,"skipped":4300,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:22:21.699: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-9170 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-9170 STEP: creating replication controller externalsvc in namespace services-9170 I0528 22:22:21.981029 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-9170, replica count: 2 I0528 22:22:25.031634 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0528 22:22:28.031927 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 28 22:22:28.085: INFO: Creating new exec pod May 28 22:22:32.151: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-9170 execpodnv4s9 -- /bin/sh -x -c nslookup nodeport-service' May 28 22:22:35.166: INFO: stderr: "I0528 22:22:35.052529 4011 log.go:172] (0xc0000ff6b0) (0xc00023f4a0) Create stream\nI0528 22:22:35.052571 4011 log.go:172] (0xc0000ff6b0) (0xc00023f4a0) Stream added, broadcasting: 1\nI0528 22:22:35.055670 4011 log.go:172] (0xc0000ff6b0) Reply frame received for 1\nI0528 22:22:35.055719 4011 log.go:172] (0xc0000ff6b0) (0xc00023f540) Create stream\nI0528 22:22:35.055747 4011 log.go:172] (0xc0000ff6b0) (0xc00023f540) Stream added, broadcasting: 3\nI0528 22:22:35.056572 4011 log.go:172] (0xc0000ff6b0) Reply frame received for 3\nI0528 22:22:35.056610 4011 log.go:172] (0xc0000ff6b0) (0xc000d10000) Create stream\nI0528 22:22:35.056623 4011 log.go:172] (0xc0000ff6b0) (0xc000d10000) Stream added, broadcasting: 5\nI0528 22:22:35.057826 4011 log.go:172] (0xc0000ff6b0) Reply frame received for 5\nI0528 22:22:35.123852 4011 log.go:172] (0xc0000ff6b0) Data frame received for 5\nI0528 22:22:35.123882 4011 log.go:172] (0xc000d10000) (5) Data frame handling\nI0528 22:22:35.123904 4011 log.go:172] (0xc000d10000) (5) Data frame sent\n+ nslookup nodeport-service\nI0528 22:22:35.150985 4011 log.go:172] 
(0xc0000ff6b0) Data frame received for 3\nI0528 22:22:35.151019 4011 log.go:172] (0xc00023f540) (3) Data frame handling\nI0528 22:22:35.151040 4011 log.go:172] (0xc00023f540) (3) Data frame sent\nI0528 22:22:35.151914 4011 log.go:172] (0xc0000ff6b0) Data frame received for 3\nI0528 22:22:35.151926 4011 log.go:172] (0xc00023f540) (3) Data frame handling\nI0528 22:22:35.151932 4011 log.go:172] (0xc00023f540) (3) Data frame sent\nI0528 22:22:35.152732 4011 log.go:172] (0xc0000ff6b0) Data frame received for 3\nI0528 22:22:35.152760 4011 log.go:172] (0xc00023f540) (3) Data frame handling\nI0528 22:22:35.152916 4011 log.go:172] (0xc0000ff6b0) Data frame received for 5\nI0528 22:22:35.152936 4011 log.go:172] (0xc000d10000) (5) Data frame handling\nI0528 22:22:35.155235 4011 log.go:172] (0xc0000ff6b0) Data frame received for 1\nI0528 22:22:35.155258 4011 log.go:172] (0xc00023f4a0) (1) Data frame handling\nI0528 22:22:35.155270 4011 log.go:172] (0xc00023f4a0) (1) Data frame sent\nI0528 22:22:35.155285 4011 log.go:172] (0xc0000ff6b0) (0xc00023f4a0) Stream removed, broadcasting: 1\nI0528 22:22:35.155296 4011 log.go:172] (0xc0000ff6b0) Go away received\nI0528 22:22:35.155628 4011 log.go:172] (0xc0000ff6b0) (0xc00023f4a0) Stream removed, broadcasting: 1\nI0528 22:22:35.155649 4011 log.go:172] (0xc0000ff6b0) (0xc00023f540) Stream removed, broadcasting: 3\nI0528 22:22:35.155660 4011 log.go:172] (0xc0000ff6b0) (0xc000d10000) Stream removed, broadcasting: 5\n" May 28 22:22:35.166: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-9170.svc.cluster.local\tcanonical name = externalsvc.services-9170.svc.cluster.local.\nName:\texternalsvc.services-9170.svc.cluster.local\nAddress: 10.109.29.122\n\n" STEP: deleting ReplicationController externalsvc in namespace services-9170, will wait for the garbage collector to delete the pods May 28 22:22:35.226: INFO: Deleting ReplicationController externalsvc took: 6.345751ms May 28 22:22:35.526: INFO: Terminating ReplicationController externalsvc pods took: 300.261385ms May 28 22:22:49.551: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:22:49.567: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-9170" for this suite. 
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:27.900 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should be able to change the type from NodePort to ExternalName [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":271,"skipped":4354,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI
  should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 28 22:22:49.600: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory limit [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 28 22:22:49.672: INFO: Waiting up to 5m0s for pod "downwardapi-volume-999d91c5-0f5e-45fd-a776-3f024a3d4176" in namespace "projected-4374" to be "success or failure"
May 28 22:22:49.675: INFO: Pod "downwardapi-volume-999d91c5-0f5e-45fd-a776-3f024a3d4176": Phase="Pending", Reason="", readiness=false. Elapsed: 3.495577ms
May 28 22:22:51.736: INFO: Pod "downwardapi-volume-999d91c5-0f5e-45fd-a776-3f024a3d4176": Phase="Pending", Reason="", readiness=false. Elapsed: 2.06378097s
May 28 22:22:53.740: INFO: Pod "downwardapi-volume-999d91c5-0f5e-45fd-a776-3f024a3d4176": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.068206077s
STEP: Saw pod success
May 28 22:22:53.740: INFO: Pod "downwardapi-volume-999d91c5-0f5e-45fd-a776-3f024a3d4176" satisfied condition "success or failure"
May 28 22:22:53.743: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-999d91c5-0f5e-45fd-a776-3f024a3d4176 container client-container:
STEP: delete the pod
May 28 22:22:53.761: INFO: Waiting for pod downwardapi-volume-999d91c5-0f5e-45fd-a776-3f024a3d4176 to disappear
May 28 22:22:53.813: INFO: Pod downwardapi-volume-999d91c5-0f5e-45fd-a776-3f024a3d4176 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 28 22:22:53.813: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-4374" for this suite.
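The value projected above comes from a resourceFieldRef inside a projected downwardAPI volume. A sketch of the shape involved, with hypothetical names; with the default divisor of 1, a 64Mi limit should be written to the file as 67108864 (bytes):

    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: podinfo-demo              # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/podinfo/mem_limit"]
        resources:
          limits:
            memory: 64Mi
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        projected:
          sources:
          - downwardAPI:
              items:
              - path: mem_limit
                resourceFieldRef:
                  containerName: client-container
                  resource: limits.memory
    EOF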
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":272,"skipped":4408,"failed":0} SSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:22:53.823: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 28 22:22:53.895: INFO: Waiting up to 5m0s for pod "downward-api-8eb3ae25-f9c9-4172-a1a7-03fa915bab97" in namespace "downward-api-7635" to be "success or failure" May 28 22:22:53.898: INFO: Pod "downward-api-8eb3ae25-f9c9-4172-a1a7-03fa915bab97": Phase="Pending", Reason="", readiness=false. Elapsed: 3.710241ms May 28 22:22:55.903: INFO: Pod "downward-api-8eb3ae25-f9c9-4172-a1a7-03fa915bab97": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007916729s May 28 22:22:57.907: INFO: Pod "downward-api-8eb3ae25-f9c9-4172-a1a7-03fa915bab97": Phase="Running", Reason="", readiness=true. Elapsed: 4.012429405s May 28 22:22:59.911: INFO: Pod "downward-api-8eb3ae25-f9c9-4172-a1a7-03fa915bab97": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016611026s STEP: Saw pod success May 28 22:22:59.911: INFO: Pod "downward-api-8eb3ae25-f9c9-4172-a1a7-03fa915bab97" satisfied condition "success or failure" May 28 22:22:59.914: INFO: Trying to get logs from node jerma-worker2 pod downward-api-8eb3ae25-f9c9-4172-a1a7-03fa915bab97 container dapi-container: STEP: delete the pod May 28 22:23:00.008: INFO: Waiting for pod downward-api-8eb3ae25-f9c9-4172-a1a7-03fa915bab97 to disappear May 28 22:23:00.038: INFO: Pod downward-api-8eb3ae25-f9c9-4172-a1a7-03fa915bab97 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:23:00.038: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7635" for this suite. 
• [SLOW TEST:6.222 seconds]
[sig-node] Downward API
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downward_api.go:33
  should provide host IP as an env var [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":273,"skipped":4414,"failed":0}
SSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected configMap
  should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 28 22:23:00.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating configMap with name projected-configmap-test-volume-map-3c0e4353-0f65-4430-99d7-13c7bc33ceb5
STEP: Creating a pod to test consume configMaps
May 28 22:23:00.184: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-e3f5e2bd-f44c-40b9-a634-fd3d75474190" in namespace "projected-2726" to be "success or failure"
May 28 22:23:00.187: INFO: Pod "pod-projected-configmaps-e3f5e2bd-f44c-40b9-a634-fd3d75474190": Phase="Pending", Reason="", readiness=false. Elapsed: 3.798337ms
May 28 22:23:02.192: INFO: Pod "pod-projected-configmaps-e3f5e2bd-f44c-40b9-a634-fd3d75474190": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008098241s
May 28 22:23:04.196: INFO: Pod "pod-projected-configmaps-e3f5e2bd-f44c-40b9-a634-fd3d75474190": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012292309s
STEP: Saw pod success
May 28 22:23:04.196: INFO: Pod "pod-projected-configmaps-e3f5e2bd-f44c-40b9-a634-fd3d75474190" satisfied condition "success or failure"
May 28 22:23:04.198: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-e3f5e2bd-f44c-40b9-a634-fd3d75474190 container projected-configmap-volume-test:
STEP: delete the pod
May 28 22:23:04.282: INFO: Waiting for pod pod-projected-configmaps-e3f5e2bd-f44c-40b9-a634-fd3d75474190 to disappear
May 28 22:23:04.293: INFO: Pod pod-projected-configmaps-e3f5e2bd-f44c-40b9-a634-fd3d75474190 no longer exists
[AfterEach] [sig-storage] Projected configMap
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 28 22:23:04.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-2726" for this suite.
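The "mappings and Item mode" wording above refers to the items list of a configMap projection: each key can be remapped to an arbitrary path and given its own file mode. A sketch under hypothetical names:

    kubectl create configmap projected-cm-demo --from-literal=data-1=value-1
    kubectl create -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-pod          # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox:1.29
        command: ["sh", "-c", "cat /etc/projected/path/to/data-2"]
        volumeMounts:
        - name: cfg
          mountPath: /etc/projected
      volumes:
      - name: cfg
        projected:
          sources:
          - configMap:
              name: projected-cm-demo
              items:
              - key: data-1           # key in the ConfigMap
                path: path/to/data-2  # remapped file path inside the volume
                mode: 0400            # per-item file mode (octal in YAML)
    EOF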
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":274,"skipped":4429,"failed":0} SSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:23:04.300: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 22:23:04.346: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:23:08.387: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-7781" for this suite. •{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4433,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:23:08.394: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 28 22:23:08.486: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-9974' May 28 22:23:08.817: INFO: stderr: "" May 28 22:23:08.817: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 28 22:23:09.821: INFO: Selector matched 1 pods for map[app:agnhost] May 28 22:23:09.821: INFO: Found 0 / 1 May 28 22:23:10.821: INFO: Selector matched 1 pods for map[app:agnhost] May 28 22:23:10.821: INFO: Found 0 / 1 May 28 22:23:11.822: INFO: Selector matched 1 pods for map[app:agnhost] May 28 22:23:11.822: INFO: Found 0 / 1 May 28 22:23:12.822: INFO: Selector matched 1 pods for map[app:agnhost] May 28 22:23:12.822: INFO: Found 1 / 1 May 28 22:23:12.822: INFO: WaitFor completed with timeout 5m0s. 
Pods found = 1 out of 1 STEP: patching all pods May 28 22:23:12.825: INFO: Selector matched 1 pods for map[app:agnhost] May 28 22:23:12.825: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 28 22:23:12.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-4gf54 --namespace=kubectl-9974 -p {"metadata":{"annotations":{"x":"y"}}}' May 28 22:23:12.932: INFO: stderr: "" May 28 22:23:12.932: INFO: stdout: "pod/agnhost-master-4gf54 patched\n" STEP: checking annotations May 28 22:23:12.935: INFO: Selector matched 1 pods for map[app:agnhost] May 28 22:23:12.936: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 28 22:23:12.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9974" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":276,"skipped":4445,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 28 22:23:12.942: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 28 22:23:13.074: INFO: Creating deployment "webserver-deployment" May 28 22:23:13.099: INFO: Waiting for observed generation 1 May 28 22:23:15.119: INFO: Waiting for all required pods to come up May 28 22:23:15.124: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 28 22:23:25.688: INFO: Waiting for deployment "webserver-deployment" to complete May 28 22:23:25.696: INFO: Updating deployment "webserver-deployment" with a non-existent image May 28 22:23:25.702: INFO: Updating deployment webserver-deployment May 28 22:23:25.702: INFO: Waiting for observed generation 2 May 28 22:23:27.808: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 28 22:23:27.810: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 28 22:23:27.811: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 28 22:23:27.817: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 28 22:23:27.817: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 28 22:23:27.819: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 28 22:23:27.822: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 28 22:23:27.822: INFO: Scaling up the 
deployment "webserver-deployment" from 10 to 30 May 28 22:23:27.827: INFO: Updating deployment webserver-deployment May 28 22:23:27.827: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 28 22:23:27.986: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 28 22:23:28.035: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 28 22:23:30.721: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-6852 /apis/apps/v1/namespaces/deployment-6852/deployments/webserver-deployment bf19510b-07c9-4c02-a601-592564aa3b3d 19920405 3 2020-05-28 22:23:13 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00471dd38 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:33,UpdatedReplicas:13,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-28 22:23:27 +0000 UTC,LastTransitionTime:2020-05-28 22:23:27 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-28 22:23:28 +0000 UTC,LastTransitionTime:2020-05-28 22:23:13 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 28 22:23:30.917: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-6852 /apis/apps/v1/namespaces/deployment-6852/replicasets/webserver-deployment-c7997dcc8 9a87ecf5-a551-42cb-94db-661c28dca3eb 19920402 3 2020-05-28 22:23:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment bf19510b-07c9-4c02-a601-592564aa3b3d 0xc00418a207 0xc00418a208}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] 
[] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00418a298 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 28 22:23:30.918: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 28 22:23:30.918: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-6852 /apis/apps/v1/namespaces/deployment-6852/replicasets/webserver-deployment-595b5b9587 62bbffff-eadb-42ec-aece-e98a8c2586cf 19920387 3 2020-05-28 22:23:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment bf19510b-07c9-4c02-a601-592564aa3b3d 0xc00418a147 0xc00418a148}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00418a1a8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:20,FullyLabeledReplicas:20,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 28 22:23:31.243: INFO: Pod "webserver-deployment-595b5b9587-2fm4b" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-2fm4b webserver-deployment-595b5b9587- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-595b5b9587-2fm4b b051fefa-15aa-4d7d-acd2-d8d826cd61bc 19920246 0 2020-05-28 22:23:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 62bbffff-eadb-42ec-aece-e98a8c2586cf 0xc00418ab27 0xc00418ab28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:24 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.159,StartTime:2020-05-28 22:23:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-28 22:23:24 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://7853e39d8d1ac4fb04fb956a134a53e93997a8e8798cd0826879dc0bd5a0b8e7,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.159,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:23:31.243: INFO: Pod "webserver-deployment-595b5b9587-62h64" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-62h64 webserver-deployment-595b5b9587- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-595b5b9587-62h64 a36d01c0-8cc6-4a81-8353-9f62a35c0807 19920223 0 2020-05-28 22:23:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 62bbffff-eadb-42ec-aece-e98a8c2586cf 0xc00418aea7 0xc00418aea8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:
,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.191,StartTime:2020-05-28 22:23:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-28 22:23:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://35ad2815660139ab07543a62d256e64e67e746df47278a5d3ca47a7686f13a14,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.191,},},EphemeralContainerStatuses:[]ContainerStatus{},},}
(All webserver-deployment-595b5b9587 replicas in this dump share one PodSpec: a single httpd container running docker.io/library/httpd:2.4.38-alpine with no resource requests or limits, so QOSClass is BestEffort; the default-token-2mfv8 service-account token mounted read-only at /var/run/secrets/kubernetes.io/serviceaccount; restartPolicy Always with terminationGracePeriodSeconds 0; dnsPolicy ClusterFirst under the default-scheduler; and the two admission-injected 300s NoExecute tolerations for node.kubernetes.io/not-ready and node.kubernetes.io/unreachable. The per-pod entries that follow record only the fields that differ: identity, node, phase, conditions, and container state.)
May 28 22:23:31.244: INFO: Pod "webserver-deployment-595b5b9587-6sqtv" is not available: uid 48db6b7c-e630-42fc-ae78-6cca50b18fb7, resourceVersion 19920452, created 2020-05-28 22:23:28 +0000 UTC, scheduled to jerma-worker2 (hostIP 172.17.0.8); Phase=Pending, httpd Waiting (ContainerCreating), Ready=False and ContainersReady=False (ContainersNotReady: containers with unready status: [httpd]), Initialized=True, PodScheduled=True, no podIP yet.
May 28 22:23:31.244: INFO: Pod "webserver-deployment-595b5b9587-89wpx" is not available: uid 08cd4fcd-46a9-4ada-8623-6ad19dd97e2f, resourceVersion 19920400, created 2020-05-28 22:23:27 +0000 UTC, scheduled to jerma-worker2 (hostIP 172.17.0.8); Phase=Pending, httpd Waiting (ContainerCreating), Ready=False and ContainersReady=False (ContainersNotReady), PodScheduled=True at 22:23:27, no podIP yet.
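"is available" / "is not available" in these entries follows the usual pod-availability rule the deployment utilities apply: the pod must carry condition Ready=True, and it must have held Ready for at least the deployment's minReadySeconds (this test uses 0, so Ready alone suffices). A minimal self-contained sketch of that rule, with pared-down local types standing in for the real k8s.io/api/core/v1 structs:

    package main

    import (
        "fmt"
        "time"
    )

    // PodCondition and Pod are simplified stand-ins for the corev1 types.
    type PodCondition struct {
        Type               string // e.g. "Ready"
        Status             string // "True", "False", "Unknown"
        LastTransitionTime time.Time
    }

    type Pod struct {
        Name       string
        Conditions []PodCondition
    }

    // isPodAvailable mirrors the availability rule: Ready must be True,
    // and must have been True for at least minReadySeconds before now.
    func isPodAvailable(p Pod, minReadySeconds int32, now time.Time) bool {
        for _, c := range p.Conditions {
            if c.Type != "Ready" {
                continue
            }
            if c.Status != "True" {
                return false
            }
            if minReadySeconds == 0 {
                return true
            }
            minReady := time.Duration(minReadySeconds) * time.Second
            return c.LastTransitionTime.Add(minReady).Before(now)
        }
        return false // no Ready condition recorded yet
    }

    func main() {
        now := time.Date(2020, 5, 28, 22, 23, 31, 0, time.UTC)
        ready := time.Date(2020, 5, 28, 22, 23, 22, 0, time.UTC)
        p := Pod{Name: "webserver-deployment-595b5b9587-rqrpm",
            Conditions: []PodCondition{{Type: "Ready", Status: "True", LastTransitionTime: ready}}}
        fmt.Println(isPodAvailable(p, 0, now)) // true
    }

Under this rule, each Running pod below with Ready=True counts as available the moment the kubelet flips the condition, while every ContainerCreating pod does not.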
May 28 22:23:31.244: INFO: Pod "webserver-deployment-595b5b9587-8cqwq" is available: uid bd3ab8f0-3433-4b10-8772-497a18728baf, resourceVersion 19920200, created 2020-05-28 22:23:13 +0000 UTC, scheduled to jerma-worker (hostIP 172.17.0.10); Phase=Running, podIP 10.244.1.156, httpd Running since 22:23:19 (containerd://94dfd2294a27827a2a1cca29d63e3ac2176655c6dcb550f348861104f04b40b6), Ready=True since 22:23:20.
May 28 22:23:31.244: INFO: Pod "webserver-deployment-595b5b9587-8lvxh" is available: uid 9b8fd3bb-2acb-43ea-a5da-094007672ccb, resourceVersion 19920256, created 2020-05-28 22:23:13 +0000 UTC, scheduled to jerma-worker2 (hostIP 172.17.0.8); Phase=Running, podIP 10.244.2.194, httpd Running since 22:23:24 (containerd://f4602a993a0bb977c426f663d001450847078b60a6762718228f954c43b1aa3f), Ready=True since 22:23:25.
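Each status block ends with QOSClass:BestEffort, and that follows mechanically from the shared spec: the httpd container declares neither requests nor limits. A simplified sketch of the three-tier QoS classification (the real kubelet logic additionally tracks cpu and memory as distinct resources, so treat this as illustrative only):

    package main

    import "fmt"

    // ResourceList maps a resource name (cpu, memory) to a quantity string.
    type ResourceList map[string]string

    type Container struct {
        Requests ResourceList
        Limits   ResourceList
    }

    // qosClass applies the standard tiering: nothing set anywhere gives
    // BestEffort; requests equal to limits for every resource of every
    // container gives Guaranteed; anything in between gives Burstable.
    func qosClass(containers []Container) string {
        anySet := false
        allGuaranteed := true
        for _, c := range containers {
            if len(c.Requests) > 0 || len(c.Limits) > 0 {
                anySet = true
            }
            for res, req := range c.Requests {
                if c.Limits[res] != req {
                    allGuaranteed = false
                }
            }
            if len(c.Limits) == 0 || len(c.Limits) != len(c.Requests) {
                allGuaranteed = false
            }
        }
        switch {
        case !anySet:
            return "BestEffort"
        case allGuaranteed:
            return "Guaranteed"
        default:
            return "Burstable"
        }
    }

    func main() {
        httpd := Container{} // no requests, no limits, as in the dump above
        fmt.Println(qosClass([]Container{httpd})) // BestEffort
    }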
May 28 22:23:31.244: INFO: Pod "webserver-deployment-595b5b9587-bvzfj" is not available: uid acd7d793-1150-428d-9dfb-06cb2c28d99c, resourceVersion 19920421, created 2020-05-28 22:23:27 +0000 UTC, scheduled to jerma-worker2 (hostIP 172.17.0.8); Phase=Pending, httpd Waiting (ContainerCreating), Ready=False and ContainersReady=False (ContainersNotReady: containers with unready status: [httpd]), Initialized=True, PodScheduled=True, no podIP yet.
May 28 22:23:31.245: INFO: Pod "webserver-deployment-595b5b9587-cqf5p" is not available: uid 6c4c7e65-0d3b-46d8-8cd8-4e6daa92d240, resourceVersion 19920413, created 2020-05-28 22:23:27 +0000 UTC, scheduled to jerma-worker2 (hostIP 172.17.0.8); Phase=Pending, httpd Waiting (ContainerCreating), Ready=False and ContainersReady=False (ContainersNotReady), PodScheduled=True, no podIP yet.
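All of the "not available" pods show the same condition shape: Initialized=True and PodScheduled=True, while Ready and ContainersReady sit at False with reason ContainersNotReady until the image pull and container start complete. A sketch of the condition lookup such a check rests on (local stand-in types; upstream ships an equivalent GetPodCondition helper in its pod utilities):

    package main

    import "fmt"

    // PodCondition is a pared-down stand-in for the corev1 type.
    type PodCondition struct {
        Type, Status, Reason, Message string
    }

    // getPodCondition returns the condition of the given type, or nil if
    // the kubelet has not recorded one yet.
    func getPodCondition(conds []PodCondition, condType string) *PodCondition {
        for i := range conds {
            if conds[i].Type == condType {
                return &conds[i]
            }
        }
        return nil
    }

    func main() {
        conds := []PodCondition{
            {Type: "Initialized", Status: "True"},
            {Type: "Ready", Status: "False", Reason: "ContainersNotReady",
                Message: "containers with unready status: [httpd]"},
            {Type: "PodScheduled", Status: "True"},
        }
        if c := getPodCondition(conds, "Ready"); c != nil && c.Status != "True" {
            fmt.Printf("not ready: %s (%s)\n", c.Reason, c.Message)
        }
    }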
May 28 22:23:31.245: INFO: Pod "webserver-deployment-595b5b9587-fr8tj" is available: uid ce6b525a-5ac9-4d55-bdf8-5b34d98652a4, resourceVersion 19920188, created 2020-05-28 22:23:13 +0000 UTC, scheduled to jerma-worker (hostIP 172.17.0.10); Phase=Running, podIP 10.244.1.155, httpd Running since 22:23:16 (containerd://bde7f2148194aca426060a736a578bd5434f8c119e8ebc4c3c9acccbb7c7f15d), Ready=True since 22:23:18.
May 28 22:23:31.245: INFO: Pod "webserver-deployment-595b5b9587-hcg4n" is not available: uid 5f1c36b1-23a6-47b6-ac8a-6128c53b11b9, resourceVersion 19920447, created 2020-05-28 22:23:28 +0000 UTC, scheduled to jerma-worker (hostIP 172.17.0.10); Phase=Pending, httpd Waiting (ContainerCreating), Ready=False and ContainersReady=False (ContainersNotReady), PodScheduled=True, no podIP yet.
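The deployment template itself declares no tolerations; the node.kubernetes.io/not-ready and node.kubernetes.io/unreachable entries with tolerationSeconds 300 that appear in every spec are appended by the DefaultTolerationSeconds admission plugin. Sketched as Go struct literals (local types, not the admission plugin's own code):

    package main

    import "fmt"

    type Toleration struct {
        Key               string
        Operator          string
        Effect            string
        TolerationSeconds *int64
    }

    // defaultTolerations builds the pair the admission plugin appends to
    // any pod that does not already tolerate these two taints.
    func defaultTolerations(seconds int64) []Toleration {
        return []Toleration{
            {Key: "node.kubernetes.io/not-ready", Operator: "Exists",
                Effect: "NoExecute", TolerationSeconds: &seconds},
            {Key: "node.kubernetes.io/unreachable", Operator: "Exists",
                Effect: "NoExecute", TolerationSeconds: &seconds},
        }
    }

    func main() {
        for _, t := range defaultTolerations(300) {
            fmt.Printf("%s %s %s %ds\n", t.Key, t.Operator, t.Effect, *t.TolerationSeconds)
        }
    }

In practice this means that if a node goes NotReady or unreachable, these pods are evicted after five minutes rather than immediately.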
May 28 22:23:31.246: INFO: Pod "webserver-deployment-595b5b9587-hqqxc" is available: uid 5cdde57b-7967-4eb9-89ec-1ad3fa319a2b, resourceVersion 19920248, created 2020-05-28 22:23:13 +0000 UTC, scheduled to jerma-worker2 (hostIP 172.17.0.8); Phase=Running, podIP 10.244.2.192, httpd Running since 22:23:23 (containerd://ed806e178874b97499192c8717d35c03b11e999e7ad25ce6def391fa8115e69e), Ready=True since 22:23:24.
May 28 22:23:31.246: INFO: Pod "webserver-deployment-595b5b9587-jsfgg" is not available: uid 6436276e-6d59-421e-bba9-e6f344a55f9b, resourceVersion 19920461, created 2020-05-28 22:23:28 +0000 UTC, scheduled to jerma-worker2 (hostIP 172.17.0.8); Phase=Pending, httpd Waiting (ContainerCreating), Ready=False and ContainersReady=False (ContainersNotReady), PodScheduled=True, no podIP yet.
May 28 22:23:31.246: INFO: Pod "webserver-deployment-595b5b9587-lljgk" is not available: uid beec0922-2635-4b8d-8998-df12c4134ac6, resourceVersion 19920420, created 2020-05-28 22:23:27 +0000 UTC, scheduled to jerma-worker (hostIP 172.17.0.10); Phase=Pending, httpd Waiting (ContainerCreating), Ready=False and ContainersReady=False (ContainersNotReady), PodScheduled=True, no podIP yet.
May 28 22:23:31.246: INFO: Pod "webserver-deployment-595b5b9587-mtm2m" is available: uid 27369b5f-8037-4658-b671-79772c96f4b4, resourceVersion 19920240, created 2020-05-28 22:23:13 +0000 UTC, scheduled to jerma-worker2 (hostIP 172.17.0.8); Phase=Running, podIP 10.244.2.193, httpd Running since 22:23:23 (containerd://32d689d8cc273eed0e87b1bd41a057f36674136fbee058e3e0ad088a9fcb146b), Ready=True since 22:23:24.
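The pattern in the entries so far is consistent: every pod created at 22:23:13 is Running and available, while every pod created at 22:23:27-28 (the fresh burst of replicas) is still Pending in ContainerCreating. A quick tally over the twelve pods summarized above, with a hypothetical podStatus record standing in for the dumped objects:

    package main

    import "fmt"

    // podStatus is a hypothetical compact record of one dumped pod.
    type podStatus struct {
        name      string
        available bool
    }

    func main() {
        // The twelve pods summarized above, in dump order.
        pods := []podStatus{
            {"6sqtv", false}, {"89wpx", false}, {"8cqwq", true}, {"8lvxh", true},
            {"bvzfj", false}, {"cqf5p", false}, {"fr8tj", true}, {"hcg4n", false},
            {"hqqxc", true}, {"jsfgg", false}, {"lljgk", false}, {"mtm2m", true},
        }
        var avail, unavail int
        for _, p := range pods {
            if p.available {
                avail++
            } else {
                unavail++
            }
        }
        // prints: available=5 unavailable=7 total=12
        fmt.Printf("available=%d unavailable=%d total=%d\n", avail, unavail, len(pods))
    }

That split is exactly what a scale-up looks like mid-flight: the older replicas carry the deployment's availability while the new burst works through ContainerCreating.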
May 28 22:23:31.246: INFO: Pod "webserver-deployment-595b5b9587-p5kkz" is not available: uid 890fa2a6-f461-4bd9-b27c-e3a024f996fa, resourceVersion 19920406, created 2020-05-28 22:23:27 +0000 UTC, scheduled to jerma-worker (hostIP 172.17.0.10); Phase=Pending, httpd Waiting (ContainerCreating), Ready=False and ContainersReady=False (ContainersNotReady), PodScheduled=True at 22:23:27, no podIP yet.
May 28 22:23:31.247: INFO: Pod "webserver-deployment-595b5b9587-rmb5z" is not available: uid 07d76e52-dd5e-448f-ba59-6f4b5cd38c9d, resourceVersion 19920450, created 2020-05-28 22:23:28 +0000 UTC, scheduled to jerma-worker (hostIP 172.17.0.10); Phase=Pending, httpd Waiting (ContainerCreating), Ready=False and ContainersReady=False (ContainersNotReady), PodScheduled=True, no podIP yet.
May 28 22:23:31.247: INFO: Pod "webserver-deployment-595b5b9587-rqrpm" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-rqrpm webserver-deployment-595b5b9587- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-595b5b9587-rqrpm 4d7cd586-4553-413a-8bd6-d7b0de5b6571 19920219 0 2020-05-28 22:23:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 62bbffff-eadb-42ec-aece-e98a8c2586cf 0xc00376c887 0xc00376c888}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:22 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.157,StartTime:2020-05-28 22:23:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-28 22:23:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://78435078e366f0a79901cbfb4460cd56ac2ff299c16f58ba53a9949cb8927e22,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.157,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:23:31.247: INFO: Pod "webserver-deployment-595b5b9587-srnsb" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-srnsb webserver-deployment-595b5b9587- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-595b5b9587-srnsb 62e366ca-1329-4693-9a0c-1ef1eeefafd6 19920415 0 2020-05-28 22:23:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 62bbffff-eadb-42ec-aece-e98a8c2586cf 0xc00376ca07 0xc00376ca08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Val
ue:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-28 22:23:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:23:31.247: INFO: Pod "webserver-deployment-595b5b9587-wfnsv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-wfnsv webserver-deployment-595b5b9587- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-595b5b9587-wfnsv 8fa8e469-5696-4aa8-8de5-984021b128db 19920388 0 2020-05-28 22:23:27 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 62bbffff-eadb-42ec-aece-e98a8c2586cf 0xc00376cb67 0xc00376cb68}] [] 
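
The available / not-available verdicts in these pod dumps come straight from the Ready condition: every Pending pod above reports Ready=False with reason ContainersNotReady, while the one Running pod (rqrpm) reports Ready=True. A minimal Go sketch of that check, assuming the k8s.io/api module is available (an illustration, not the e2e framework's own helper):

    package main

    import (
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    )

    // isPodReady reports whether the pod's Ready condition is True --
    // the signal that separates the "is available" dumps from the
    // "is not available" ones in this log.
    func isPodReady(pod *corev1.Pod) bool {
    	for _, cond := range pod.Status.Conditions {
    		if cond.Type == corev1.PodReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	// A pod stuck in ContainerCreating, as in the dumps above:
    	// Ready is False with reason ContainersNotReady.
    	pending := &corev1.Pod{
    		Status: corev1.PodStatus{
    			Phase: corev1.PodPending,
    			Conditions: []corev1.PodCondition{{
    				Type:   corev1.PodReady,
    				Status: corev1.ConditionFalse,
    				Reason: "ContainersNotReady",
    			}},
    		},
    	}
    	fmt.Println(isPodReady(pending)) // false
    }
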
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-28 22:23:28 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:23:31.248: INFO: Pod "webserver-deployment-595b5b9587-zts5p" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-zts5p webserver-deployment-595b5b9587- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-595b5b9587-zts5p e57f5081-62b1-4acd-ae18-9b4b76603f87 19920448 0 2020-05-28 22:23:28 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 62bbffff-eadb-42ec-aece-e98a8c2586cf 0xc00376ccc7 0xc00376ccc8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:n
il,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-28 22:23:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:23:31.248: INFO: Pod "webserver-deployment-c7997dcc8-75nv5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-75nv5 webserver-deployment-c7997dcc8- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-c7997dcc8-75nv5 5175bf44-c13d-42b2-9992-b70ec1c8cc62 19920457 0 2020-05-28 22:23:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9a87ecf5-a551-42cb-94db-661c28dca3eb 0xc00376ce27 0xc00376ce28}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.160,StartTime:2020-05-28 22:23:25 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.160,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:23:31.248: INFO: Pod "webserver-deployment-c7997dcc8-8hqd4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8hqd4 webserver-deployment-c7997dcc8- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-c7997dcc8-8hqd4 61ec3392-5154-4012-a5d5-1ce0926e37b6 19920432 0 2020-05-28 22:23:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9a87ecf5-a551-42cb-94db-661c28dca3eb 0xc00376cfd7 0xc00376cfd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toler
ation{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-28 22:23:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:23:31.248: INFO: Pod "webserver-deployment-c7997dcc8-8jl5l" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-8jl5l webserver-deployment-c7997dcc8- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-c7997dcc8-8jl5l a216673b-5849-4e9a-9baa-9612aba88dbe 19920408 0 2020-05-28 22:23:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9a87ecf5-a551-42cb-94db-661c28dca3eb 0xc00376d157 0xc00376d158}] [] 
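
The ErrImagePull seen above is the expected outcome for the c7997dcc8 ReplicaSet: its pod template names the image webserver:404, which cannot be pulled, so those pods never leave Pending. Assuming a reachable cluster and client-go (a hypothetical sketch, not part of the test), the waiting reasons for exactly these pods could be listed like this:

    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Illustrative: load whatever kubeconfig the environment points at.
    	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// The new ReplicaSet's pods carry the label pod-template-hash=c7997dcc8,
    	// as shown in the ObjectMeta of the dumps above.
    	pods, err := cs.CoreV1().Pods("deployment-6852").List(context.TODO(),
    		metav1.ListOptions{LabelSelector: "pod-template-hash=c7997dcc8"})
    	if err != nil {
    		panic(err)
    	}
    	for _, p := range pods.Items {
    		for _, st := range p.Status.ContainerStatuses {
    			if st.State.Waiting != nil {
    				// Prints ContainerCreating or ErrImagePull, matching the dumps.
    				fmt.Printf("%s\t%s\n", p.Name, st.State.Waiting.Reason)
    			}
    		}
    	}
    }
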
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:27 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-28 22:23:28 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:23:31.249: INFO: Pod "webserver-deployment-c7997dcc8-fjts6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fjts6 webserver-deployment-c7997dcc8- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-c7997dcc8-fjts6 93a930fa-d4f4-4426-8101-96992b97f2a0 19920465 0 2020-05-28 22:23:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9a87ecf5-a551-42cb-94db-661c28dca3eb 0xc00376d2d7 0xc00376d2d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-28 22:23:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:23:31.249: INFO: Pod "webserver-deployment-c7997dcc8-fkpjm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fkpjm webserver-deployment-c7997dcc8- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-c7997dcc8-fkpjm b34fbcab-ec4a-4d17-abe2-eb715061b8b9 19920436 0 2020-05-28 22:23:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9a87ecf5-a551-42cb-94db-661c28dca3eb 0xc00376d457 0xc00376d458}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-28 22:23:28 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:23:31.249: INFO: Pod "webserver-deployment-c7997dcc8-fv5bm" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fv5bm webserver-deployment-c7997dcc8- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-c7997dcc8-fv5bm 4cc268e0-6c19-49e5-bb59-a0fbfe365a29 19920467 0 2020-05-28 22:23:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9a87ecf5-a551-42cb-94db-661c28dca3eb 0xc00376d5d7 0xc00376d5d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overh
ead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.196,StartTime:2020-05-28 22:23:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ErrImagePull,Message:rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/webserver:404": failed to resolve reference "docker.io/library/webserver:404": pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.196,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:23:31.249: INFO: Pod "webserver-deployment-c7997dcc8-jzbqg" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-jzbqg webserver-deployment-c7997dcc8- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-c7997dcc8-jzbqg 9ea34460-6d50-4d9b-b39c-db87954ca505 19920424 0 2020-05-28 22:23:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9a87ecf5-a551-42cb-94db-661c28dca3eb 0xc00376d787 0xc00376d788}] [] 
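
The quoted pull error also shows the image-name normalization step: the short name webserver:404 from the pod spec becomes docker.io/library/webserver:404 (default registry docker.io, default namespace library) before the registry rejects it as nonexistent. A sketch of the same normalization, assuming the github.com/distribution/reference library:

    package main

    import (
    	"fmt"

    	"github.com/distribution/reference"
    )

    func main() {
    	// Normalize the short name the way container runtimes do before
    	// pulling: default registry docker.io, default namespace library/.
    	ref, err := reference.ParseNormalizedNamed("webserver:404")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(ref.String()) // docker.io/library/webserver:404
    }
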
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-28 22:23:28 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:23:31.250: INFO: Pod "webserver-deployment-c7997dcc8-kbkhh" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kbkhh webserver-deployment-c7997dcc8- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-c7997dcc8-kbkhh 5f34a6e2-aae6-46a2-b0a2-685b6cb11fec 19920417 0 2020-05-28 22:23:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9a87ecf5-a551-42cb-94db-661c28dca3eb 0xc00376d907 0xc00376d908}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overh
ead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-28 22:23:28 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:23:31.250: INFO: Pod "webserver-deployment-c7997dcc8-kk4t8" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-kk4t8 webserver-deployment-c7997dcc8- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-c7997dcc8-kk4t8 315c2731-b729-4dbd-ab38-d5099db85337 19920423 0 2020-05-28 22:23:28 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9a87ecf5-a551-42cb-94db-661c28dca3eb 0xc00376da87 0xc00376da88}] [] 
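
Each dump also records QOSClass:BestEffort, which follows from the empty Limits and Requests in every container spec: when no container declares resource requests or limits, the pod is classified BestEffort. A simplified sketch of that rule (the kubelet's real classifier also distinguishes Guaranteed and Burstable, and considers init containers):

    package qos

    import corev1 "k8s.io/api/core/v1"

    // IsBestEffort mirrors the case visible in these dumps: no container
    // declares resource requests or limits, so the pod is BestEffort.
    func IsBestEffort(pod *corev1.Pod) bool {
    	for _, c := range pod.Spec.Containers {
    		if len(c.Resources.Requests) > 0 || len(c.Resources.Limits) > 0 {
    			return false
    		}
    	}
    	return true
    }
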
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-28 22:23:28 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:23:31.250: INFO: Pod "webserver-deployment-c7997dcc8-mxbdz" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-mxbdz webserver-deployment-c7997dcc8- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-c7997dcc8-mxbdz 674e701e-76d2-4562-b0be-db76af7ed62a 19920298 0 2020-05-28 22:23:25 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9a87ecf5-a551-42cb-94db-661c28dca3eb 0xc00376dc07 0xc00376dc08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:25 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:25 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-28 22:23:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:23:31.250: INFO: Pod "webserver-deployment-c7997dcc8-r78pr" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-r78pr webserver-deployment-c7997dcc8- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-c7997dcc8-r78pr 8a096620-eecf-4ecc-a67c-8b13abf22f13 19920319 0 2020-05-28 22:23:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9a87ecf5-a551-42cb-94db-661c28dca3eb 0xc00376dd87 0xc00376dd88}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-28 22:23:26 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:23:31.250: INFO: Pod "webserver-deployment-c7997dcc8-xz8cb" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xz8cb webserver-deployment-c7997dcc8- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-c7997dcc8-xz8cb f7b69e2d-9581-408f-a403-9db92b8cde8f 19920320 0 2020-05-28 22:23:26 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9a87ecf5-a551-42cb-94db-661c28dca3eb 0xc00376df07 0xc00376df08}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:26 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:26 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-28 22:23:26 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 28 22:23:31.251: INFO: Pod "webserver-deployment-c7997dcc8-zkpq4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-zkpq4 webserver-deployment-c7997dcc8- deployment-6852 /api/v1/namespaces/deployment-6852/pods/webserver-deployment-c7997dcc8-zkpq4 f459a054-8ee7-4fe5-9587-2f4a6db621ea 19920411 0 2020-05-28 22:23:27 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 9a87ecf5-a551-42cb-94db-661c28dca3eb 0xc001f6c087 0xc001f6c088}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-2mfv8,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-2mfv8,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-2mfv8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-28 22:23:28 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-28 22:23:28 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},}
[AfterEach] [sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 28 22:23:31.251: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "deployment-6852" for this suite.
• [SLOW TEST:19.177 seconds]
[sig-apps] Deployment
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
deployment should support proportional scaling [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":277,"skipped":4508,"failed":0}
SS
------------------------------
[sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 28 22:23:32.120: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename sched-pred
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86
May 28 22:23:33.016: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready
May 28 22:23:33.248: INFO: Waiting for terminating namespaces to be deleted...
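------------------------------
The proportional-scaling spec above scales webserver-deployment while a rollout to the unpullable image webserver:404 is still in flight; that is why every webserver-deployment-c7997dcc8-* pod dumped above is Pending with its httpd container stuck in ContainerCreating. A minimal client-go sketch of the same mid-rollout scale operation, assuming a recent client-go, a kubeconfig at the default path, and the Deployment and namespace names from this run:

package main

import (
	"context"
	"fmt"
	"path/filepath"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/util/homedir"
)

func main() {
	// Assumption: kubeconfig at the default location, as the suite uses.
	config, err := clientcmd.BuildConfigFromFlags("", filepath.Join(homedir.HomeDir(), ".kube", "config"))
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)
	deployments := client.AppsV1().Deployments("deployment-6852")

	// Read the scale subresource, bump it, and write it back while the
	// rollout to webserver:404 is still in progress.
	scale, err := deployments.GetScale(context.TODO(), "webserver-deployment", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 30 // illustrative target, not the suite's exact number
	if _, err := deployments.UpdateScale(context.TODO(), "webserver-deployment", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("scaled webserver-deployment to", scale.Spec.Replicas, "replicas mid-rollout")
}

With maxSurge and maxUnavailable set, the deployment controller distributes the new replica count proportionally between the old and new ReplicaSets rather than draining either one, which is the behavior the spec asserts.
------------------------------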
May 28 22:23:33.251: INFO: Logging pods the kubelet thinks is on node jerma-worker before test
May 28 22:23:33.362: INFO: webserver-deployment-595b5b9587-lljgk from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.362: INFO: webserver-deployment-c7997dcc8-kk4t8 from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.362: INFO: webserver-deployment-c7997dcc8-8hqd4 from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.362: INFO: webserver-deployment-c7997dcc8-fjts6 from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.362: INFO: webserver-deployment-c7997dcc8-xz8cb from deployment-6852 started at 2020-05-28 22:23:26 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.362: INFO: webserver-deployment-595b5b9587-wfnsv from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container httpd ready: true, restart count 0
May 28 22:23:33.362: INFO: webserver-deployment-c7997dcc8-75nv5 from deployment-6852 started at 2020-05-28 22:23:25 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.362: INFO: webserver-deployment-595b5b9587-srnsb from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.362: INFO: webserver-deployment-595b5b9587-hcg4n from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.362: INFO: webserver-deployment-595b5b9587-8cqwq from deployment-6852 started at 2020-05-28 22:23:13 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container httpd ready: true, restart count 0
May 28 22:23:33.362: INFO: webserver-deployment-595b5b9587-2fm4b from deployment-6852 started at 2020-05-28 22:23:13 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container httpd ready: true, restart count 0
May 28 22:23:33.362: INFO: webserver-deployment-595b5b9587-rmb5z from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.362: INFO: webserver-deployment-c7997dcc8-mxbdz from deployment-6852 started at 2020-05-28 22:23:25 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.362: INFO: webserver-deployment-595b5b9587-fr8tj from deployment-6852 started at 2020-05-28 22:23:13 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container httpd ready: true, restart count 0
May 28 22:23:33.362: INFO: webserver-deployment-595b5b9587-rqrpm from deployment-6852 started at 2020-05-28 22:23:13 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container httpd ready: true, restart count 0
May 28 22:23:33.362: INFO: webserver-deployment-595b5b9587-p5kkz from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.362: INFO: webserver-deployment-c7997dcc8-zkpq4 from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.362: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container kindnet-cni ready: true, restart count 2
May 28 22:23:33.362: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.362: INFO: Container kube-proxy ready: true, restart count 0
May 28 22:23:33.362: INFO: Logging pods the kubelet thinks is on node jerma-worker2 before test
May 28 22:23:33.565: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.565: INFO: Container kube-hunter ready: false, restart count 0
May 28 22:23:33.565: INFO: pod-logs-websocket-fd6ebc6e-6af6-450c-bc54-496dd3d958a1 from pods-7781 started at 2020-05-28 22:23:04 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.565: INFO: Container main ready: true, restart count 0
May 28 22:23:33.565: INFO: webserver-deployment-595b5b9587-8lvxh from deployment-6852 started at 2020-05-28 22:23:13 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.565: INFO: Container httpd ready: true, restart count 0
May 28 22:23:33.565: INFO: webserver-deployment-595b5b9587-cqf5p from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.565: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.565: INFO: webserver-deployment-595b5b9587-62h64 from deployment-6852 started at 2020-05-28 22:23:13 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.566: INFO: Container httpd ready: true, restart count 0
May 28 22:23:33.566: INFO: webserver-deployment-c7997dcc8-r78pr from deployment-6852 started at 2020-05-28 22:23:26 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.566: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.566: INFO: webserver-deployment-c7997dcc8-fv5bm from deployment-6852 started at 2020-05-28 22:23:25 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.566: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.566: INFO: webserver-deployment-c7997dcc8-jzbqg from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.566: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.566: INFO: webserver-deployment-c7997dcc8-fkpjm from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.566: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.566: INFO: webserver-deployment-595b5b9587-6sqtv from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.566: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.566: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.566: INFO: Container kindnet-cni ready: true, restart count 2
May 28 22:23:33.566: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.566: INFO: Container kube-bench ready: false, restart count 0
May 28 22:23:33.566: INFO: webserver-deployment-595b5b9587-hqqxc from deployment-6852 started at 2020-05-28 22:23:13 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.566: INFO: Container httpd ready: true, restart count 0
May 28 22:23:33.566: INFO: webserver-deployment-595b5b9587-89wpx from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.566: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.566: INFO: webserver-deployment-c7997dcc8-kbkhh from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.566: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.566: INFO: webserver-deployment-595b5b9587-bvzfj from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.566: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.566: INFO: webserver-deployment-595b5b9587-jsfgg from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.566: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.566: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.566: INFO: Container kube-proxy ready: true, restart count 0
May 28 22:23:33.566: INFO: webserver-deployment-595b5b9587-mtm2m from deployment-6852 started at 2020-05-28 22:23:13 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.566: INFO: Container httpd ready: true, restart count 0
May 28 22:23:33.566: INFO: webserver-deployment-c7997dcc8-8jl5l from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.566: INFO: Container httpd ready: false, restart count 0
May 28 22:23:33.566: INFO: webserver-deployment-595b5b9587-zts5p from deployment-6852 started at 2020-05-28 22:23:28 +0000 UTC (1 container statuses recorded)
May 28 22:23:33.566: INFO: Container httpd ready: false, restart count 0
[It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Trying to launch a pod without a label to get a node which can launch it.
STEP: Explicitly delete pod here to free the resource it takes.
STEP: Trying to apply a random label on the found node.
STEP: verifying the node has the label kubernetes.io/e2e-2a0e15fe-98e5-47b8-82b7-d47702938bad 90
STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled
STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled
STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides
STEP: removing the label kubernetes.io/e2e-2a0e15fe-98e5-47b8-82b7-d47702938bad off the node jerma-worker2
STEP: verifying the node doesn't have the label kubernetes.io/e2e-2a0e15fe-98e5-47b8-82b7-d47702938bad
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 28 22:24:02.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "sched-pred-7185" for this suite.
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77
• [SLOW TEST:30.836 seconds]
[sig-scheduling] SchedulerPredicates [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40
validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":278,"skipped":4510,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
May 28 22:24:02.957: INFO: Running AfterSuite actions on all nodes
May 28 22:24:02.957: INFO: Running AfterSuite actions on node 1
May 28 22:24:02.957: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}
Ran 278 of 4842 Specs in 4448.146 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS
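------------------------------
The final spec passes because the scheduler's host-port predicate only reports a conflict when hostIP, protocol, and hostPort all collide: pod2 reuses port 54321 on a different hostIP, and pod3 reuses the same hostIP and port over UDP instead of TCP. A self-contained client-go sketch of the same three-pod layout follows; pod names, image, and namespace are illustrative rather than the suite's exact fixtures, and a recent client-go is assumed:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podWithHostPort builds a pod that binds hostPort 54321 with the given
// hostIP and protocol.
func podWithHostPort(name, hostIP string, proto corev1.Protocol) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			// The suite pins all three pods to one node with a random label;
			// the well-known hostname label gives the same effect here.
			NodeSelector: map[string]string{"kubernetes.io/hostname": "jerma-worker2"},
			Containers: []corev1.Container{{
				Name:  "agnhost",
				Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // placeholder image
				Ports: []corev1.ContainerPort{{
					ContainerPort: 8080,
					HostPort:      54321,
					HostIP:        hostIP,
					Protocol:      proto,
				}},
			}},
		},
	}
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	pods := kubernetes.NewForConfigOrDie(config).CoreV1().Pods("default")

	// Same host port three times: a different hostIP for pod2 and a
	// different protocol for pod3, so the scheduler sees no conflict.
	for _, p := range []*corev1.Pod{
		podWithHostPort("pod1", "127.0.0.1", corev1.ProtocolTCP),
		podWithHostPort("pod2", "127.0.0.2", corev1.ProtocolTCP),
		podWithHostPort("pod3", "127.0.0.2", corev1.ProtocolUDP),
	} {
		if _, err := pods.Create(context.TODO(), p, metav1.CreateOptions{}); err != nil {
			panic(err)
		}
		fmt.Println("created", p.Name)
	}
}

All three pods can schedule onto the same node; only a fourth pod repeating an already-used (hostIP, protocol, hostPort) triple would be rejected.
------------------------------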