I0526 21:09:24.811654 6 test_context.go:419] Tolerating taints "node-role.kubernetes.io/master" when considering if nodes are ready
I0526 21:09:24.811883 6 e2e.go:109] Starting e2e run "6162fb5d-4c59-4505-bec6-b543f3144a54" on Ginkgo node 1
{"msg":"Test Suite starting","total":278,"completed":0,"skipped":0,"failed":0}
Running Suite: Kubernetes e2e suite
===================================
Random Seed: 1590527363 - Will randomize all specs
Will run 278 of 4842 specs

May 26 21:09:24.878: INFO: >>> kubeConfig: /root/.kube/config
May 26 21:09:24.883: INFO: Waiting up to 30m0s for all (but 0) nodes to be schedulable
May 26 21:09:24.907: INFO: Waiting up to 10m0s for all pods (need at least 0) in namespace 'kube-system' to be running and ready
May 26 21:09:24.946: INFO: 12 / 12 pods in namespace 'kube-system' are running and ready (0 seconds elapsed)
May 26 21:09:24.946: INFO: expected 2 pod replicas in namespace 'kube-system', 2 are Running and Ready.
May 26 21:09:24.946: INFO: Waiting up to 5m0s for all daemonsets in namespace 'kube-system' to start
May 26 21:09:24.954: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kindnet' (0 seconds elapsed)
May 26 21:09:24.954: INFO: 3 / 3 pods ready in namespace 'kube-system' in daemonset 'kube-proxy' (0 seconds elapsed)
May 26 21:09:24.954: INFO: e2e test version: v1.17.4
May 26 21:09:24.955: INFO: kube-apiserver version: v1.17.2
May 26 21:09:24.955: INFO: >>> kubeConfig: /root/.kube/config
May 26 21:09:24.960: INFO: Cluster IP family: ipv4
SSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-cli] Kubectl client Kubectl rolling-update
  should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 26 21:09:24.961: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
May 26 21:09:25.057: INFO: No PodSecurityPolicies found; assuming PodSecurityPolicy is disabled.
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[BeforeEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1585
[It] should support rolling-update to same image [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: running the image docker.io/library/httpd:2.4.38-alpine
May 26 21:09:25.059: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-5728'
May 26 21:09:27.680: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n"
May 26 21:09:27.680: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n"
STEP: verifying the rc e2e-test-httpd-rc was created
May 26 21:09:27.688: INFO: Waiting for rc e2e-test-httpd-rc to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
STEP: rolling-update to same image controller
May 26 21:09:27.737: INFO: scanned /root for discovery docs:
May 26 21:09:27.737: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update e2e-test-httpd-rc --update-period=1s --image=docker.io/library/httpd:2.4.38-alpine --image-pull-policy=IfNotPresent --namespace=kubectl-5728'
May 26 21:09:43.662: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n"
May 26 21:09:43.662: INFO: stdout: "Created e2e-test-httpd-rc-173f8bd5f91db36bc9836265547bd5c1\nScaling up e2e-test-httpd-rc-173f8bd5f91db36bc9836265547bd5c1 from 0 to 1, scaling down e2e-test-httpd-rc from 1 to 0 (keep 1 pods available, don't exceed 2 pods)\nScaling e2e-test-httpd-rc-173f8bd5f91db36bc9836265547bd5c1 up to 1\nScaling e2e-test-httpd-rc down to 0\nUpdate succeeded. Deleting old controller: e2e-test-httpd-rc\nRenaming e2e-test-httpd-rc-173f8bd5f91db36bc9836265547bd5c1 to e2e-test-httpd-rc\nreplicationcontroller/e2e-test-httpd-rc rolling updated\n"
STEP: waiting for all containers in run=e2e-test-httpd-rc pods to come up.
May 26 21:09:43.662: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l run=e2e-test-httpd-rc --namespace=kubectl-5728'
May 26 21:09:43.762: INFO: stderr: ""
May 26 21:09:43.762: INFO: stdout: "e2e-test-httpd-rc-173f8bd5f91db36bc9836265547bd5c1-jcmm6 "
May 26 21:09:43.762: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-173f8bd5f91db36bc9836265547bd5c1-jcmm6 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "e2e-test-httpd-rc") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-5728'
May 26 21:09:43.848: INFO: stderr: ""
May 26 21:09:43.849: INFO: stdout: "true"
May 26 21:09:43.849: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods e2e-test-httpd-rc-173f8bd5f91db36bc9836265547bd5c1-jcmm6 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "e2e-test-httpd-rc"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-5728'
May 26 21:09:43.933: INFO: stderr: ""
May 26 21:09:43.933: INFO: stdout: "docker.io/library/httpd:2.4.38-alpine"
May 26 21:09:43.933: INFO: e2e-test-httpd-rc-173f8bd5f91db36bc9836265547bd5c1-jcmm6 is verified up and running
[AfterEach] Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1591
May 26 21:09:43.933: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-5728'
May 26 21:09:44.032: INFO: stderr: ""
May 26 21:09:44.032: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n"
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 21:09:44.032: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-5728" for this suite.
• [SLOW TEST:19.085 seconds]
[sig-cli] Kubectl client
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23
  Kubectl rolling-update
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1580
    should support rolling-update to same image [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-cli] Kubectl client Kubectl rolling-update should support rolling-update to same image [Conformance]","total":278,"completed":1,"skipped":23,"failed":0}
SSSSSSSSSSS
------------------------------
[sig-api-machinery] Secrets
  should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 26 21:09:44.045: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename secrets
STEP: Waiting for a default service account to be provisioned in namespace
[It] should fail to create secret due to empty secret key [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating projection with secret that has name secret-emptykey-test-4f7307af-f37f-4142-a533-25de1b2aca61
[AfterEach] [sig-api-machinery] Secrets
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 21:09:44.095: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "secrets-5860" for this suite.
•{"msg":"PASSED [sig-api-machinery] Secrets should fail to create secret due to empty secret key [Conformance]","total":278,"completed":2,"skipped":34,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:09:44.130: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:09:48.649: INFO: Waiting up to 5m0s for pod "client-envvars-509c162b-bb29-400a-8f2f-b8fc0ecff2c7" in namespace "pods-2418" to be "success or failure" May 26 21:09:48.663: INFO: Pod "client-envvars-509c162b-bb29-400a-8f2f-b8fc0ecff2c7": Phase="Pending", Reason="", readiness=false. Elapsed: 13.38551ms May 26 21:09:50.668: INFO: Pod "client-envvars-509c162b-bb29-400a-8f2f-b8fc0ecff2c7": Phase="Pending", Reason="", readiness=false. Elapsed: 2.018066562s May 26 21:09:52.675: INFO: Pod "client-envvars-509c162b-bb29-400a-8f2f-b8fc0ecff2c7": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02523276s STEP: Saw pod success May 26 21:09:52.675: INFO: Pod "client-envvars-509c162b-bb29-400a-8f2f-b8fc0ecff2c7" satisfied condition "success or failure" May 26 21:09:52.692: INFO: Trying to get logs from node jerma-worker2 pod client-envvars-509c162b-bb29-400a-8f2f-b8fc0ecff2c7 container env3cont: STEP: delete the pod May 26 21:09:52.878: INFO: Waiting for pod client-envvars-509c162b-bb29-400a-8f2f-b8fc0ecff2c7 to disappear May 26 21:09:53.075: INFO: Pod client-envvars-509c162b-bb29-400a-8f2f-b8fc0ecff2c7 no longer exists [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:09:53.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2418" for this suite. 
• [SLOW TEST:9.026 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should contain environment variables for services [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should contain environment variables for services [NodeConformance] [Conformance]","total":278,"completed":3,"skipped":55,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:09:53.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if v1 is in available api versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating api versions May 26 21:09:53.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config api-versions' May 26 21:09:53.442: INFO: stderr: "" May 26 21:09:53.442: INFO: stdout: "admissionregistration.k8s.io/v1\nadmissionregistration.k8s.io/v1beta1\napiextensions.k8s.io/v1\napiextensions.k8s.io/v1beta1\napiregistration.k8s.io/v1\napiregistration.k8s.io/v1beta1\napps/v1\nauthentication.k8s.io/v1\nauthentication.k8s.io/v1beta1\nauthorization.k8s.io/v1\nauthorization.k8s.io/v1beta1\nautoscaling/v1\nautoscaling/v2beta1\nautoscaling/v2beta2\nbatch/v1\nbatch/v1beta1\ncertificates.k8s.io/v1beta1\ncoordination.k8s.io/v1\ncoordination.k8s.io/v1beta1\ndiscovery.k8s.io/v1beta1\nevents.k8s.io/v1beta1\nextensions/v1beta1\nnetworking.k8s.io/v1\nnetworking.k8s.io/v1beta1\nnode.k8s.io/v1beta1\npolicy/v1beta1\nrbac.authorization.k8s.io/v1\nrbac.authorization.k8s.io/v1beta1\nscheduling.k8s.io/v1\nscheduling.k8s.io/v1beta1\nstorage.k8s.io/v1\nstorage.k8s.io/v1beta1\nv1\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:09:53.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2416" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl api-versions should check if v1 is in available api versions [Conformance]","total":278,"completed":4,"skipped":64,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:09:53.451: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy through a service and a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting an echo server on multiple ports STEP: creating replication controller proxy-service-6tqgw in namespace proxy-4212 I0526 21:09:53.559415 6 runners.go:189] Created replication controller with name: proxy-service-6tqgw, namespace: proxy-4212, replica count: 1 I0526 21:09:54.609860 6 runners.go:189] proxy-service-6tqgw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 21:09:55.610101 6 runners.go:189] proxy-service-6tqgw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 21:09:56.610380 6 runners.go:189] proxy-service-6tqgw Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 21:09:57.610594 6 runners.go:189] proxy-service-6tqgw Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 26 21:09:57.614: INFO: setup took 4.12410517s, starting test cases STEP: running 16 cases, 20 attempts per case, 320 total attempts May 26 21:09:57.621: INFO: (0) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 7.028667ms) May 26 21:09:57.621: INFO: (0) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 7.370192ms) May 26 21:09:57.622: INFO: (0) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 7.347625ms) May 26 21:09:57.622: INFO: (0) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname2/proxy/: bar (200; 7.387615ms) May 26 21:09:57.622: INFO: (0) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname1/proxy/: foo (200; 7.537514ms) May 26 21:09:57.622: INFO: (0) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname2/proxy/: bar (200; 7.67567ms) May 26 21:09:57.627: INFO: (0) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9/proxy/: test (200; 12.513837ms) May 26 21:09:57.627: INFO: (0) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname1/proxy/: foo (200; 12.769001ms) May 26 21:09:57.627: INFO: (0) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 12.780958ms) May 26 21:09:57.627: INFO: (0) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:1080/proxy/: ... (200; 12.86578ms) May 26 21:09:57.628: INFO: (0) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:1080/proxy/: test<... 
(200; 13.3559ms) May 26 21:09:57.635: INFO: (0) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: ... (200; 22.490901ms) May 26 21:09:57.686: INFO: (1) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 22.399154ms) May 26 21:09:57.686: INFO: (1) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9/proxy/: test (200; 22.645265ms) May 26 21:09:57.687: INFO: (1) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:460/proxy/: tls baz (200; 23.447507ms) May 26 21:09:57.687: INFO: (1) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 23.314822ms) May 26 21:09:57.687: INFO: (1) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 23.373485ms) May 26 21:09:57.687: INFO: (1) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:462/proxy/: tls qux (200; 23.608514ms) May 26 21:09:57.687: INFO: (1) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:1080/proxy/: test<... (200; 23.698792ms) May 26 21:09:57.687: INFO: (1) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 23.961556ms) May 26 21:09:57.689: INFO: (1) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname1/proxy/: foo (200; 25.849509ms) May 26 21:09:57.689: INFO: (1) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname2/proxy/: bar (200; 26.046147ms) May 26 21:09:57.689: INFO: (1) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname1/proxy/: tls baz (200; 26.190195ms) May 26 21:09:57.689: INFO: (1) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname1/proxy/: foo (200; 26.009874ms) May 26 21:09:57.689: INFO: (1) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: test<... (200; 5.30808ms) May 26 21:09:57.695: INFO: (2) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 5.367176ms) May 26 21:09:57.696: INFO: (2) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: test (200; 6.208074ms) May 26 21:09:57.696: INFO: (2) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname2/proxy/: bar (200; 6.123282ms) May 26 21:09:57.696: INFO: (2) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 6.316075ms) May 26 21:09:57.696: INFO: (2) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:462/proxy/: tls qux (200; 6.185876ms) May 26 21:09:57.696: INFO: (2) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname1/proxy/: foo (200; 6.538645ms) May 26 21:09:57.697: INFO: (2) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname2/proxy/: bar (200; 7.228635ms) May 26 21:09:57.697: INFO: (2) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname1/proxy/: tls baz (200; 7.397784ms) May 26 21:09:57.697: INFO: (2) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname1/proxy/: foo (200; 7.495372ms) May 26 21:09:57.697: INFO: (2) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname2/proxy/: tls qux (200; 7.543168ms) May 26 21:09:57.717: INFO: (2) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:1080/proxy/: ... 
(200; 27.4114ms) May 26 21:09:57.724: INFO: (3) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname2/proxy/: bar (200; 6.489539ms) May 26 21:09:57.724: INFO: (3) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 6.596595ms) May 26 21:09:57.724: INFO: (3) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9/proxy/: test (200; 6.669919ms) May 26 21:09:57.724: INFO: (3) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname1/proxy/: foo (200; 6.732086ms) May 26 21:09:57.724: INFO: (3) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname1/proxy/: foo (200; 6.707814ms) May 26 21:09:57.724: INFO: (3) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: ... (200; 6.942379ms) May 26 21:09:57.725: INFO: (3) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:462/proxy/: tls qux (200; 8.053068ms) May 26 21:09:57.725: INFO: (3) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:1080/proxy/: test<... (200; 8.14547ms) May 26 21:09:57.726: INFO: (3) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname1/proxy/: tls baz (200; 8.155607ms) May 26 21:09:57.726: INFO: (3) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname2/proxy/: tls qux (200; 8.20684ms) May 26 21:09:57.726: INFO: (3) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:460/proxy/: tls baz (200; 8.225516ms) May 26 21:09:57.726: INFO: (3) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 8.820744ms) May 26 21:09:57.726: INFO: (3) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 8.787049ms) May 26 21:09:57.726: INFO: (3) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 8.724449ms) May 26 21:09:57.726: INFO: (3) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname2/proxy/: bar (200; 8.852712ms) May 26 21:09:57.729: INFO: (4) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9/proxy/: test (200; 2.963852ms) May 26 21:09:57.730: INFO: (4) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 3.802071ms) May 26 21:09:57.730: INFO: (4) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:460/proxy/: tls baz (200; 3.933466ms) May 26 21:09:57.731: INFO: (4) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 4.364133ms) May 26 21:09:57.731: INFO: (4) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: ... (200; 4.369977ms) May 26 21:09:57.731: INFO: (4) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 4.302718ms) May 26 21:09:57.731: INFO: (4) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:1080/proxy/: test<... 
(200; 4.645613ms) May 26 21:09:57.731: INFO: (4) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 4.700505ms) May 26 21:09:57.731: INFO: (4) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:462/proxy/: tls qux (200; 4.94318ms) May 26 21:09:57.732: INFO: (4) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname1/proxy/: foo (200; 5.473966ms) May 26 21:09:57.732: INFO: (4) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname2/proxy/: bar (200; 5.698841ms) May 26 21:09:57.732: INFO: (4) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname1/proxy/: foo (200; 5.689455ms) May 26 21:09:57.732: INFO: (4) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname2/proxy/: bar (200; 5.730114ms) May 26 21:09:57.732: INFO: (4) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname2/proxy/: tls qux (200; 5.7322ms) May 26 21:09:57.732: INFO: (4) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname1/proxy/: tls baz (200; 6.034959ms) May 26 21:09:57.736: INFO: (5) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 3.675241ms) May 26 21:09:57.736: INFO: (5) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 4.000079ms) May 26 21:09:57.737: INFO: (5) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:462/proxy/: tls qux (200; 4.455817ms) May 26 21:09:57.737: INFO: (5) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:460/proxy/: tls baz (200; 4.558587ms) May 26 21:09:57.737: INFO: (5) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 4.575526ms) May 26 21:09:57.737: INFO: (5) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9/proxy/: test (200; 4.648056ms) May 26 21:09:57.737: INFO: (5) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:1080/proxy/: ... (200; 4.636992ms) May 26 21:09:57.737: INFO: (5) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:1080/proxy/: test<... (200; 4.65085ms) May 26 21:09:57.737: INFO: (5) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 4.732636ms) May 26 21:09:57.737: INFO: (5) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: test (200; 13.849363ms) May 26 21:09:57.752: INFO: (6) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname2/proxy/: tls qux (200; 13.997539ms) May 26 21:09:57.753: INFO: (6) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 15.026606ms) May 26 21:09:57.754: INFO: (6) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:1080/proxy/: ... (200; 15.411668ms) May 26 21:09:57.754: INFO: (6) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 15.492157ms) May 26 21:09:57.754: INFO: (6) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:1080/proxy/: test<... (200; 15.456467ms) May 26 21:09:57.761: INFO: (6) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: test<... 
(200; 7.099817ms) May 26 21:09:57.769: INFO: (7) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 7.595884ms) May 26 21:09:57.772: INFO: (7) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 9.768671ms) May 26 21:09:57.772: INFO: (7) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: test (200; 10.291501ms) May 26 21:09:57.772: INFO: (7) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 10.173881ms) May 26 21:09:57.772: INFO: (7) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:460/proxy/: tls baz (200; 10.031797ms) May 26 21:09:57.772: INFO: (7) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 10.339821ms) May 26 21:09:57.772: INFO: (7) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:462/proxy/: tls qux (200; 9.874201ms) May 26 21:09:57.772: INFO: (7) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:1080/proxy/: ... (200; 10.247197ms) May 26 21:09:57.776: INFO: (7) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname1/proxy/: foo (200; 14.632172ms) May 26 21:09:57.776: INFO: (7) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname1/proxy/: tls baz (200; 14.622941ms) May 26 21:09:57.776: INFO: (7) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname2/proxy/: tls qux (200; 14.30474ms) May 26 21:09:57.776: INFO: (7) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname1/proxy/: foo (200; 14.165484ms) May 26 21:09:57.776: INFO: (7) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname2/proxy/: bar (200; 14.411084ms) May 26 21:09:57.777: INFO: (7) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname2/proxy/: bar (200; 14.523213ms) May 26 21:09:57.781: INFO: (8) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:462/proxy/: tls qux (200; 4.282404ms) May 26 21:09:57.782: INFO: (8) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9/proxy/: test (200; 4.989158ms) May 26 21:09:57.782: INFO: (8) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:1080/proxy/: ... (200; 4.864907ms) May 26 21:09:57.782: INFO: (8) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname2/proxy/: bar (200; 4.927525ms) May 26 21:09:57.782: INFO: (8) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 4.934015ms) May 26 21:09:57.782: INFO: (8) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: test<... 
(200; 5.045305ms) May 26 21:09:57.782: INFO: (8) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname2/proxy/: bar (200; 5.359978ms) May 26 21:09:57.782: INFO: (8) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 5.473165ms) May 26 21:09:57.782: INFO: (8) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname1/proxy/: foo (200; 5.453977ms) May 26 21:09:57.782: INFO: (8) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 5.520609ms) May 26 21:09:57.788: INFO: (9) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 5.952595ms) May 26 21:09:57.788: INFO: (9) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 6.008242ms) May 26 21:09:57.789: INFO: (9) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:1080/proxy/: test<... (200; 6.351539ms) May 26 21:09:57.789: INFO: (9) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:460/proxy/: tls baz (200; 6.594775ms) May 26 21:09:57.790: INFO: (9) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9/proxy/: test (200; 7.563703ms) May 26 21:09:57.790: INFO: (9) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 7.813817ms) May 26 21:09:57.790: INFO: (9) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:462/proxy/: tls qux (200; 7.827897ms) May 26 21:09:57.790: INFO: (9) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: ... (200; 8.176956ms) May 26 21:09:57.791: INFO: (9) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname1/proxy/: foo (200; 8.291826ms) May 26 21:09:57.791: INFO: (9) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname2/proxy/: bar (200; 8.349498ms) May 26 21:09:57.791: INFO: (9) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname1/proxy/: tls baz (200; 8.370748ms) May 26 21:09:57.791: INFO: (9) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname2/proxy/: tls qux (200; 8.448204ms) May 26 21:09:57.791: INFO: (9) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname1/proxy/: foo (200; 8.473696ms) May 26 21:09:57.794: INFO: (10) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:1080/proxy/: ... (200; 3.172119ms) May 26 21:09:57.794: INFO: (10) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:462/proxy/: tls qux (200; 3.534326ms) May 26 21:09:57.795: INFO: (10) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 3.352904ms) May 26 21:09:57.795: INFO: (10) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9/proxy/: test (200; 4.39819ms) May 26 21:09:57.795: INFO: (10) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 4.433026ms) May 26 21:09:57.795: INFO: (10) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 4.453707ms) May 26 21:09:57.796: INFO: (10) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 4.581764ms) May 26 21:09:57.796: INFO: (10) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: test<... 
(200; 4.54901ms) May 26 21:09:57.796: INFO: (10) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:460/proxy/: tls baz (200; 4.550568ms) May 26 21:09:57.796: INFO: (10) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname2/proxy/: bar (200; 4.743845ms) May 26 21:09:57.796: INFO: (10) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname1/proxy/: tls baz (200; 4.710699ms) May 26 21:09:57.796: INFO: (10) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname1/proxy/: foo (200; 4.762944ms) May 26 21:09:57.796: INFO: (10) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname2/proxy/: tls qux (200; 4.856903ms) May 26 21:09:57.796: INFO: (10) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname1/proxy/: foo (200; 4.970679ms) May 26 21:09:57.796: INFO: (10) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname2/proxy/: bar (200; 4.933071ms) May 26 21:09:57.800: INFO: (11) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname2/proxy/: bar (200; 3.758693ms) May 26 21:09:57.800: INFO: (11) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:1080/proxy/: ... (200; 4.289411ms) May 26 21:09:57.800: INFO: (11) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname2/proxy/: bar (200; 4.290408ms) May 26 21:09:57.801: INFO: (11) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 4.509211ms) May 26 21:09:57.801: INFO: (11) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname1/proxy/: foo (200; 4.317968ms) May 26 21:09:57.801: INFO: (11) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9/proxy/: test (200; 4.406421ms) May 26 21:09:57.801: INFO: (11) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname1/proxy/: tls baz (200; 4.544218ms) May 26 21:09:57.801: INFO: (11) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 4.507312ms) May 26 21:09:57.801: INFO: (11) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname1/proxy/: foo (200; 4.709809ms) May 26 21:09:57.801: INFO: (11) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 4.826644ms) May 26 21:09:57.801: INFO: (11) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: test<... (200; 4.847008ms) May 26 21:09:57.801: INFO: (11) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname2/proxy/: tls qux (200; 4.80767ms) May 26 21:09:57.801: INFO: (11) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:462/proxy/: tls qux (200; 4.847841ms) May 26 21:09:57.801: INFO: (11) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:460/proxy/: tls baz (200; 4.997916ms) May 26 21:09:57.801: INFO: (11) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 4.967023ms) May 26 21:09:57.803: INFO: (12) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:1080/proxy/: test<... 
(200; 1.855374ms) May 26 21:09:57.804: INFO: (12) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:462/proxy/: tls qux (200; 2.560705ms) May 26 21:09:57.804: INFO: (12) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: test (200; 3.74571ms) May 26 21:09:57.805: INFO: (12) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname1/proxy/: foo (200; 3.686479ms) May 26 21:09:57.805: INFO: (12) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname2/proxy/: tls qux (200; 3.982495ms) May 26 21:09:57.805: INFO: (12) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 3.974775ms) May 26 21:09:57.805: INFO: (12) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname2/proxy/: bar (200; 4.136034ms) May 26 21:09:57.805: INFO: (12) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname1/proxy/: tls baz (200; 4.162001ms) May 26 21:09:57.806: INFO: (12) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:1080/proxy/: ... (200; 4.660503ms) May 26 21:09:57.806: INFO: (12) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname2/proxy/: bar (200; 4.666313ms) May 26 21:09:57.808: INFO: (13) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 2.20431ms) May 26 21:09:57.809: INFO: (13) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:460/proxy/: tls baz (200; 3.119797ms) May 26 21:09:57.809: INFO: (13) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:1080/proxy/: test<... (200; 3.215112ms) May 26 21:09:57.809: INFO: (13) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 3.154839ms) May 26 21:09:57.809: INFO: (13) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9/proxy/: test (200; 3.21955ms) May 26 21:09:57.809: INFO: (13) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:1080/proxy/: ... (200; 3.226401ms) May 26 21:09:57.810: INFO: (13) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname1/proxy/: tls baz (200; 3.626274ms) May 26 21:09:57.810: INFO: (13) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname2/proxy/: bar (200; 3.949999ms) May 26 21:09:57.810: INFO: (13) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname2/proxy/: tls qux (200; 4.042987ms) May 26 21:09:57.810: INFO: (13) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname1/proxy/: foo (200; 3.985974ms) May 26 21:09:57.810: INFO: (13) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 4.035206ms) May 26 21:09:57.810: INFO: (13) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: ... (200; 1.947083ms) May 26 21:09:57.814: INFO: (14) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:460/proxy/: tls baz (200; 3.358763ms) May 26 21:09:57.814: INFO: (14) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: test<... 
(200; 3.431772ms) May 26 21:09:57.814: INFO: (14) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 3.52105ms) May 26 21:09:57.814: INFO: (14) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9/proxy/: test (200; 3.705657ms) May 26 21:09:57.814: INFO: (14) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 3.807718ms) May 26 21:09:57.814: INFO: (14) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:462/proxy/: tls qux (200; 3.849143ms) May 26 21:09:57.814: INFO: (14) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 4.007975ms) May 26 21:09:57.814: INFO: (14) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 4.012153ms) May 26 21:09:57.815: INFO: (14) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname1/proxy/: tls baz (200; 4.290745ms) May 26 21:09:57.815: INFO: (14) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname2/proxy/: bar (200; 4.317337ms) May 26 21:09:57.815: INFO: (14) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname1/proxy/: foo (200; 4.514203ms) May 26 21:09:57.815: INFO: (14) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname2/proxy/: bar (200; 4.419225ms) May 26 21:09:57.815: INFO: (14) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname1/proxy/: foo (200; 4.410186ms) May 26 21:09:57.815: INFO: (14) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname2/proxy/: tls qux (200; 4.647101ms) May 26 21:09:57.820: INFO: (15) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:1080/proxy/: test<... (200; 5.092122ms) May 26 21:09:57.820: INFO: (15) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 5.19398ms) May 26 21:09:57.820: INFO: (15) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 5.21645ms) May 26 21:09:57.820: INFO: (15) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:1080/proxy/: ... (200; 5.207434ms) May 26 21:09:57.820: INFO: (15) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:462/proxy/: tls qux (200; 5.05438ms) May 26 21:09:57.820: INFO: (15) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 5.148934ms) May 26 21:09:57.820: INFO: (15) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9/proxy/: test (200; 5.270819ms) May 26 21:09:57.820: INFO: (15) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:460/proxy/: tls baz (200; 5.17592ms) May 26 21:09:57.820: INFO: (15) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: ... (200; 2.79947ms) May 26 21:09:57.825: INFO: (16) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname2/proxy/: tls qux (200; 3.022721ms) May 26 21:09:57.825: INFO: (16) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 3.23268ms) May 26 21:09:57.825: INFO: (16) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9/proxy/: test (200; 3.22318ms) May 26 21:09:57.825: INFO: (16) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:1080/proxy/: test<... 
(200; 3.254896ms) May 26 21:09:57.825: INFO: (16) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 3.42384ms) May 26 21:09:57.825: INFO: (16) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 3.440786ms) May 26 21:09:57.825: INFO: (16) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 3.427085ms) May 26 21:09:57.825: INFO: (16) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:462/proxy/: tls qux (200; 3.584027ms) May 26 21:09:57.825: INFO: (16) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:460/proxy/: tls baz (200; 3.653991ms) May 26 21:09:57.826: INFO: (16) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname2/proxy/: bar (200; 4.660794ms) May 26 21:09:57.826: INFO: (16) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname1/proxy/: foo (200; 4.700104ms) May 26 21:09:57.827: INFO: (16) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname1/proxy/: tls baz (200; 4.781408ms) May 26 21:09:57.827: INFO: (16) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname2/proxy/: bar (200; 4.728866ms) May 26 21:09:57.827: INFO: (16) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname1/proxy/: foo (200; 4.775139ms) May 26 21:09:57.829: INFO: (17) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 2.40113ms) May 26 21:09:57.830: INFO: (17) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:1080/proxy/: ... (200; 3.270764ms) May 26 21:09:57.830: INFO: (17) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9/proxy/: test (200; 3.318671ms) May 26 21:09:57.830: INFO: (17) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 3.289342ms) May 26 21:09:57.830: INFO: (17) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:462/proxy/: tls qux (200; 3.117661ms) May 26 21:09:57.830: INFO: (17) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 3.474934ms) May 26 21:09:57.830: INFO: (17) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 3.584703ms) May 26 21:09:57.830: INFO: (17) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:1080/proxy/: test<... (200; 3.427505ms) May 26 21:09:57.830: INFO: (17) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:460/proxy/: tls baz (200; 3.641556ms) May 26 21:09:57.830: INFO: (17) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: test<... (200; 4.016303ms) May 26 21:09:57.836: INFO: (18) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9/proxy/: test (200; 4.082845ms) May 26 21:09:57.836: INFO: (18) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname2/proxy/: tls qux (200; 4.037756ms) May 26 21:09:57.836: INFO: (18) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname1/proxy/: foo (200; 4.075508ms) May 26 21:09:57.836: INFO: (18) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname2/proxy/: bar (200; 4.123965ms) May 26 21:09:57.836: INFO: (18) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname1/proxy/: tls baz (200; 4.11329ms) May 26 21:09:57.836: INFO: (18) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:1080/proxy/: ... 
(200; 4.142856ms) May 26 21:09:57.836: INFO: (18) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname1/proxy/: foo (200; 4.070088ms) May 26 21:09:57.836: INFO: (18) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:462/proxy/: tls qux (200; 4.190445ms) May 26 21:09:57.836: INFO: (18) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:460/proxy/: tls baz (200; 4.088335ms) May 26 21:09:57.838: INFO: (19) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 2.589128ms) May 26 21:09:57.839: INFO: (19) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:460/proxy/: tls baz (200; 2.469617ms) May 26 21:09:57.839: INFO: (19) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 2.552687ms) May 26 21:09:57.839: INFO: (19) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:1080/proxy/: ... (200; 2.527608ms) May 26 21:09:57.840: INFO: (19) /api/v1/namespaces/proxy-4212/pods/http:proxy-service-6tqgw-2hct9:162/proxy/: bar (200; 4.42734ms) May 26 21:09:57.840: INFO: (19) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:443/proxy/: test<... (200; 4.413478ms) May 26 21:09:57.840: INFO: (19) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname1/proxy/: foo (200; 4.490817ms) May 26 21:09:57.840: INFO: (19) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname1/proxy/: foo (200; 4.485317ms) May 26 21:09:57.840: INFO: (19) /api/v1/namespaces/proxy-4212/services/proxy-service-6tqgw:portname2/proxy/: bar (200; 4.399534ms) May 26 21:09:57.840: INFO: (19) /api/v1/namespaces/proxy-4212/services/http:proxy-service-6tqgw:portname2/proxy/: bar (200; 4.41711ms) May 26 21:09:57.840: INFO: (19) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname1/proxy/: tls baz (200; 4.516592ms) May 26 21:09:57.840: INFO: (19) /api/v1/namespaces/proxy-4212/pods/https:proxy-service-6tqgw-2hct9:462/proxy/: tls qux (200; 4.491209ms) May 26 21:09:57.840: INFO: (19) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9/proxy/: test (200; 4.453673ms) May 26 21:09:57.840: INFO: (19) /api/v1/namespaces/proxy-4212/services/https:proxy-service-6tqgw:tlsportname2/proxy/: tls qux (200; 4.514307ms) May 26 21:09:57.840: INFO: (19) /api/v1/namespaces/proxy-4212/pods/proxy-service-6tqgw-2hct9:160/proxy/: foo (200; 4.577231ms) STEP: deleting ReplicationController proxy-service-6tqgw in namespace proxy-4212, will wait for the garbage collector to delete the pods May 26 21:09:57.900: INFO: Deleting ReplicationController proxy-service-6tqgw took: 6.957242ms May 26 21:09:58.200: INFO: Terminating ReplicationController proxy-service-6tqgw pods took: 300.265456ms [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:10:09.300: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-4212" for this suite. 
• [SLOW TEST:15.861 seconds]
[sig-network] Proxy
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  version v1
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/proxy.go:57
    should proxy through a service and a pod [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Proxy version v1 should proxy through a service and a pod [Conformance]","total":278,"completed":5,"skipped":110,"failed":0}
[k8s.io] Kubelet when scheduling a busybox Pod with hostAliases
  should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 26 21:10:09.312: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubelet-test
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37
[It] should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Kubelet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 21:10:13.408: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubelet-test-4145" for this suite.
•{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox Pod with hostAliases should write entries to /etc/hosts [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":6,"skipped":110,"failed":0}
SSSSSSS
------------------------------
[sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition
  creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 26 21:10:13.416: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename custom-resource-definition
STEP: Waiting for a default service account to be provisioned in namespace
[It] creating/deleting custom resource definition objects works [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 26 21:10:13.497: INFO: >>> kubeConfig: /root/.kube/config
[AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 21:10:14.586: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "custom-resource-definition-6611" for this suite.
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition creating/deleting custom resource definition objects works [Conformance]","total":278,"completed":7,"skipped":117,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:10:14.599: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1275 STEP: creating the pod May 26 21:10:14.695: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-5640' May 26 21:10:15.070: INFO: stderr: "" May 26 21:10:15.070: INFO: stdout: "pod/pause created\n" May 26 21:10:15.070: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [pause] May 26 21:10:15.070: INFO: Waiting up to 5m0s for pod "pause" in namespace "kubectl-5640" to be "running and ready" May 26 21:10:15.076: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 6.462278ms May 26 21:10:17.080: INFO: Pod "pause": Phase="Pending", Reason="", readiness=false. Elapsed: 2.010423616s May 26 21:10:19.085: INFO: Pod "pause": Phase="Running", Reason="", readiness=true. Elapsed: 4.015195715s May 26 21:10:19.085: INFO: Pod "pause" satisfied condition "running and ready" May 26 21:10:19.085: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [pause] [It] should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: adding the label testing-label with value testing-label-value to a pod May 26 21:10:19.085: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label=testing-label-value --namespace=kubectl-5640' May 26 21:10:19.193: INFO: stderr: "" May 26 21:10:19.193: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod has the label testing-label with the value testing-label-value May 26 21:10:19.193: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5640' May 26 21:10:19.292: INFO: stderr: "" May 26 21:10:19.292: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s testing-label-value\n" STEP: removing the label testing-label of a pod May 26 21:10:19.293: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config label pods pause testing-label- --namespace=kubectl-5640' May 26 21:10:19.443: INFO: stderr: "" May 26 21:10:19.443: INFO: stdout: "pod/pause labeled\n" STEP: verifying the pod doesn't have the label testing-label May 26 21:10:19.444: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod pause -L testing-label --namespace=kubectl-5640' May 26 21:10:19.560: INFO: stderr: "" May 26 21:10:19.560: INFO: stdout: "NAME READY STATUS RESTARTS AGE TESTING-LABEL\npause 1/1 Running 0 4s \n" [AfterEach] Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1282 STEP: using delete to clean up resources May 26 21:10:19.561: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-5640' May 26 21:10:19.725: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 21:10:19.725: INFO: stdout: "pod \"pause\" force deleted\n" May 26 21:10:19.725: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=pause --no-headers --namespace=kubectl-5640' May 26 21:10:19.967: INFO: stderr: "No resources found in kubectl-5640 namespace.\n" May 26 21:10:19.967: INFO: stdout: "" May 26 21:10:19.967: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=pause --namespace=kubectl-5640 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 26 21:10:20.130: INFO: stderr: "" May 26 21:10:20.130: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:10:20.130: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-5640" for this suite. 
• [SLOW TEST:5.538 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl label /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1272 should update the label on a resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl label should update the label on a resource [Conformance]","total":278,"completed":8,"skipped":182,"failed":0} SSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:10:20.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-cf1212a4-6cab-477a-bdb2-719d1c7db306 STEP: Creating a pod to test consume secrets May 26 21:10:20.265: INFO: Waiting up to 5m0s for pod "pod-secrets-a6290404-6f02-40e3-a2e6-46d1ef937ea5" in namespace "secrets-9256" to be "success or failure" May 26 21:10:20.297: INFO: Pod "pod-secrets-a6290404-6f02-40e3-a2e6-46d1ef937ea5": Phase="Pending", Reason="", readiness=false. Elapsed: 32.343164ms May 26 21:10:23.703: INFO: Pod "pod-secrets-a6290404-6f02-40e3-a2e6-46d1ef937ea5": Phase="Pending", Reason="", readiness=false. Elapsed: 3.438642366s May 26 21:10:25.708: INFO: Pod "pod-secrets-a6290404-6f02-40e3-a2e6-46d1ef937ea5": Phase="Running", Reason="", readiness=true. Elapsed: 5.44310573s May 26 21:10:27.721: INFO: Pod "pod-secrets-a6290404-6f02-40e3-a2e6-46d1ef937ea5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 7.456208199s STEP: Saw pod success May 26 21:10:27.721: INFO: Pod "pod-secrets-a6290404-6f02-40e3-a2e6-46d1ef937ea5" satisfied condition "success or failure" May 26 21:10:27.723: INFO: Trying to get logs from node jerma-worker pod pod-secrets-a6290404-6f02-40e3-a2e6-46d1ef937ea5 container secret-volume-test: STEP: delete the pod May 26 21:10:27.755: INFO: Waiting for pod pod-secrets-a6290404-6f02-40e3-a2e6-46d1ef937ea5 to disappear May 26 21:10:27.772: INFO: Pod pod-secrets-a6290404-6f02-40e3-a2e6-46d1ef937ea5 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:10:27.772: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9256" for this suite. 
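The secret-volume case above combines three knobs: a non-root runAsUser, an fsGroup on the pod, and a defaultMode on the volume. A minimal sketch of an equivalent pod; the secret name, UID/GID and mode below are illustrative, not the generated values from this run:

  kubectl create secret generic test-secret --from-literal=data-1=value-1
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: secret-mode-demo
  spec:
    securityContext:
      runAsUser: 1000   # non-root
      fsGroup: 2000     # group ownership applied to the mounted files
    restartPolicy: Never
    containers:
    - name: secret-volume-test
      image: busybox
      command: ["sh", "-c", "ls -ln /etc/secret-volume"]
      volumeMounts:
      - name: secret-volume
        mountPath: /etc/secret-volume
    volumes:
    - name: secret-volume
      secret:
        secretName: test-secret
        defaultMode: 0440   # file mode for the projected keys
  EOF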
• [SLOW TEST:7.642 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":9,"skipped":188,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:10:27.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's command [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's command May 26 21:10:27.942: INFO: Waiting up to 5m0s for pod "var-expansion-2bc73615-a0aa-4c5f-ae7f-1014d9011b99" in namespace "var-expansion-933" to be "success or failure" May 26 21:10:27.951: INFO: Pod "var-expansion-2bc73615-a0aa-4c5f-ae7f-1014d9011b99": Phase="Pending", Reason="", readiness=false. Elapsed: 8.663348ms May 26 21:10:29.956: INFO: Pod "var-expansion-2bc73615-a0aa-4c5f-ae7f-1014d9011b99": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013201278s May 26 21:10:31.960: INFO: Pod "var-expansion-2bc73615-a0aa-4c5f-ae7f-1014d9011b99": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.017854843s STEP: Saw pod success May 26 21:10:31.960: INFO: Pod "var-expansion-2bc73615-a0aa-4c5f-ae7f-1014d9011b99" satisfied condition "success or failure" May 26 21:10:31.964: INFO: Trying to get logs from node jerma-worker pod var-expansion-2bc73615-a0aa-4c5f-ae7f-1014d9011b99 container dapi-container: STEP: delete the pod May 26 21:10:32.007: INFO: Waiting for pod var-expansion-2bc73615-a0aa-4c5f-ae7f-1014d9011b99 to disappear May 26 21:10:32.062: INFO: Pod var-expansion-2bc73615-a0aa-4c5f-ae7f-1014d9011b99 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:10:32.062: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-933" for this suite. 
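Variable expansion in a container's command, as verified above, uses the $(VAR) syntax that the kubelet resolves against the container's environment before the process starts. A minimal sketch (names and values are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: var-expansion-demo
  spec:
    restartPolicy: Never
    containers:
    - name: dapi-container
      image: busybox
      env:
      - name: MESSAGE
        value: "hello world"
      # $(MESSAGE) is substituted by Kubernetes, not by the shell
      command: ["sh", "-c", "echo test-value: $(MESSAGE)"]
  EOF
  kubectl logs var-expansion-demo   # prints: test-value: hello world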
•{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's command [NodeConformance] [Conformance]","total":278,"completed":10,"skipped":189,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:10:32.071: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 26 21:10:36.826: INFO: Successfully updated pod "annotationupdatec5b7373c-0d5f-468b-b073-704a01f50b44" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:10:40.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6077" for this suite. • [SLOW TEST:8.853 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":11,"skipped":195,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:10:40.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:10:40.985: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 26 21:10:43.908: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9044 create -f -' May 26 21:10:49.307: INFO: stderr: "" May 26 21:10:49.307: INFO: stdout: "e2e-test-crd-publish-openapi-8465-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 26 21:10:49.307: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9044 delete e2e-test-crd-publish-openapi-8465-crds test-cr' May 26 21:10:49.521: INFO: stderr: "" May 26 21:10:49.521: INFO: stdout: "e2e-test-crd-publish-openapi-8465-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" May 26 21:10:49.521: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9044 apply -f -' May 26 21:10:49.795: INFO: stderr: "" May 26 21:10:49.795: INFO: stdout: "e2e-test-crd-publish-openapi-8465-crd.crd-publish-openapi-test-unknown-at-root.example.com/test-cr created\n" May 26 21:10:49.795: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-9044 delete e2e-test-crd-publish-openapi-8465-crds test-cr' May 26 21:10:49.909: INFO: stderr: "" May 26 21:10:49.909: INFO: stdout: "e2e-test-crd-publish-openapi-8465-crd.crd-publish-openapi-test-unknown-at-root.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 26 21:10:49.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-8465-crds' May 26 21:10:50.164: INFO: stderr: "" May 26 21:10:50.164: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-8465-crd\nVERSION: crd-publish-openapi-test-unknown-at-root.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:10:53.622: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-9044" for this suite. • [SLOW TEST:12.707 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields at the schema root [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields at the schema root [Conformance]","total":278,"completed":12,"skipped":214,"failed":0} S ------------------------------ [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:10:53.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to restart watching from the last resource version observed by the previous watch [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps STEP: creating a new configmap STEP: modifying the configmap once STEP: closing the watch once it receives two notifications May 26 21:10:53.728: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6895 /api/v1/namespaces/watch-6895/configmaps/e2e-watch-test-watch-closed 
26ed440b-2f19-4e5e-902e-182b39f2be8f 19371267 0 2020-05-26 21:10:53 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 26 21:10:53.728: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6895 /api/v1/namespaces/watch-6895/configmaps/e2e-watch-test-watch-closed 26ed440b-2f19-4e5e-902e-182b39f2be8f 19371268 0 2020-05-26 21:10:53 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time, while the watch is closed STEP: creating a new watch on configmaps from the last resource version observed by the first watch STEP: deleting the configmap STEP: Expecting to observe notifications for all changes to the configmap since the first watch closed May 26 21:10:53.756: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6895 /api/v1/namespaces/watch-6895/configmaps/e2e-watch-test-watch-closed 26ed440b-2f19-4e5e-902e-182b39f2be8f 19371269 0 2020-05-26 21:10:53 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 26 21:10:53.756: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-watch-closed watch-6895 /api/v1/namespaces/watch-6895/configmaps/e2e-watch-test-watch-closed 26ed440b-2f19-4e5e-902e-182b39f2be8f 19371270 0 2020-05-26 21:10:53 +0000 UTC map[watch-this-configmap:watch-closed-and-restarted] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:10:53.757: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-6895" for this suite. •{"msg":"PASSED [sig-api-machinery] Watchers should be able to restart watching from the last resource version observed by the previous watch [Conformance]","total":278,"completed":13,"skipped":215,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:10:53.764: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:10:53.846: INFO: Waiting up to 5m0s for pod "busybox-readonly-false-60f5b7d6-6890-4f84-a7e3-bfb213949999" in namespace "security-context-test-77" to be "success or failure" May 26 21:10:53.850: INFO: Pod "busybox-readonly-false-60f5b7d6-6890-4f84-a7e3-bfb213949999": Phase="Pending", Reason="", readiness=false. 
Elapsed: 4.005113ms May 26 21:10:55.913: INFO: Pod "busybox-readonly-false-60f5b7d6-6890-4f84-a7e3-bfb213949999": Phase="Pending", Reason="", readiness=false. Elapsed: 2.067153751s May 26 21:10:57.932: INFO: Pod "busybox-readonly-false-60f5b7d6-6890-4f84-a7e3-bfb213949999": Phase="Pending", Reason="", readiness=false. Elapsed: 4.08611169s May 26 21:10:59.936: INFO: Pod "busybox-readonly-false-60f5b7d6-6890-4f84-a7e3-bfb213949999": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.090187508s May 26 21:10:59.936: INFO: Pod "busybox-readonly-false-60f5b7d6-6890-4f84-a7e3-bfb213949999" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:10:59.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-77" for this suite. • [SLOW TEST:6.180 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with readOnlyRootFilesystem /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:164 should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with readOnlyRootFilesystem should run the container with writable rootfs when readOnlyRootFilesystem=false [NodeConformance] [Conformance]","total":278,"completed":14,"skipped":227,"failed":0} SSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:10:59.945: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 21:11:00.551: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 21:11:02.563: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726124260, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726124260, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726124260, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726124260, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 21:11:05.603: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that should be mutated STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that should not be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:11:06.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4390" for this suite. STEP: Destroying namespace "webhook-4390-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.614 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing mutating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing mutating webhooks should work [Conformance]","total":278,"completed":15,"skipped":231,"failed":0} SSSSS ------------------------------ [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:11:06.559: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:11:30.720: INFO: Container started at 2020-05-26 21:11:09 +0000 UTC, pod became ready at 2020-05-26 21:11:29 +0000 UTC [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:11:30.720: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-3514" for this suite. 
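The readiness-probe behaviour asserted above (container started at 21:11:09, pod Ready only at 21:11:29, restart count 0) is what a readiness probe with an initial delay produces on its own: the pod is withheld from service endpoints until the probe passes but is never killed, since no liveness probe is involved. A minimal sketch of such a pod (image, paths and timings are illustrative):

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: readiness-demo
  spec:
    containers:
    - name: main
      image: busybox
      command: ["sh", "-c", "touch /tmp/healthy && sleep 3600"]
      readinessProbe:
        exec:
          command: ["cat", "/tmp/healthy"]
        initialDelaySeconds: 20   # pod reports NotReady for at least this long
        periodSeconds: 5
  EOF
  kubectl get pod readiness-demo -w   # READY flips to 1/1 after the delay; RESTARTS stays 0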
• [SLOW TEST:24.170 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe should not be ready before initial delay and never restart [NodeConformance] [Conformance]","total":278,"completed":16,"skipped":236,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:11:30.729: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-1e2fd5df-3594-4a5f-a29c-4ab6e3cc0e01 STEP: Creating a pod to test consume configMaps May 26 21:11:30.824: INFO: Waiting up to 5m0s for pod "pod-configmaps-cb59ee20-7e84-43e0-aaee-2436f16026eb" in namespace "configmap-29" to be "success or failure" May 26 21:11:30.828: INFO: Pod "pod-configmaps-cb59ee20-7e84-43e0-aaee-2436f16026eb": Phase="Pending", Reason="", readiness=false. Elapsed: 3.680246ms May 26 21:11:32.833: INFO: Pod "pod-configmaps-cb59ee20-7e84-43e0-aaee-2436f16026eb": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008315908s May 26 21:11:34.837: INFO: Pod "pod-configmaps-cb59ee20-7e84-43e0-aaee-2436f16026eb": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012582924s STEP: Saw pod success May 26 21:11:34.837: INFO: Pod "pod-configmaps-cb59ee20-7e84-43e0-aaee-2436f16026eb" satisfied condition "success or failure" May 26 21:11:34.839: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-cb59ee20-7e84-43e0-aaee-2436f16026eb container configmap-volume-test: STEP: delete the pod May 26 21:11:34.898: INFO: Waiting for pod pod-configmaps-cb59ee20-7e84-43e0-aaee-2436f16026eb to disappear May 26 21:11:34.991: INFO: Pod pod-configmaps-cb59ee20-7e84-43e0-aaee-2436f16026eb no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:11:34.991: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-29" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":17,"skipped":272,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:11:34.999: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-4e10ddf8-130b-4d58-a4ec-8dd2e7ff53a1 STEP: Creating a pod to test consume configMaps May 26 21:11:35.069: INFO: Waiting up to 5m0s for pod "pod-configmaps-26941415-59e7-4f52-932d-4d96867c8be8" in namespace "configmap-5797" to be "success or failure" May 26 21:11:35.073: INFO: Pod "pod-configmaps-26941415-59e7-4f52-932d-4d96867c8be8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.09356ms May 26 21:11:37.304: INFO: Pod "pod-configmaps-26941415-59e7-4f52-932d-4d96867c8be8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.234803484s May 26 21:11:39.308: INFO: Pod "pod-configmaps-26941415-59e7-4f52-932d-4d96867c8be8": Phase="Running", Reason="", readiness=true. Elapsed: 4.239144086s May 26 21:11:41.313: INFO: Pod "pod-configmaps-26941415-59e7-4f52-932d-4d96867c8be8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.244025676s STEP: Saw pod success May 26 21:11:41.313: INFO: Pod "pod-configmaps-26941415-59e7-4f52-932d-4d96867c8be8" satisfied condition "success or failure" May 26 21:11:41.316: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-26941415-59e7-4f52-932d-4d96867c8be8 container configmap-volume-test: STEP: delete the pod May 26 21:11:41.356: INFO: Waiting for pod pod-configmaps-26941415-59e7-4f52-932d-4d96867c8be8 to disappear May 26 21:11:41.367: INFO: Pod pod-configmaps-26941415-59e7-4f52-932d-4d96867c8be8 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:11:41.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-5797" for this suite. 
• [SLOW TEST:6.374 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":18,"skipped":285,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:11:41.374: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: delete a job STEP: deleting Job.batch foo in namespace job-7926, will wait for the garbage collector to delete the pods May 26 21:11:47.575: INFO: Deleting Job.batch foo took: 6.445978ms May 26 21:11:47.875: INFO: Terminating Job.batch foo pods took: 300.326135ms STEP: Ensuring job was deleted [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:12:29.579: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7926" for this suite. 
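The Job deletion above relies on ordinary cascading deletion: removing the Job leaves its pods to the garbage collector, which is why the test then polls until they disappear. A minimal sketch; the job name and spec are illustrative, and the job-name label is the one the Job controller stamps on its pods:

  kubectl apply -f - <<'EOF'
  apiVersion: batch/v1
  kind: Job
  metadata:
    name: foo
  spec:
    parallelism: 2
    template:
      spec:
        restartPolicy: Never
        containers:
        - name: main
          image: busybox
          command: ["sleep", "3600"]
  EOF
  kubectl delete job foo             # cascading delete; pods are GC'd afterwards
  kubectl get pods -l job-name=foo   # eventually: No resources found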
• [SLOW TEST:48.213 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should delete a job [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should delete a job [Conformance]","total":278,"completed":19,"skipped":313,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:12:29.588: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with configMap that has name projected-configmap-test-upd-5dd38197-fb98-40bf-987d-42420151f605 STEP: Creating the pod STEP: Updating configmap projected-configmap-test-upd-5dd38197-fb98-40bf-987d-42420151f605 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:12:35.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9618" for this suite. • [SLOW TEST:6.160 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":20,"skipped":320,"failed":0} SSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:12:35.748: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 26 21:12:35.806: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 26 21:12:35.818: INFO: Waiting for terminating namespaces to be deleted... 
May 26 21:12:35.821: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 26 21:12:35.826: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 26 21:12:35.826: INFO: Container kindnet-cni ready: true, restart count 2 May 26 21:12:35.826: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 26 21:12:35.826: INFO: Container kube-proxy ready: true, restart count 0 May 26 21:12:35.826: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 26 21:12:35.832: INFO: pod-projected-configmaps-bb8e8ae2-c093-4429-9ce6-753907805a2e from projected-9618 started at 2020-05-26 21:12:29 +0000 UTC (1 container status recorded) May 26 21:12:35.832: INFO: Container projected-configmap-volume-test ready: true, restart count 0 May 26 21:12:35.832: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 26 21:12:35.832: INFO: Container kindnet-cni ready: true, restart count 2 May 26 21:12:35.832: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 26 21:12:35.832: INFO: Container kube-bench ready: false, restart count 0 May 26 21:12:35.832: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 26 21:12:35.832: INFO: Container kube-proxy ready: true, restart count 0 May 26 21:12:35.832: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 26 21:12:35.832: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-a4b68343-984b-4dae-92c1-fea605508a21 42 STEP: Trying to relaunch the pod, now with labels. STEP: removing the label kubernetes.io/e2e-a4b68343-984b-4dae-92c1-fea605508a21 off the node jerma-worker STEP: verifying the node doesn't have the label kubernetes.io/e2e-a4b68343-984b-4dae-92c1-fea605508a21 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:12:44.061: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-5982" for this suite.
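The NodeSelector predicate checked above can be replayed manually: label a node, schedule a pod that selects on that label, then remove the label. A sketch reusing the node name from this run; the label key and value are illustrative:

  kubectl label node jerma-worker example.com/e2e-demo=42
  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: nodeselector-demo
  spec:
    nodeSelector:
      example.com/e2e-demo: "42"   # only the labelled node qualifies
    containers:
    - name: main
      image: busybox
      command: ["sleep", "3600"]
  EOF
  kubectl get pod nodeselector-demo -o wide               # scheduled onto jerma-worker
  kubectl label node jerma-worker example.com/e2e-demo-   # clean the label up again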
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:8.348 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that NodeSelector is respected if matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if matching [Conformance]","total":278,"completed":21,"skipped":323,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:12:44.097: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 26 21:12:44.205: INFO: Waiting up to 5m0s for pod "pod-51e0d931-9a25-45c0-bde3-5496ddecad3f" in namespace "emptydir-3199" to be "success or failure" May 26 21:12:44.212: INFO: Pod "pod-51e0d931-9a25-45c0-bde3-5496ddecad3f": Phase="Pending", Reason="", readiness=false. Elapsed: 7.499012ms May 26 21:12:46.286: INFO: Pod "pod-51e0d931-9a25-45c0-bde3-5496ddecad3f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081306545s May 26 21:12:48.292: INFO: Pod "pod-51e0d931-9a25-45c0-bde3-5496ddecad3f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.086967852s STEP: Saw pod success May 26 21:12:48.292: INFO: Pod "pod-51e0d931-9a25-45c0-bde3-5496ddecad3f" satisfied condition "success or failure" May 26 21:12:48.296: INFO: Trying to get logs from node jerma-worker pod pod-51e0d931-9a25-45c0-bde3-5496ddecad3f container test-container: STEP: delete the pod May 26 21:12:48.316: INFO: Waiting for pod pod-51e0d931-9a25-45c0-bde3-5496ddecad3f to disappear May 26 21:12:48.320: INFO: Pod pod-51e0d931-9a25-45c0-bde3-5496ddecad3f no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:12:48.320: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-3199" for this suite. 
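The emptyDir matrix above, (root,0666,default) in this case, varies only the writing user, the file mode and the volume medium over one pod shape. A minimal sketch of the default-medium case; busybox stands in for the test's mounttest image, and the mode is applied by hand rather than by the test harness:

  kubectl apply -f - <<'EOF'
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: busybox
      command: ["sh", "-c", "echo content > /test-volume/f && chmod 0666 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
      - name: test-volume
        mountPath: /test-volume
    volumes:
    - name: test-volume
      emptyDir: {}   # default medium; 'medium: Memory' would use tmpfs instead
  EOF
  kubectl logs emptydir-demo   # shows -rw-rw-rw- on the created file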
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":22,"skipped":326,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:12:48.327: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-7192 STEP: creating a selector STEP: Creating the service pods in kubernetes May 26 21:12:48.454: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 26 21:13:12.718: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.66:8080/dial?request=hostname&protocol=udp&host=10.244.1.11&port=8081&tries=1'] Namespace:pod-network-test-7192 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:13:12.718: INFO: >>> kubeConfig: /root/.kube/config I0526 21:13:12.749700 6 log.go:172] (0xc001af4420) (0xc00274c780) Create stream I0526 21:13:12.749730 6 log.go:172] (0xc001af4420) (0xc00274c780) Stream added, broadcasting: 1 I0526 21:13:12.751728 6 log.go:172] (0xc001af4420) Reply frame received for 1 I0526 21:13:12.751758 6 log.go:172] (0xc001af4420) (0xc00274c820) Create stream I0526 21:13:12.751769 6 log.go:172] (0xc001af4420) (0xc00274c820) Stream added, broadcasting: 3 I0526 21:13:12.752639 6 log.go:172] (0xc001af4420) Reply frame received for 3 I0526 21:13:12.752675 6 log.go:172] (0xc001af4420) (0xc001efcbe0) Create stream I0526 21:13:12.752686 6 log.go:172] (0xc001af4420) (0xc001efcbe0) Stream added, broadcasting: 5 I0526 21:13:12.753614 6 log.go:172] (0xc001af4420) Reply frame received for 5 I0526 21:13:13.014672 6 log.go:172] (0xc001af4420) Data frame received for 3 I0526 21:13:13.014699 6 log.go:172] (0xc00274c820) (3) Data frame handling I0526 21:13:13.014718 6 log.go:172] (0xc00274c820) (3) Data frame sent I0526 21:13:13.015010 6 log.go:172] (0xc001af4420) Data frame received for 3 I0526 21:13:13.015043 6 log.go:172] (0xc00274c820) (3) Data frame handling I0526 21:13:13.015174 6 log.go:172] (0xc001af4420) Data frame received for 5 I0526 21:13:13.015187 6 log.go:172] (0xc001efcbe0) (5) Data frame handling I0526 21:13:13.017149 6 log.go:172] (0xc001af4420) Data frame received for 1 I0526 21:13:13.017165 6 log.go:172] (0xc00274c780) (1) Data frame handling I0526 21:13:13.017174 6 log.go:172] (0xc00274c780) (1) Data frame sent I0526 21:13:13.017207 6 log.go:172] (0xc001af4420) (0xc00274c780) Stream removed, broadcasting: 1 I0526 21:13:13.017513 6 log.go:172] (0xc001af4420) (0xc00274c780) Stream removed, broadcasting: 1 I0526 21:13:13.017530 6 log.go:172] (0xc001af4420) (0xc00274c820) Stream removed, broadcasting: 3 I0526 21:13:13.017570 6 log.go:172] 
(0xc001af4420) Go away received I0526 21:13:13.017624 6 log.go:172] (0xc001af4420) (0xc001efcbe0) Stream removed, broadcasting: 5 May 26 21:13:13.017: INFO: Waiting for responses: map[] May 26 21:13:13.021: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.66:8080/dial?request=hostname&protocol=udp&host=10.244.2.65&port=8081&tries=1'] Namespace:pod-network-test-7192 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:13:13.021: INFO: >>> kubeConfig: /root/.kube/config I0526 21:13:13.052833 6 log.go:172] (0xc0021f1e40) (0xc0022fc5a0) Create stream I0526 21:13:13.052863 6 log.go:172] (0xc0021f1e40) (0xc0022fc5a0) Stream added, broadcasting: 1 I0526 21:13:13.055360 6 log.go:172] (0xc0021f1e40) Reply frame received for 1 I0526 21:13:13.055404 6 log.go:172] (0xc0021f1e40) (0xc001e2e8c0) Create stream I0526 21:13:13.055411 6 log.go:172] (0xc0021f1e40) (0xc001e2e8c0) Stream added, broadcasting: 3 I0526 21:13:13.056315 6 log.go:172] (0xc0021f1e40) Reply frame received for 3 I0526 21:13:13.056358 6 log.go:172] (0xc0021f1e40) (0xc001e2e960) Create stream I0526 21:13:13.056371 6 log.go:172] (0xc0021f1e40) (0xc001e2e960) Stream added, broadcasting: 5 I0526 21:13:13.057570 6 log.go:172] (0xc0021f1e40) Reply frame received for 5 I0526 21:13:13.132304 6 log.go:172] (0xc0021f1e40) Data frame received for 3 I0526 21:13:13.132341 6 log.go:172] (0xc001e2e8c0) (3) Data frame handling I0526 21:13:13.132359 6 log.go:172] (0xc001e2e8c0) (3) Data frame sent I0526 21:13:13.133424 6 log.go:172] (0xc0021f1e40) Data frame received for 3 I0526 21:13:13.133462 6 log.go:172] (0xc001e2e8c0) (3) Data frame handling I0526 21:13:13.133486 6 log.go:172] (0xc0021f1e40) Data frame received for 5 I0526 21:13:13.133500 6 log.go:172] (0xc001e2e960) (5) Data frame handling I0526 21:13:13.135403 6 log.go:172] (0xc0021f1e40) Data frame received for 1 I0526 21:13:13.135433 6 log.go:172] (0xc0022fc5a0) (1) Data frame handling I0526 21:13:13.135457 6 log.go:172] (0xc0022fc5a0) (1) Data frame sent I0526 21:13:13.135471 6 log.go:172] (0xc0021f1e40) (0xc0022fc5a0) Stream removed, broadcasting: 1 I0526 21:13:13.135517 6 log.go:172] (0xc0021f1e40) Go away received I0526 21:13:13.135597 6 log.go:172] (0xc0021f1e40) (0xc0022fc5a0) Stream removed, broadcasting: 1 I0526 21:13:13.135619 6 log.go:172] (0xc0021f1e40) (0xc001e2e8c0) Stream removed, broadcasting: 3 I0526 21:13:13.135632 6 log.go:172] (0xc0021f1e40) (0xc001e2e960) Stream removed, broadcasting: 5 May 26 21:13:13.135: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:13:13.135: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-7192" for this suite. 
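The intra-pod UDP check above drives the agnhost netexec /dial endpoint: a helper pod on the host network asks the webserver at one pod IP to probe another pod IP over UDP and report which hostname answered. The same probe can be issued by hand; the pod name, namespace and IPs below are the ones from this run and will differ per cluster:

  # ask the netexec server on 10.244.2.66 to probe 10.244.1.11 over UDP
  kubectl exec host-test-container-pod -n pod-network-test-7192 -- /bin/sh -c \
    "curl -g -q -s 'http://10.244.2.66:8080/dial?request=hostname&protocol=udp&host=10.244.1.11&port=8081&tries=1'"
  # a healthy path returns a JSON body naming the responding pod's hostname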
• [SLOW TEST:24.818 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":23,"skipped":359,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:13:13.146: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:13:13.222: INFO: Creating deployment "webserver-deployment" May 26 21:13:13.225: INFO: Waiting for observed generation 1 May 26 21:13:15.254: INFO: Waiting for all required pods to come up May 26 21:13:15.259: INFO: Pod name httpd: Found 10 pods out of 10 STEP: ensuring each pod is running May 26 21:13:29.269: INFO: Waiting for deployment "webserver-deployment" to complete May 26 21:13:29.275: INFO: Updating deployment "webserver-deployment" with a non-existent image May 26 21:13:29.280: INFO: Updating deployment webserver-deployment May 26 21:13:29.280: INFO: Waiting for observed generation 2 May 26 21:13:31.533: INFO: Waiting for the first rollout's replicaset to have .status.availableReplicas = 8 May 26 21:13:31.543: INFO: Waiting for the first rollout's replicaset to have .spec.replicas = 8 May 26 21:13:31.546: INFO: Waiting for the first rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 26 21:13:31.554: INFO: Verifying that the second rollout's replicaset has .status.availableReplicas = 0 May 26 21:13:31.554: INFO: Waiting for the second rollout's replicaset to have .spec.replicas = 5 May 26 21:13:31.556: INFO: Waiting for the second rollout's replicaset of deployment "webserver-deployment" to have desired number of replicas May 26 21:13:31.560: INFO: Verifying that deployment "webserver-deployment" has minimum required number of available replicas May 26 21:13:31.560: INFO: Scaling up the deployment "webserver-deployment" from 10 to 30 May 26 21:13:31.565: INFO: Updating deployment webserver-deployment May 26 21:13:31.565: INFO: Waiting for the replicasets of deployment "webserver-deployment" to have desired number of replicas May 26 21:13:31.827: INFO: Verifying that first rollout's replicaset has .spec.replicas = 20 May 26 21:13:31.910: INFO: Verifying that second rollout's replicaset has .spec.replicas = 13 [AfterEach] 
[sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 26 21:13:32.198: INFO: Deployment "webserver-deployment": &Deployment{ObjectMeta:{webserver-deployment deployment-8083 /apis/apps/v1/namespaces/deployment-8083/deployments/webserver-deployment 7e069fa4-c56e-4bc9-b2a4-f72e6f070d77 19372321 3 2020-05-26 21:13:13 +0000 UTC map[name:httpd] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*30,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e94408 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:2,MaxSurge:3,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:3,Replicas:13,UpdatedReplicas:5,AvailableReplicas:8,UnavailableReplicas:25,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "webserver-deployment-c7997dcc8" is progressing.,LastUpdateTime:2020-05-26 21:13:29 +0000 UTC,LastTransitionTime:2020-05-26 21:13:13 +0000 UTC,},DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-26 21:13:31 +0000 UTC,LastTransitionTime:2020-05-26 21:13:31 +0000 UTC,},},ReadyReplicas:8,CollisionCount:nil,},} May 26 21:13:32.295: INFO: New ReplicaSet "webserver-deployment-c7997dcc8" of Deployment "webserver-deployment": &ReplicaSet{ObjectMeta:{webserver-deployment-c7997dcc8 deployment-8083 /apis/apps/v1/namespaces/deployment-8083/replicasets/webserver-deployment-c7997dcc8 62a0e8fa-71a8-495c-9b73-87b7afc36204 19372385 3 2020-05-26 21:13:29 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment webserver-deployment 7e069fa4-c56e-4bc9-b2a4-f72e6f070d77 0xc002e948d7 0xc002e948d8}] [] []},Spec:ReplicaSetSpec{Replicas:*13,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: c7997dcc8,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [] [] []} {[] [] [{httpd webserver:404 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e94948 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:13,FullyLabeledReplicas:13,ObservedGeneration:3,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 26 21:13:32.295: INFO: All old ReplicaSets of Deployment "webserver-deployment": May 26 21:13:32.295: INFO: &ReplicaSet{ObjectMeta:{webserver-deployment-595b5b9587 deployment-8083 /apis/apps/v1/namespaces/deployment-8083/replicasets/webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 19372359 3 2020-05-26 21:13:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[deployment.kubernetes.io/desired-replicas:30 deployment.kubernetes.io/max-replicas:33 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment webserver-deployment 7e069fa4-c56e-4bc9-b2a4-f72e6f070d77 0xc002e94817 0xc002e94818}] [] []},Spec:ReplicaSetSpec{Replicas:*20,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: httpd,pod-template-hash: 595b5b9587,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002e94878 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:8,FullyLabeledReplicas:8,ObservedGeneration:3,ReadyReplicas:8,AvailableReplicas:8,Conditions:[]ReplicaSetCondition{},},} May 26 21:13:32.361: INFO: Pod "webserver-deployment-595b5b9587-29c7s" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-29c7s webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-29c7s d9d2ed7a-004d-4e31-a3cc-f11ca4b88813 19372173 0 2020-05-26 21:13:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002e94df7 0xc002e94df8}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:23 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.12,StartTime:2020-05-26 21:13:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 21:13:21 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://81fe0a21e3a5882fee2635e82da83eb001f38f680d0c1639680adb1c8d2badd4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.12,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.361: INFO: Pod "webserver-deployment-595b5b9587-5cg5t" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-5cg5t webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-5cg5t f07eb0ea-4408-4918-ad94-0b14a9ccc095 19372353 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002e94f77 0xc002e94f78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Valu
e:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.361: INFO: Pod "webserver-deployment-595b5b9587-9cx5j" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-9cx5j webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-9cx5j efa0b62c-6634-4f24-88e5-1db229cbde91 19372202 0 2020-05-26 21:13:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002e95097 0xc002e95098}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,T
olerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.13,StartTime:2020-05-26 21:13:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 21:13:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://d4d3058719de9691e4b6b0d3b5bd04012ac4d6a5102304998e3ca1de26ce1213,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.13,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.362: INFO: Pod "webserver-deployment-595b5b9587-b4c28" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-b4c28 webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-b4c28 6501104b-8096-478b-8fb0-eb5ccae973b8 19372233 0 2020-05-26 21:13:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002e95217 0xc002e95218}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.15,StartTime:2020-05-26 21:13:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 21:13:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://5e8e195614b9c1b6878ab5905008a26704811fdc4e55f8a0e664f1ac1dd431dc,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.15,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.362: INFO: Pod "webserver-deployment-595b5b9587-btcbp" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-btcbp webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-btcbp f446f979-b841-46a5-8f6e-7a949066e73f 19372209 0 2020-05-26 21:13:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002e95397 0xc002e95398}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,E
ffect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.14,StartTime:2020-05-26 21:13:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 21:13:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://355c525c45b3291279280762bdc3e1d864dcae82f1538e610d9240cf980180b9,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.14,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.362: INFO: Pod "webserver-deployment-595b5b9587-c5v2k" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-c5v2k webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-c5v2k 06d4bcea-af60-4ea0-bc4e-743da637ce2a 19372227 0 2020-05-26 21:13:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002e95517 0xc002e95518}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:27 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.16,StartTime:2020-05-26 21:13:13 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 21:13:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://e7a0e605b1f617a273b784e9497dc4730f4aae950b433fc076631b21a78b824c,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.16,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.362: INFO: Pod "webserver-deployment-595b5b9587-d4fqq" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-d4fqq webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-d4fqq 19fc779d-7d44-4f17-8c84-eda07ea25607 19372143 0 2020-05-26 21:13:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002e95697 0xc002e95698}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,
Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:17 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.67,StartTime:2020-05-26 21:13:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 21:13:16 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://77a2fde2c6621fb81d6f44d773b0689e46b48d236dc31f60af4f1e58725b3f48,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.67,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.363: INFO: Pod "webserver-deployment-595b5b9587-ddzl6" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-ddzl6 webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-ddzl6 049bac33-6b7b-4b11-bb27-1fab7aa0328a 19372348 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002e95817 0xc002e95818}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.363: INFO: Pod "webserver-deployment-595b5b9587-g8trx" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-g8trx webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-g8trx 311fa1bf-dc7f-4cab-ab05-d16f19ca94a3 19372389 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002e95937 0xc002e95938}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-26 21:13:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.363: INFO: Pod "webserver-deployment-595b5b9587-hj6b2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hj6b2 webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-hj6b2 b052fcab-42d8-4524-a1e5-927216bbb7e6 19372334 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002e95a97 0xc002e95a98}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Pri
ority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.363: INFO: Pod "webserver-deployment-595b5b9587-hsl2h" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-hsl2h webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-hsl2h 7e8cacc2-4725-450b-a809-51059e4751c5 19372210 0 2020-05-26 21:13:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002e95bb7 0xc002e95bb8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,E
ffect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.68,StartTime:2020-05-26 21:13:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 21:13:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://ff3d8250c19dca7ef3b5dfc4444bc0a17f47dc9c3e3aa06507c87f525275e53f,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.68,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.363: INFO: Pod "webserver-deployment-595b5b9587-lpnnv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-lpnnv webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-lpnnv 86f1c77d-2fcd-4b34-837a-397c7be6e14e 19372347 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002e95d37 0xc002e95d38}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.364: INFO: Pod "webserver-deployment-595b5b9587-nrrhv" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-nrrhv webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-nrrhv acd9c8bc-eba0-4317-84ef-1e260308a85d 19372375 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002e95e77 0xc002e95e78}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:31 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:31 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:31 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-26 21:13:31 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.364: INFO: Pod "webserver-deployment-595b5b9587-pzx9w" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-pzx9w webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-pzx9w bbbdfddf-ab4a-43ac-9fb9-8e91ece95342 19372360 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002e95fd7 0xc002e95fd8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Pri
ority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.364: INFO: Pod "webserver-deployment-595b5b9587-stlgw" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-stlgw webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-stlgw 7f27882f-6788-41c2-a7e5-64c8049b066f 19372349 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002ec00f7 0xc002ec00f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value
:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.364: INFO: Pod "webserver-deployment-595b5b9587-v5xhx" is available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v5xhx webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-v5xhx 1ccd57da-f6d0-4457-a99e-4bdceab7c5a9 19372204 0 2020-05-26 21:13:13 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002ec0217 0xc002ec0218}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,T
olerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:13 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:26 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:13 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.69,StartTime:2020-05-26 21:13:13 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 21:13:26 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://bf0c6726d1e902a70d881476a417af716b9843ccfea5e0b307d34ccd513cc6f3,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.69,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.364: INFO: Pod "webserver-deployment-595b5b9587-v6d64" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-v6d64 webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-v6d64 35406e1f-e592-45e3-afb6-1dc3c20fd481 19372365 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002ec0397 0xc002ec0398}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.365: INFO: Pod "webserver-deployment-595b5b9587-vvq2k" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-vvq2k webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-vvq2k 2ca0b793-16dd-414a-a7fd-17e11cdedad2 19372329 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet 
webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002ec04b7 0xc002ec04b8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.365: INFO: Pod "webserver-deployment-595b5b9587-x2cvn" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-x2cvn webserver-deployment-595b5b9587- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-x2cvn 1f9cb16f-07ce-4d97-9d18-1132995dd3a9 19372357 0 
2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002ec05d7 0xc002ec05d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.365: INFO: Pod "webserver-deployment-595b5b9587-z2gk2" is not available: &Pod{ObjectMeta:{webserver-deployment-595b5b9587-z2gk2 webserver-deployment-595b5b9587- deployment-8083 
/api/v1/namespaces/deployment-8083/pods/webserver-deployment-595b5b9587-z2gk2 664bbb7a-682c-4326-8bf8-b10e96ee4ba5 19372358 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:595b5b9587] map[] [{apps/v1 ReplicaSet webserver-deployment-595b5b9587 c0555661-364f-4f68-87e6-3b199d6b2a76 0xc002ec06f7 0xc002ec06f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.365: INFO: Pod "webserver-deployment-c7997dcc8-4fvvn" is not available: 
&Pod{ObjectMeta:{webserver-deployment-c7997dcc8-4fvvn webserver-deployment-c7997dcc8- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-c7997dcc8-4fvvn 88aac16d-de9c-4572-814c-c5832bf9dacc 19372368 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 62a0e8fa-71a8-495c-9b73-87b7afc36204 0xc002ec0817 0xc002ec0818}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.365: 
INFO: Pod "webserver-deployment-c7997dcc8-5hjq5" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-5hjq5 webserver-deployment-c7997dcc8- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-c7997dcc8-5hjq5 fc5ce356-e331-411c-984c-5ca479ce6bf7 19372297 0 2020-05-26 21:13:29 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 62a0e8fa-71a8-495c-9b73-87b7afc36204 0xc002ec0947 0xc002ec0948}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready 
status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-26 21:13:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.366: INFO: Pod "webserver-deployment-c7997dcc8-89j6f" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-89j6f webserver-deployment-c7997dcc8- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-c7997dcc8-89j6f 7c26c334-b07b-4c9e-98ba-41a889ae87f9 19372369 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 62a0e8fa-71a8-495c-9b73-87b7afc36204 0xc002ec0ac7 0xc002ec0ac8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServic
eAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.366: INFO: Pod "webserver-deployment-c7997dcc8-9hgzf" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-9hgzf webserver-deployment-c7997dcc8- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-c7997dcc8-9hgzf 21026885-4e17-461b-9011-553b90056f8a 19372350 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 62a0e8fa-71a8-495c-9b73-87b7afc36204 0xc002ec0bf7 0xc002ec0bf8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerNa
me:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.366: INFO: Pod "webserver-deployment-c7997dcc8-cjrnt" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-cjrnt webserver-deployment-c7997dcc8- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-c7997dcc8-cjrnt c7d89d07-437f-46a3-9766-eb3966575f57 19372367 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 62a0e8fa-71a8-495c-9b73-87b7afc36204 0xc002ec0d27 0xc002ec0d28}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]Loc
alObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.366: INFO: Pod "webserver-deployment-c7997dcc8-fv8c6" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-fv8c6 webserver-deployment-c7997dcc8- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-c7997dcc8-fv8c6 8caeac27-577f-490e-8e48-006ac905cfb3 19372382 0 2020-05-26 21:13:32 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 62a0e8fa-71a8-495c-9b73-87b7afc36204 0xc002ec0e57 0xc002ec0e58}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:ni
l,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.366: INFO: Pod "webserver-deployment-c7997dcc8-l7zq7" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-l7zq7 webserver-deployment-c7997dcc8- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-c7997dcc8-l7zq7 cbed989f-9980-4901-b026-8bcc1ca7c3fd 19372384 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 62a0e8fa-71a8-495c-9b73-87b7afc36204 0xc002ec0f87 0xc002ec0f88}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,
RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:32 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-26 21:13:32 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.366: INFO: Pod "webserver-deployment-c7997dcc8-p4tk4" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-p4tk4 webserver-deployment-c7997dcc8- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-c7997dcc8-p4tk4 113cd5c9-5284-42ed-ac83-1600a1518b4b 19372346 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 62a0e8fa-71a8-495c-9b73-87b7afc36204 0xc002ec1117 0xc002ec1118}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:31 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.366: INFO: Pod "webserver-deployment-c7997dcc8-trsgd" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-trsgd webserver-deployment-c7997dcc8- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-c7997dcc8-trsgd e81cba9b-4878-4c36-af43-87c3c95b197c 19372366 0 2020-05-26 21:13:31 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 
62a0e8fa-71a8-495c-9b73-87b7afc36204 0xc002ec1247 0xc002ec1248}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.367: INFO: Pod "webserver-deployment-c7997dcc8-v6frv" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-v6frv webserver-deployment-c7997dcc8- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-c7997dcc8-v6frv f1364514-b7a0-4fbc-8770-7175c6962b66 19372272 0 2020-05-26 21:13:29 +0000 UTC map[name:httpd 
pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 62a0e8fa-71a8-495c-9b73-87b7afc36204 0xc002ec1377 0xc002ec1378}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 
UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-26 21:13:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.367: INFO: Pod "webserver-deployment-c7997dcc8-wfw4q" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wfw4q webserver-deployment-c7997dcc8- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-c7997dcc8-wfw4q 61a00533-d88d-4676-95aa-3c5384bcb0f8 19372298 0 2020-05-26 21:13:29 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 62a0e8fa-71a8-495c-9b73-87b7afc36204 0xc002ec14f7 0xc002ec14f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority
:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-26 21:13:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.367: INFO: Pod "webserver-deployment-c7997dcc8-wgp6n" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-wgp6n webserver-deployment-c7997dcc8- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-c7997dcc8-wgp6n 157f67b6-e63a-4bd9-922a-f0dd10bcff86 19372302 0 2020-05-26 21:13:29 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 62a0e8fa-71a8-495c-9b73-87b7afc36204 0xc002ec1677 0xc002ec1678}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-26 21:13:29 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 21:13:32.367: INFO: Pod "webserver-deployment-c7997dcc8-xk8hs" is not available: &Pod{ObjectMeta:{webserver-deployment-c7997dcc8-xk8hs webserver-deployment-c7997dcc8- deployment-8083 /api/v1/namespaces/deployment-8083/pods/webserver-deployment-c7997dcc8-xk8hs 4591d944-f714-4bfb-b33a-8bf4c17f9a0c 19372279 0 2020-05-26 21:13:29 +0000 UTC map[name:httpd pod-template-hash:c7997dcc8] map[] [{apps/v1 ReplicaSet webserver-deployment-c7997dcc8 62a0e8fa-71a8-495c-9b73-87b7afc36204 0xc002ec17f7 0xc002ec17f8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-rz8wv,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-rz8wv,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:webserver:404,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-rz8wv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhe
ad:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:13:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:,StartTime:2020-05-26 21:13:29 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:webserver:404,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:13:32.367: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-8083" for this suite. • [SLOW TEST:19.476 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support proportional scaling [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support proportional scaling [Conformance]","total":278,"completed":24,"skipped":423,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:13:32.623: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:13:32.913: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4650' May 26 21:13:35.745: INFO: stderr: "" May 26 21:13:35.745: INFO: stdout: "replicationcontroller/agnhost-master created\n" May 26 21:13:35.745: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config 
create -f - --namespace=kubectl-4650' May 26 21:13:38.888: INFO: stderr: "" May 26 21:13:38.888: INFO: stdout: "service/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 26 21:13:41.288: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:13:41.288: INFO: Found 0 / 1 May 26 21:13:42.253: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:13:42.253: INFO: Found 0 / 1 May 26 21:13:43.228: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:13:43.228: INFO: Found 0 / 1 May 26 21:13:44.031: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:13:44.031: INFO: Found 0 / 1 May 26 21:13:45.224: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:13:45.224: INFO: Found 0 / 1 May 26 21:13:45.929: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:13:45.929: INFO: Found 0 / 1 May 26 21:13:47.038: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:13:47.038: INFO: Found 0 / 1 May 26 21:13:47.935: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:13:47.935: INFO: Found 0 / 1 May 26 21:13:49.000: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:13:49.000: INFO: Found 0 / 1 May 26 21:13:50.141: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:13:50.141: INFO: Found 1 / 1 May 26 21:13:50.141: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 26 21:13:50.169: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:13:50.169: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 26 21:13:50.170: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe pod agnhost-master-4pvwf --namespace=kubectl-4650' May 26 21:13:50.422: INFO: stderr: "" May 26 21:13:50.422: INFO: stdout: "Name: agnhost-master-4pvwf\nNamespace: kubectl-4650\nPriority: 0\nNode: jerma-worker/172.17.0.10\nStart Time: Tue, 26 May 2020 21:13:36 +0000\nLabels: app=agnhost\n role=master\nAnnotations: \nStatus: Running\nIP: 10.244.1.29\nIPs:\n IP: 10.244.1.29\nControlled By: ReplicationController/agnhost-master\nContainers:\n agnhost-master:\n Container ID: containerd://31e84ee0bef1f98595cd833a6a3242c342cf5f5d05c87107b01bcd4c35ae5bfc\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Image ID: gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5\n Port: 6379/TCP\n Host Port: 0/TCP\n State: Running\n Started: Tue, 26 May 2020 21:13:49 +0000\n Ready: True\n Restart Count: 0\n Environment: \n Mounts:\n /var/run/secrets/kubernetes.io/serviceaccount from default-token-zxpqr (ro)\nConditions:\n Type Status\n Initialized True \n Ready True \n ContainersReady True \n PodScheduled True \nVolumes:\n default-token-zxpqr:\n Type: Secret (a volume populated by a Secret)\n SecretName: default-token-zxpqr\n Optional: false\nQoS Class: BestEffort\nNode-Selectors: \nTolerations: node.kubernetes.io/not-ready:NoExecute for 300s\n node.kubernetes.io/unreachable:NoExecute for 300s\nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal Scheduled default-scheduler Successfully assigned kubectl-4650/agnhost-master-4pvwf to jerma-worker\n Normal Pulled 4s kubelet, jerma-worker Container image \"gcr.io/kubernetes-e2e-test-images/agnhost:2.8\" already present on machine\n Normal Created 1s kubelet, jerma-worker Created container agnhost-master\n Normal Started 1s kubelet, jerma-worker Started container agnhost-master\n" May 26 21:13:50.422: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config describe rc agnhost-master --namespace=kubectl-4650' May 26 21:13:50.814: INFO: stderr: "" May 26 21:13:50.814: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-4650\nSelector: app=agnhost,role=master\nLabels: app=agnhost\n role=master\nAnnotations: \nReplicas: 1 current / 1 desired\nPods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed\nPod Template:\n Labels: app=agnhost\n role=master\n Containers:\n agnhost-master:\n Image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8\n Port: 6379/TCP\n Host Port: 0/TCP\n Environment: \n Mounts: \n Volumes: \nEvents:\n Type Reason Age From Message\n ---- ------ ---- ---- -------\n Normal SuccessfulCreate 14s replication-controller Created pod: agnhost-master-4pvwf\n" May 26 21:13:50.814: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe service agnhost-master --namespace=kubectl-4650' May 26 21:13:51.654: INFO: stderr: "" May 26 21:13:51.654: INFO: stdout: "Name: agnhost-master\nNamespace: kubectl-4650\nLabels: app=agnhost\n role=master\nAnnotations: \nSelector: app=agnhost,role=master\nType: ClusterIP\nIP: 10.109.252.47\nPort: 6379/TCP\nTargetPort: agnhost-server/TCP\nEndpoints: 10.244.1.29:6379\nSession Affinity: None\nEvents: \n" May 26 21:13:51.703: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe node jerma-control-plane' May 26 21:13:52.219: INFO: stderr: "" May 26 21:13:52.220: INFO: stdout: "Name: jerma-control-plane\nRoles: master\nLabels: beta.kubernetes.io/arch=amd64\n beta.kubernetes.io/os=linux\n kubernetes.io/arch=amd64\n kubernetes.io/hostname=jerma-control-plane\n kubernetes.io/os=linux\n node-role.kubernetes.io/master=\nAnnotations: kubeadm.alpha.kubernetes.io/cri-socket: /run/containerd/containerd.sock\n node.alpha.kubernetes.io/ttl: 0\n volumes.kubernetes.io/controller-managed-attach-detach: true\nCreationTimestamp: Sun, 15 Mar 2020 18:25:55 +0000\nTaints: node-role.kubernetes.io/master:NoSchedule\nUnschedulable: false\nLease:\n HolderIdentity: jerma-control-plane\n AcquireTime: \n RenewTime: Tue, 26 May 2020 21:13:48 +0000\nConditions:\n Type Status LastHeartbeatTime LastTransitionTime Reason Message\n ---- ------ ----------------- ------------------ ------ -------\n MemoryPressure False Tue, 26 May 2020 21:11:14 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available\n DiskPressure False Tue, 26 May 2020 21:11:14 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure\n PIDPressure False Tue, 26 May 2020 21:11:14 +0000 Sun, 15 Mar 2020 18:25:55 +0000 KubeletHasSufficientPID kubelet has sufficient PID available\n Ready True Tue, 26 May 2020 21:11:14 +0000 Sun, 15 Mar 2020 18:26:27 +0000 KubeletReady kubelet is posting ready status\nAddresses:\n InternalIP: 172.17.0.9\n Hostname: jerma-control-plane\nCapacity:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nAllocatable:\n cpu: 16\n ephemeral-storage: 2303189964Ki\n hugepages-1Gi: 0\n hugepages-2Mi: 0\n memory: 131759892Ki\n pods: 110\nSystem Info:\n Machine ID: 3bcfb16fe77247d3af07bed975350d5c\n System UUID: 947a2db5-5527-4203-8af5-13d97ffe8a80\n Boot ID: ca2aa731-f890-4956-92a1-ff8c7560d571\n Kernel Version: 4.15.0-88-generic\n OS Image: Ubuntu 19.10\n Operating System: linux\n Architecture: amd64\n Container Runtime Version: containerd://1.3.2-31-gaa877d78\n Kubelet Version: v1.17.2\n Kube-Proxy Version: v1.17.2\nPodCIDR: 
10.244.0.0/24\nPodCIDRs: 10.244.0.0/24\nNon-terminated Pods: (9 in total)\n Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits AGE\n --------- ---- ------------ ---------- --------------- ------------- ---\n kube-system coredns-6955765f44-rll5s 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 72d\n kube-system coredns-6955765f44-svxk5 100m (0%) 0 (0%) 70Mi (0%) 170Mi (0%) 72d\n kube-system etcd-jerma-control-plane 0 (0%) 0 (0%) 0 (0%) 0 (0%) 72d\n kube-system kindnet-bjddj 100m (0%) 100m (0%) 50Mi (0%) 50Mi (0%) 72d\n kube-system kube-apiserver-jerma-control-plane 250m (1%) 0 (0%) 0 (0%) 0 (0%) 72d\n kube-system kube-controller-manager-jerma-control-plane 200m (1%) 0 (0%) 0 (0%) 0 (0%) 72d\n kube-system kube-proxy-mm9zd 0 (0%) 0 (0%) 0 (0%) 0 (0%) 72d\n kube-system kube-scheduler-jerma-control-plane 100m (0%) 0 (0%) 0 (0%) 0 (0%) 72d\n local-path-storage local-path-provisioner-85445b74d4-7mg5w 0 (0%) 0 (0%) 0 (0%) 0 (0%) 72d\nAllocated resources:\n (Total limits may be over 100 percent, i.e., overcommitted.)\n Resource Requests Limits\n -------- -------- ------\n cpu 850m (5%) 100m (0%)\n memory 190Mi (0%) 390Mi (0%)\n ephemeral-storage 0 (0%) 0 (0%)\nEvents: \n" May 26 21:13:52.220: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config describe namespace kubectl-4650' May 26 21:13:53.597: INFO: stderr: "" May 26 21:13:53.597: INFO: stdout: "Name: kubectl-4650\nLabels: e2e-framework=kubectl\n e2e-run=6162fb5d-4c59-4505-bec6-b543f3144a54\nAnnotations: \nStatus: Active\n\nNo resource quota.\n\nNo LimitRange resource.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:13:53.597: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4650" for this suite. 
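For readers following along, the describe sequence this spec just ran can be reproduced by hand against the same resources (names taken from this run; the suite itself shells out to the kubectl binary exactly as logged):

    kubectl describe pod agnhost-master-4pvwf --namespace=kubectl-4650
    kubectl describe rc agnhost-master --namespace=kubectl-4650
    kubectl describe service agnhost-master --namespace=kubectl-4650
    kubectl describe node jerma-control-plane
    kubectl describe namespace kubectl-4650

The spec then asserts that the relevant names, images, ports, and events appear in each output, not that the formatting matches exactly.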
• [SLOW TEST:21.576 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl describe /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1047 should check if kubectl describe prints relevant information for rc and pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl describe should check if kubectl describe prints relevant information for rc and pods [Conformance]","total":278,"completed":25,"skipped":439,"failed":0} SSS ------------------------------ [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:13:54.199: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-4331/configmap-test-80ba67fa-6dec-47c0-afd5-935fdd44d2cc STEP: Creating a pod to test consume configMaps May 26 21:13:55.288: INFO: Waiting up to 5m0s for pod "pod-configmaps-cb7e8f37-50f9-4486-8a95-efd019036ca1" in namespace "configmap-4331" to be "success or failure" May 26 21:13:55.377: INFO: Pod "pod-configmaps-cb7e8f37-50f9-4486-8a95-efd019036ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 88.445801ms May 26 21:13:57.432: INFO: Pod "pod-configmaps-cb7e8f37-50f9-4486-8a95-efd019036ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 2.144229461s May 26 21:13:59.864: INFO: Pod "pod-configmaps-cb7e8f37-50f9-4486-8a95-efd019036ca1": Phase="Pending", Reason="", readiness=false. Elapsed: 4.575844966s May 26 21:14:01.868: INFO: Pod "pod-configmaps-cb7e8f37-50f9-4486-8a95-efd019036ca1": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.580099427s STEP: Saw pod success May 26 21:14:01.868: INFO: Pod "pod-configmaps-cb7e8f37-50f9-4486-8a95-efd019036ca1" satisfied condition "success or failure" May 26 21:14:01.872: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-cb7e8f37-50f9-4486-8a95-efd019036ca1 container env-test: STEP: delete the pod May 26 21:14:01.895: INFO: Waiting for pod pod-configmaps-cb7e8f37-50f9-4486-8a95-efd019036ca1 to disappear May 26 21:14:01.899: INFO: Pod pod-configmaps-cb7e8f37-50f9-4486-8a95-efd019036ca1 no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:14:01.899: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4331" for this suite. 
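The pattern exercised here, a ConfigMap key surfaced as a container environment variable, can be sketched as follows; the ConfigMap name, key, and pod name below are illustrative stand-ins, not the suite's generated fixtures:

    kubectl create configmap configmap-test --from-literal=data-1=value-1
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: env-test-pod              # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: busybox
        command: ["sh", "-c", "echo CONFIG_DATA_1=$CONFIG_DATA_1"]
        env:
        - name: CONFIG_DATA_1
          valueFrom:
            configMapKeyRef:
              name: configmap-test
              key: data-1
    EOF
    kubectl logs env-test-pod          # expect: CONFIG_DATA_1=value-1

The "success or failure" condition polled above is just this shape: the pod runs once, its output is checked, and the pod is deleted.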
• [SLOW TEST:7.749 seconds] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap.go:31 should be consumable via environment variable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-node] ConfigMap should be consumable via environment variable [NodeConformance] [Conformance]","total":278,"completed":26,"skipped":442,"failed":0} [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:14:01.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8277.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-1.dns-test-service.dns-8277.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/wheezy_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8277.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-1.dns-test-service.dns-8277.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-1.dns-test-service.dns-8277.svc.cluster.local;test -n "$$(getent hosts dns-querier-1)" && echo OK > /results/jessie_hosts@dns-querier-1;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-8277.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe /etc/hosts STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 26 21:14:08.241: INFO: DNS probes using dns-8277/dns-test-2371efc7-dcb6-4f44-9c44-549faafd7374 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:14:08.288: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-8277" for this suite. 
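The wheezy/jessie probe loops above reduce to one idea: kubelet writes the pod's own hostname into the pod's /etc/hosts, so getent resolves it without touching cluster DNS. A throwaway one-off check (pod name hypothetical; busybox stands in for the suite's probe images):

    kubectl run hosts-probe --rm -it --restart=Never --image=busybox \
      -- sh -c 'hostname; cat /etc/hosts'

The dig queries in the same loop cover the pod's generated A record (<ip-with-dashes>.<namespace>.pod.cluster.local) over both UDP and TCP.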
• [SLOW TEST:6.454 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide /etc/hosts entries for the cluster [LinuxOnly] [Conformance]","total":278,"completed":27,"skipped":442,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:14:08.403: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename hostpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:37 [It] should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test hostPath mode May 26 21:14:08.889: INFO: Waiting up to 5m0s for pod "pod-host-path-test" in namespace "hostpath-488" to be "success or failure" May 26 21:14:08.900: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 10.374051ms May 26 21:14:10.989: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 2.099714562s May 26 21:14:12.996: INFO: Pod "pod-host-path-test": Phase="Pending", Reason="", readiness=false. Elapsed: 4.106576486s May 26 21:14:15.001: INFO: Pod "pod-host-path-test": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.111088964s STEP: Saw pod success May 26 21:14:15.001: INFO: Pod "pod-host-path-test" satisfied condition "success or failure" May 26 21:14:15.004: INFO: Trying to get logs from node jerma-worker2 pod pod-host-path-test container test-container-1: STEP: delete the pod May 26 21:14:15.088: INFO: Waiting for pod pod-host-path-test to disappear May 26 21:14:15.103: INFO: Pod pod-host-path-test no longer exists [AfterEach] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:14:15.103: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "hostpath-488" for this suite. 
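What "correct mode" means here: the pod mounts a hostPath volume and the test asserts the mode bits it observes on the mount point. A minimal stand-in, using busybox stat instead of the suite's mounttest image (pod name and host path are illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: hostpath-mode-check       # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container-1
        image: busybox
        command: ["sh", "-c", "stat -c '%a' /test-volume"]
        volumeMounts:
        - name: test-volume
          mountPath: /test-volume
      volumes:
      - name: test-volume
        hostPath:
          path: /tmp                  # illustrative host path
    EOF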
• [SLOW TEST:6.709 seconds] [sig-storage] HostPath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/host_path.go:34 should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] HostPath should give a volume the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":28,"skipped":452,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:14:15.112: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-479c48fe-ebc8-4de1-8efc-eec751cb0a0b STEP: Creating a pod to test consume configMaps May 26 21:14:15.208: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-818e6c01-c874-4f9c-81fc-3b0a87dd3206" in namespace "projected-589" to be "success or failure" May 26 21:14:15.229: INFO: Pod "pod-projected-configmaps-818e6c01-c874-4f9c-81fc-3b0a87dd3206": Phase="Pending", Reason="", readiness=false. Elapsed: 21.67362ms May 26 21:14:17.232: INFO: Pod "pod-projected-configmaps-818e6c01-c874-4f9c-81fc-3b0a87dd3206": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024563343s May 26 21:14:19.236: INFO: Pod "pod-projected-configmaps-818e6c01-c874-4f9c-81fc-3b0a87dd3206": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028291292s STEP: Saw pod success May 26 21:14:19.236: INFO: Pod "pod-projected-configmaps-818e6c01-c874-4f9c-81fc-3b0a87dd3206" satisfied condition "success or failure" May 26 21:14:19.239: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-818e6c01-c874-4f9c-81fc-3b0a87dd3206 container projected-configmap-volume-test: STEP: delete the pod May 26 21:14:19.351: INFO: Waiting for pod pod-projected-configmaps-818e6c01-c874-4f9c-81fc-3b0a87dd3206 to disappear May 26 21:14:19.401: INFO: Pod pod-projected-configmaps-818e6c01-c874-4f9c-81fc-3b0a87dd3206 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:14:19.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-589" for this suite. 
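The "with mappings" variant differs from the plain ConfigMap volume test in one way: an items list remaps a key to a chosen path inside the mount. A sketch of that shape (resource names illustrative):

    kubectl create configmap projected-cm --from-literal=data-2=value-2
    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: projected-cm-example      # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: projected-configmap-volume-test
        image: busybox
        command: ["sh", "-c", "cat /etc/projected/path/to/data-2"]
        volumeMounts:
        - name: projected-configmap-volume
          mountPath: /etc/projected
      volumes:
      - name: projected-configmap-volume
        projected:
          sources:
          - configMap:
              name: projected-cm
              items:
              - key: data-2
                path: path/to/data-2  # key remapped to this relative path
    EOF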
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":29,"skipped":481,"failed":0} S ------------------------------ [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:14:19.442: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:14:19.530: INFO: Waiting up to 5m0s for pod "busybox-user-65534-06f63838-c442-4859-8460-4cbe0a05760c" in namespace "security-context-test-4513" to be "success or failure" May 26 21:14:19.534: INFO: Pod "busybox-user-65534-06f63838-c442-4859-8460-4cbe0a05760c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.54319ms May 26 21:14:21.767: INFO: Pod "busybox-user-65534-06f63838-c442-4859-8460-4cbe0a05760c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.237182561s May 26 21:14:23.771: INFO: Pod "busybox-user-65534-06f63838-c442-4859-8460-4cbe0a05760c": Phase="Running", Reason="", readiness=true. Elapsed: 4.241650526s May 26 21:14:25.776: INFO: Pod "busybox-user-65534-06f63838-c442-4859-8460-4cbe0a05760c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.24630171s May 26 21:14:25.776: INFO: Pod "busybox-user-65534-06f63838-c442-4859-8460-4cbe0a05760c" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:14:25.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-4513" for this suite. 
• [SLOW TEST:6.344 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a container with runAsUser /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:43 should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a container with runAsUser should run the container with uid 65534 [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":30,"skipped":482,"failed":0} [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:14:25.786: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service clusterip-service with the type=ClusterIP in namespace services-6226 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-6226 STEP: creating replication controller externalsvc in namespace services-6226 I0526 21:14:26.147434 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-6226, replica count: 2 I0526 21:14:29.197881 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 1 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 21:14:32.198129 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the ClusterIP service to type=ExternalName May 26 21:14:32.302: INFO: Creating new exec pod May 26 21:14:36.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-6226 execpodgv5f6 -- /bin/sh -x -c nslookup clusterip-service' May 26 21:14:36.711: INFO: stderr: "I0526 21:14:36.468920 631 log.go:172] (0xc000b864d0) (0xc0006f1d60) Create stream\nI0526 21:14:36.469010 631 log.go:172] (0xc000b864d0) (0xc0006f1d60) Stream added, broadcasting: 1\nI0526 21:14:36.472245 631 log.go:172] (0xc000b864d0) Reply frame received for 1\nI0526 21:14:36.472292 631 log.go:172] (0xc000b864d0) (0xc0006f1e00) Create stream\nI0526 21:14:36.472304 631 log.go:172] (0xc000b864d0) (0xc0006f1e00) Stream added, broadcasting: 3\nI0526 21:14:36.473400 631 log.go:172] (0xc000b864d0) Reply frame received for 3\nI0526 21:14:36.473432 631 log.go:172] (0xc000b864d0) (0xc0006f1ea0) Create stream\nI0526 21:14:36.473441 631 log.go:172] (0xc000b864d0) (0xc0006f1ea0) Stream added, broadcasting: 5\nI0526 21:14:36.474483 631 log.go:172] (0xc000b864d0) Reply frame received for 5\nI0526 
21:14:36.607711 631 log.go:172] (0xc000b864d0) Data frame received for 5\nI0526 21:14:36.607743 631 log.go:172] (0xc0006f1ea0) (5) Data frame handling\nI0526 21:14:36.607764 631 log.go:172] (0xc0006f1ea0) (5) Data frame sent\n+ nslookup clusterip-service\nI0526 21:14:36.700386 631 log.go:172] (0xc000b864d0) Data frame received for 3\nI0526 21:14:36.700435 631 log.go:172] (0xc0006f1e00) (3) Data frame handling\nI0526 21:14:36.700472 631 log.go:172] (0xc0006f1e00) (3) Data frame sent\nI0526 21:14:36.701785 631 log.go:172] (0xc000b864d0) Data frame received for 3\nI0526 21:14:36.701822 631 log.go:172] (0xc0006f1e00) (3) Data frame handling\nI0526 21:14:36.701851 631 log.go:172] (0xc0006f1e00) (3) Data frame sent\nI0526 21:14:36.702440 631 log.go:172] (0xc000b864d0) Data frame received for 3\nI0526 21:14:36.702485 631 log.go:172] (0xc0006f1e00) (3) Data frame handling\nI0526 21:14:36.702512 631 log.go:172] (0xc000b864d0) Data frame received for 5\nI0526 21:14:36.702535 631 log.go:172] (0xc0006f1ea0) (5) Data frame handling\nI0526 21:14:36.705052 631 log.go:172] (0xc000b864d0) Data frame received for 1\nI0526 21:14:36.705089 631 log.go:172] (0xc0006f1d60) (1) Data frame handling\nI0526 21:14:36.705310 631 log.go:172] (0xc0006f1d60) (1) Data frame sent\nI0526 21:14:36.705343 631 log.go:172] (0xc000b864d0) (0xc0006f1d60) Stream removed, broadcasting: 1\nI0526 21:14:36.705374 631 log.go:172] (0xc000b864d0) Go away received\nI0526 21:14:36.705863 631 log.go:172] (0xc000b864d0) (0xc0006f1d60) Stream removed, broadcasting: 1\nI0526 21:14:36.705895 631 log.go:172] (0xc000b864d0) (0xc0006f1e00) Stream removed, broadcasting: 3\nI0526 21:14:36.705915 631 log.go:172] (0xc000b864d0) (0xc0006f1ea0) Stream removed, broadcasting: 5\n" May 26 21:14:36.711: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nclusterip-service.services-6226.svc.cluster.local\tcanonical name = externalsvc.services-6226.svc.cluster.local.\nName:\texternalsvc.services-6226.svc.cluster.local\nAddress: 10.111.20.221\n\n" STEP: deleting ReplicationController externalsvc in namespace services-6226, will wait for the garbage collector to delete the pods May 26 21:14:36.772: INFO: Deleting ReplicationController externalsvc took: 6.754223ms May 26 21:14:37.072: INFO: Terminating ReplicationController externalsvc pods took: 300.251853ms May 26 21:14:49.620: INFO: Cleaning up the ClusterIP to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:14:49.648: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-6226" for this suite. 
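The type flip performed above can be approximated with a patch; note that the spec clears spec.clusterIP when it switches the service to ExternalName, and the nslookup from the exec pod is what confirms the resulting CNAME. A sketch (the port is illustrative, and depending on API server version the patch may need further field clearing):

    kubectl create service clusterip clusterip-service --tcp=80:80 \
      --namespace=services-6226
    kubectl patch service clusterip-service --namespace=services-6226 \
      --type=merge \
      -p '{"spec":{"type":"ExternalName","clusterIP":"","externalName":"externalsvc.services-6226.svc.cluster.local"}}'
    # Verify from inside the cluster, as the exec pod above does:
    kubectl run dns-check --rm -it --restart=Never --image=busybox \
      --namespace=services-6226 -- nslookup clusterip-service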
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:23.880 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ClusterIP to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ClusterIP to ExternalName [Conformance]","total":278,"completed":31,"skipped":482,"failed":0} SSSS ------------------------------ [k8s.io] Lease lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:14:49.667: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename lease-test STEP: Waiting for a default service account to be provisioned in namespace [It] lease API should be available [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Lease /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:14:49.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "lease-test-5272" for this suite. •{"msg":"PASSED [k8s.io] Lease lease API should be available [Conformance]","total":278,"completed":32,"skipped":486,"failed":0} SSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:14:49.883: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the rs STEP: Gathering metrics W0526 21:15:20.611340 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 26 21:15:20.611: INFO: For apiserver_request_total:
For apiserver_request_latency_seconds:
For apiserver_init_events_total:
For garbage_collector_attempt_to_delete_queue_latency:
For garbage_collector_attempt_to_delete_work_duration:
For garbage_collector_attempt_to_orphan_queue_latency:
For garbage_collector_attempt_to_orphan_work_duration:
For garbage_collector_dirty_processing_latency_microseconds:
For garbage_collector_event_processing_latency_microseconds:
For garbage_collector_graph_changes_queue_latency:
For garbage_collector_graph_changes_work_duration:
For garbage_collector_orphan_processing_latency_microseconds:
For namespace_queue_latency:
For namespace_queue_latency_sum:
For namespace_queue_latency_count:
For namespace_retries:
For namespace_work_duration:
For namespace_work_duration_sum:
For namespace_work_duration_count:
For function_duration_seconds:
For errors_total:
For evicted_pods_total:
[AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:15:20.611: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5383" for this suite. • [SLOW TEST:30.734 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan RS created by deployment when deleteOptions.PropagationPolicy is Orphan [Conformance]","total":278,"completed":33,"skipped":490,"failed":0} [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:15:20.617: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-44e113d1-d629-4eb6-9eeb-f570d3506fa5 STEP: Creating a pod to test consume configMaps May 26 21:15:20.755: INFO: Waiting up to 5m0s for pod "pod-configmaps-a831464b-2575-4124-9e7a-909791b0d726" in namespace "configmap-3745" to be "success or failure" May 26 21:15:20.759: INFO: Pod "pod-configmaps-a831464b-2575-4124-9e7a-909791b0d726": Phase="Pending", Reason="", readiness=false. Elapsed: 4.155546ms May 26 21:15:22.764: INFO: Pod "pod-configmaps-a831464b-2575-4124-9e7a-909791b0d726": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008417501s May 26 21:15:24.768: INFO: Pod "pod-configmaps-a831464b-2575-4124-9e7a-909791b0d726": Phase="Succeeded", Reason="", readiness=false.
Elapsed: 4.012524547s STEP: Saw pod success May 26 21:15:24.768: INFO: Pod "pod-configmaps-a831464b-2575-4124-9e7a-909791b0d726" satisfied condition "success or failure" May 26 21:15:24.771: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-a831464b-2575-4124-9e7a-909791b0d726 container configmap-volume-test: STEP: delete the pod May 26 21:15:24.792: INFO: Waiting for pod pod-configmaps-a831464b-2575-4124-9e7a-909791b0d726 to disappear May 26 21:15:24.795: INFO: Pod pod-configmaps-a831464b-2575-4124-9e7a-909791b0d726 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:15:24.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-3745" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume as non-root [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":34,"skipped":490,"failed":0} ------------------------------ [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:15:24.803: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1626 [It] should create a deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 26 21:15:24.953: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --generator=deployment/apps.v1 --namespace=kubectl-8145' May 26 21:15:25.069: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 26 21:15:25.069: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the deployment e2e-test-httpd-deployment was created STEP: verifying the pod controlled by deployment e2e-test-httpd-deployment was created [AfterEach] Kubectl run deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1631 May 26 21:15:29.171: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-8145' May 26 21:15:29.304: INFO: stderr: "" May 26 21:15:29.304: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:15:29.304: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8145" for this suite. 
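As the stderr above notes, --generator=deployment/apps.v1 is deprecated; the equivalent flow with current tooling is:

    kubectl create deployment e2e-test-httpd-deployment \
      --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8145
    # create deployment labels its pods app=<name>:
    kubectl get pods --selector=app=e2e-test-httpd-deployment --namespace=kubectl-8145
    kubectl delete deployment e2e-test-httpd-deployment --namespace=kubectl-8145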
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run deployment should create a deployment from an image [Conformance]","total":278,"completed":35,"skipped":490,"failed":0} SSSSS ------------------------------ [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:15:29.312: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 26 21:15:29.428: INFO: Waiting up to 5m0s for pod "downward-api-c9804c7f-7019-42a6-a011-459253e7cbaf" in namespace "downward-api-5366" to be "success or failure" May 26 21:15:29.493: INFO: Pod "downward-api-c9804c7f-7019-42a6-a011-459253e7cbaf": Phase="Pending", Reason="", readiness=false. Elapsed: 64.349695ms May 26 21:15:31.497: INFO: Pod "downward-api-c9804c7f-7019-42a6-a011-459253e7cbaf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069171319s May 26 21:15:33.502: INFO: Pod "downward-api-c9804c7f-7019-42a6-a011-459253e7cbaf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.073468253s STEP: Saw pod success May 26 21:15:33.502: INFO: Pod "downward-api-c9804c7f-7019-42a6-a011-459253e7cbaf" satisfied condition "success or failure" May 26 21:15:33.504: INFO: Trying to get logs from node jerma-worker2 pod downward-api-c9804c7f-7019-42a6-a011-459253e7cbaf container dapi-container: STEP: delete the pod May 26 21:15:33.663: INFO: Waiting for pod downward-api-c9804c7f-7019-42a6-a011-459253e7cbaf to disappear May 26 21:15:33.676: INFO: Pod downward-api-c9804c7f-7019-42a6-a011-459253e7cbaf no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:15:33.676: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5366" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide default limits.cpu/memory from node allocatable [NodeConformance] [Conformance]","total":278,"completed":36,"skipped":495,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:15:33.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support remote command execution over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:15:33.807: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:15:37.928: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4489" for this suite. •{"msg":"PASSED [k8s.io] Pods should support remote command execution over websockets [NodeConformance] [Conformance]","total":278,"completed":37,"skipped":534,"failed":0} SSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:15:37.941: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Pod that fits quota STEP: Ensuring ResourceQuota status captures the pod usage STEP: Not allowing a pod to be created that exceeds remaining quota STEP: Not allowing a pod to be created that exceeds remaining quota(validation on extended resources) STEP: Ensuring a pod cannot update its resource requirements STEP: Ensuring attempts to update pod resource requirements did not change quota usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:15:51.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-7086" for this suite. 
• [SLOW TEST:13.230 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a pod. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a pod. [Conformance]","total":278,"completed":38,"skipped":542,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:15:51.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-8812 [It] should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 26 21:15:51.317: INFO: Found 0 stateful pods, waiting for 3 May 26 21:16:01.322: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 26 21:16:01.322: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 26 21:16:01.322: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=false May 26 21:16:11.322: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 26 21:16:11.322: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 26 21:16:11.322: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true May 26 21:16:11.333: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8812 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 26 21:16:11.611: INFO: stderr: "I0526 21:16:11.467289 693 log.go:172] (0xc0000f51e0) (0xc00098c000) Create stream\nI0526 21:16:11.467347 693 log.go:172] (0xc0000f51e0) (0xc00098c000) Stream added, broadcasting: 1\nI0526 21:16:11.469995 693 log.go:172] (0xc0000f51e0) Reply frame received for 1\nI0526 21:16:11.470034 693 log.go:172] (0xc0000f51e0) (0xc0006ddb80) Create stream\nI0526 21:16:11.470043 693 log.go:172] (0xc0000f51e0) (0xc0006ddb80) Stream added, broadcasting: 3\nI0526 21:16:11.471137 693 log.go:172] (0xc0000f51e0) Reply frame received for 3\nI0526 21:16:11.471177 693 log.go:172] (0xc0000f51e0) (0xc00021a000) Create stream\nI0526 21:16:11.471191 693 log.go:172] (0xc0000f51e0) (0xc00021a000) Stream added, broadcasting: 5\nI0526 21:16:11.472199 693 log.go:172] 
(0xc0000f51e0) Reply frame received for 5\nI0526 21:16:11.560191 693 log.go:172] (0xc0000f51e0) Data frame received for 5\nI0526 21:16:11.560221 693 log.go:172] (0xc00021a000) (5) Data frame handling\nI0526 21:16:11.560234 693 log.go:172] (0xc00021a000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0526 21:16:11.602144 693 log.go:172] (0xc0000f51e0) Data frame received for 3\nI0526 21:16:11.602177 693 log.go:172] (0xc0006ddb80) (3) Data frame handling\nI0526 21:16:11.602190 693 log.go:172] (0xc0006ddb80) (3) Data frame sent\nI0526 21:16:11.602498 693 log.go:172] (0xc0000f51e0) Data frame received for 3\nI0526 21:16:11.602536 693 log.go:172] (0xc0006ddb80) (3) Data frame handling\nI0526 21:16:11.602880 693 log.go:172] (0xc0000f51e0) Data frame received for 5\nI0526 21:16:11.602906 693 log.go:172] (0xc00021a000) (5) Data frame handling\nI0526 21:16:11.604994 693 log.go:172] (0xc0000f51e0) Data frame received for 1\nI0526 21:16:11.605020 693 log.go:172] (0xc00098c000) (1) Data frame handling\nI0526 21:16:11.605038 693 log.go:172] (0xc00098c000) (1) Data frame sent\nI0526 21:16:11.605058 693 log.go:172] (0xc0000f51e0) (0xc00098c000) Stream removed, broadcasting: 1\nI0526 21:16:11.605084 693 log.go:172] (0xc0000f51e0) Go away received\nI0526 21:16:11.605800 693 log.go:172] (0xc0000f51e0) (0xc00098c000) Stream removed, broadcasting: 1\nI0526 21:16:11.605844 693 log.go:172] (0xc0000f51e0) (0xc0006ddb80) Stream removed, broadcasting: 3\nI0526 21:16:11.605871 693 log.go:172] (0xc0000f51e0) (0xc00021a000) Stream removed, broadcasting: 5\n" May 26 21:16:11.611: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 26 21:16:11.611: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' STEP: Updating StatefulSet template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 26 21:16:21.646: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Updating Pods in reverse ordinal order May 26 21:16:31.697: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8812 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 26 21:16:31.934: INFO: stderr: "I0526 21:16:31.831947 716 log.go:172] (0xc000590a50) (0xc0009b81e0) Create stream\nI0526 21:16:31.832004 716 log.go:172] (0xc000590a50) (0xc0009b81e0) Stream added, broadcasting: 1\nI0526 21:16:31.834629 716 log.go:172] (0xc000590a50) Reply frame received for 1\nI0526 21:16:31.834698 716 log.go:172] (0xc000590a50) (0xc000739540) Create stream\nI0526 21:16:31.834714 716 log.go:172] (0xc000590a50) (0xc000739540) Stream added, broadcasting: 3\nI0526 21:16:31.835754 716 log.go:172] (0xc000590a50) Reply frame received for 3\nI0526 21:16:31.835803 716 log.go:172] (0xc000590a50) (0xc00071db80) Create stream\nI0526 21:16:31.835817 716 log.go:172] (0xc000590a50) (0xc00071db80) Stream added, broadcasting: 5\nI0526 21:16:31.836931 716 log.go:172] (0xc000590a50) Reply frame received for 5\nI0526 21:16:31.925033 716 log.go:172] (0xc000590a50) Data frame received for 3\nI0526 21:16:31.925068 716 log.go:172] (0xc000739540) (3) Data frame handling\nI0526 21:16:31.925079 716 log.go:172] (0xc000739540) (3) Data frame sent\nI0526 21:16:31.925085 716 log.go:172] (0xc000590a50) Data frame received for 3\nI0526 21:16:31.925090 716 log.go:172] (0xc000739540) (3) Data frame handling\nI0526 21:16:31.925331 
716 log.go:172] (0xc000590a50) Data frame received for 5\nI0526 21:16:31.925374 716 log.go:172] (0xc00071db80) (5) Data frame handling\nI0526 21:16:31.925636 716 log.go:172] (0xc00071db80) (5) Data frame sent\nI0526 21:16:31.925677 716 log.go:172] (0xc000590a50) Data frame received for 5\nI0526 21:16:31.925716 716 log.go:172] (0xc00071db80) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0526 21:16:31.927301 716 log.go:172] (0xc000590a50) Data frame received for 1\nI0526 21:16:31.927344 716 log.go:172] (0xc0009b81e0) (1) Data frame handling\nI0526 21:16:31.927381 716 log.go:172] (0xc0009b81e0) (1) Data frame sent\nI0526 21:16:31.927412 716 log.go:172] (0xc000590a50) (0xc0009b81e0) Stream removed, broadcasting: 1\nI0526 21:16:31.927460 716 log.go:172] (0xc000590a50) Go away received\nI0526 21:16:31.927971 716 log.go:172] (0xc000590a50) (0xc0009b81e0) Stream removed, broadcasting: 1\nI0526 21:16:31.927996 716 log.go:172] (0xc000590a50) (0xc000739540) Stream removed, broadcasting: 3\nI0526 21:16:31.928009 716 log.go:172] (0xc000590a50) (0xc00071db80) Stream removed, broadcasting: 5\n" May 26 21:16:31.934: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 26 21:16:31.934: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 26 21:16:52.144: INFO: Waiting for StatefulSet statefulset-8812/ss2 to complete update STEP: Rolling back to a previous revision May 26 21:17:02.150: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8812 ss2-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 26 21:17:02.417: INFO: stderr: "I0526 21:17:02.296985 739 log.go:172] (0xc000982a50) (0xc000996000) Create stream\nI0526 21:17:02.297041 739 log.go:172] (0xc000982a50) (0xc000996000) Stream added, broadcasting: 1\nI0526 21:17:02.299793 739 log.go:172] (0xc000982a50) Reply frame received for 1\nI0526 21:17:02.299841 739 log.go:172] (0xc000982a50) (0xc00064fb80) Create stream\nI0526 21:17:02.299869 739 log.go:172] (0xc000982a50) (0xc00064fb80) Stream added, broadcasting: 3\nI0526 21:17:02.300776 739 log.go:172] (0xc000982a50) Reply frame received for 3\nI0526 21:17:02.300830 739 log.go:172] (0xc000982a50) (0xc0009960a0) Create stream\nI0526 21:17:02.300841 739 log.go:172] (0xc000982a50) (0xc0009960a0) Stream added, broadcasting: 5\nI0526 21:17:02.302079 739 log.go:172] (0xc000982a50) Reply frame received for 5\nI0526 21:17:02.376705 739 log.go:172] (0xc000982a50) Data frame received for 5\nI0526 21:17:02.376734 739 log.go:172] (0xc0009960a0) (5) Data frame handling\nI0526 21:17:02.376753 739 log.go:172] (0xc0009960a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0526 21:17:02.408363 739 log.go:172] (0xc000982a50) Data frame received for 3\nI0526 21:17:02.408388 739 log.go:172] (0xc00064fb80) (3) Data frame handling\nI0526 21:17:02.408408 739 log.go:172] (0xc00064fb80) (3) Data frame sent\nI0526 21:17:02.408741 739 log.go:172] (0xc000982a50) Data frame received for 5\nI0526 21:17:02.408786 739 log.go:172] (0xc0009960a0) (5) Data frame handling\nI0526 21:17:02.409021 739 log.go:172] (0xc000982a50) Data frame received for 3\nI0526 21:17:02.409034 739 log.go:172] (0xc00064fb80) (3) Data frame handling\nI0526 21:17:02.410921 739 log.go:172] (0xc000982a50) Data frame received for 1\nI0526 21:17:02.410955 739 log.go:172] (0xc000996000) (1) Data frame handling\nI0526 
21:17:02.410980 739 log.go:172] (0xc000996000) (1) Data frame sent\nI0526 21:17:02.411011 739 log.go:172] (0xc000982a50) (0xc000996000) Stream removed, broadcasting: 1\nI0526 21:17:02.411048 739 log.go:172] (0xc000982a50) Go away received\nI0526 21:17:02.411504 739 log.go:172] (0xc000982a50) (0xc000996000) Stream removed, broadcasting: 1\nI0526 21:17:02.411531 739 log.go:172] (0xc000982a50) (0xc00064fb80) Stream removed, broadcasting: 3\nI0526 21:17:02.411543 739 log.go:172] (0xc000982a50) (0xc0009960a0) Stream removed, broadcasting: 5\n" May 26 21:17:02.418: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 26 21:17:02.418: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss2-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 26 21:17:12.448: INFO: Updating stateful set ss2 STEP: Rolling back update in reverse ordinal order May 26 21:17:22.524: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-8812 ss2-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 26 21:17:22.753: INFO: stderr: "I0526 21:17:22.650546 761 log.go:172] (0xc0000f5600) (0xc00061bc20) Create stream\nI0526 21:17:22.650609 761 log.go:172] (0xc0000f5600) (0xc00061bc20) Stream added, broadcasting: 1\nI0526 21:17:22.666339 761 log.go:172] (0xc0000f5600) Reply frame received for 1\nI0526 21:17:22.666388 761 log.go:172] (0xc0000f5600) (0xc000265360) Create stream\nI0526 21:17:22.666401 761 log.go:172] (0xc0000f5600) (0xc000265360) Stream added, broadcasting: 3\nI0526 21:17:22.672927 761 log.go:172] (0xc0000f5600) Reply frame received for 3\nI0526 21:17:22.672956 761 log.go:172] (0xc0000f5600) (0xc000980000) Create stream\nI0526 21:17:22.672968 761 log.go:172] (0xc0000f5600) (0xc000980000) Stream added, broadcasting: 5\nI0526 21:17:22.675273 761 log.go:172] (0xc0000f5600) Reply frame received for 5\nI0526 21:17:22.744062 761 log.go:172] (0xc0000f5600) Data frame received for 3\nI0526 21:17:22.744108 761 log.go:172] (0xc000265360) (3) Data frame handling\nI0526 21:17:22.744120 761 log.go:172] (0xc000265360) (3) Data frame sent\nI0526 21:17:22.744129 761 log.go:172] (0xc0000f5600) Data frame received for 3\nI0526 21:17:22.744138 761 log.go:172] (0xc000265360) (3) Data frame handling\nI0526 21:17:22.744199 761 log.go:172] (0xc0000f5600) Data frame received for 5\nI0526 21:17:22.744240 761 log.go:172] (0xc000980000) (5) Data frame handling\nI0526 21:17:22.744275 761 log.go:172] (0xc000980000) (5) Data frame sent\nI0526 21:17:22.744297 761 log.go:172] (0xc0000f5600) Data frame received for 5\nI0526 21:17:22.744313 761 log.go:172] (0xc000980000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0526 21:17:22.745756 761 log.go:172] (0xc0000f5600) Data frame received for 1\nI0526 21:17:22.745790 761 log.go:172] (0xc00061bc20) (1) Data frame handling\nI0526 21:17:22.745831 761 log.go:172] (0xc00061bc20) (1) Data frame sent\nI0526 21:17:22.745885 761 log.go:172] (0xc0000f5600) (0xc00061bc20) Stream removed, broadcasting: 1\nI0526 21:17:22.745935 761 log.go:172] (0xc0000f5600) Go away received\nI0526 21:17:22.746442 761 log.go:172] (0xc0000f5600) (0xc00061bc20) Stream removed, broadcasting: 1\nI0526 21:17:22.746466 761 log.go:172] (0xc0000f5600) (0xc000265360) Stream removed, broadcasting: 3\nI0526 21:17:22.746478 761 log.go:172] (0xc0000f5600) (0xc000980000) Stream removed, broadcasting: 5\n" May 26 21:17:22.753: INFO: stdout: "'/tmp/index.html' -> 
'/usr/local/apache2/htdocs/index.html'\n" May 26 21:17:22.753: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss2-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 26 21:17:52.774: INFO: Deleting all statefulset in ns statefulset-8812 May 26 21:17:52.777: INFO: Scaling statefulset ss2 to 0 May 26 21:18:12.799: INFO: Waiting for statefulset status.replicas updated to 0 May 26 21:18:12.802: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:18:12.820: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-8812" for this suite. • [SLOW TEST:141.657 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform rolling updates and roll backs of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform rolling updates and roll backs of template modifications [Conformance]","total":278,"completed":39,"skipped":588,"failed":0} SSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:18:12.829: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override command May 26 21:18:12.940: INFO: Waiting up to 5m0s for pod "client-containers-52f0d107-5920-4c78-b905-8c59db049901" in namespace "containers-8655" to be "success or failure" May 26 21:18:12.957: INFO: Pod "client-containers-52f0d107-5920-4c78-b905-8c59db049901": Phase="Pending", Reason="", readiness=false. Elapsed: 16.908639ms May 26 21:18:15.304: INFO: Pod "client-containers-52f0d107-5920-4c78-b905-8c59db049901": Phase="Pending", Reason="", readiness=false. Elapsed: 2.363836097s May 26 21:18:17.309: INFO: Pod "client-containers-52f0d107-5920-4c78-b905-8c59db049901": Phase="Pending", Reason="", readiness=false. Elapsed: 4.368254889s May 26 21:18:19.314: INFO: Pod "client-containers-52f0d107-5920-4c78-b905-8c59db049901": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 6.373393509s STEP: Saw pod success May 26 21:18:19.314: INFO: Pod "client-containers-52f0d107-5920-4c78-b905-8c59db049901" satisfied condition "success or failure" May 26 21:18:19.317: INFO: Trying to get logs from node jerma-worker2 pod client-containers-52f0d107-5920-4c78-b905-8c59db049901 container test-container: STEP: delete the pod May 26 21:18:19.389: INFO: Waiting for pod client-containers-52f0d107-5920-4c78-b905-8c59db049901 to disappear May 26 21:18:19.507: INFO: Pod client-containers-52f0d107-5920-4c78-b905-8c59db049901 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:18:19.507: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8655" for this suite. • [SLOW TEST:6.688 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command (docker entrypoint) [NodeConformance] [Conformance]","total":278,"completed":40,"skipped":600,"failed":0} SS ------------------------------ [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:18:19.517: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 26 21:18:19.558: INFO: namespace kubectl-4711 May 26 21:18:19.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4711' May 26 21:18:22.314: INFO: stderr: "" May 26 21:18:22.314: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 26 21:18:23.318: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:18:23.318: INFO: Found 0 / 1 May 26 21:18:24.387: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:18:24.387: INFO: Found 0 / 1 May 26 21:18:25.319: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:18:25.319: INFO: Found 0 / 1 May 26 21:18:26.318: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:18:26.318: INFO: Found 1 / 1 May 26 21:18:26.318: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 May 26 21:18:26.322: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:18:26.322: INFO: ForEach: Found 1 pods from the filter. Now looping through them. 
May 26 21:18:26.322: INFO: wait on agnhost-master startup in kubectl-4711 May 26 21:18:26.322: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs agnhost-master-p26rq agnhost-master --namespace=kubectl-4711' May 26 21:18:26.441: INFO: stderr: "" May 26 21:18:26.441: INFO: stdout: "Paused\n" STEP: exposing RC May 26 21:18:26.441: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose rc agnhost-master --name=rm2 --port=1234 --target-port=6379 --namespace=kubectl-4711' May 26 21:18:26.584: INFO: stderr: "" May 26 21:18:26.584: INFO: stdout: "service/rm2 exposed\n" May 26 21:18:26.590: INFO: Service rm2 in namespace kubectl-4711 found. STEP: exposing service May 26 21:18:28.598: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config expose service rm2 --name=rm3 --port=2345 --target-port=6379 --namespace=kubectl-4711' May 26 21:18:28.761: INFO: stderr: "" May 26 21:18:28.761: INFO: stdout: "service/rm3 exposed\n" May 26 21:18:28.764: INFO: Service rm3 in namespace kubectl-4711 found. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:18:30.771: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4711" for this suite. • [SLOW TEST:11.263 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl expose /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1188 should create services for rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl expose should create services for rc [Conformance]","total":278,"completed":41,"skipped":602,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:18:30.780: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-b3763056-72ff-4f51-b133-14f98f0f06f7 STEP: Creating secret with name s-test-opt-upd-b396f5f8-e6e4-4554-8cb8-2cd4e92aecff STEP: Creating the pod STEP: Deleting secret s-test-opt-del-b3763056-72ff-4f51-b133-14f98f0f06f7 STEP: Updating secret s-test-opt-upd-b396f5f8-e6e4-4554-8cb8-2cd4e92aecff STEP: Creating secret with name s-test-opt-create-426e55ff-4fd2-4384-bec2-4cb9f3e3c48d STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:18:41.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-1255" for this suite. 
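------------------------------
The pod in this test mounts a projected volume whose secret sources are marked optional, so one source can be deleted and another created while the pod runs, with the kubelet syncing the mounted files in place. A sketch of such a pod, reusing the secret names from the log (pod name, image, and mount path are illustrative):

kubectl apply --namespace=projected-1255 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-secrets         # illustrative name
spec:
  containers:
  - name: projected-secret-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: projected-secret-volume
      mountPath: /etc/projected-secret-volume
      readOnly: true
  volumes:
  - name: projected-secret-volume
    projected:
      sources:
      - secret:
          name: s-test-opt-del-b3763056-72ff-4f51-b133-14f98f0f06f7
          optional: true              # source may be deleted while the pod runs
      - secret:
          name: s-test-opt-create-426e55ff-4fd2-4384-bec2-4cb9f3e3c48d
          optional: true              # source may not exist yet at pod start
EOF
------------------------------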
• [SLOW TEST:10.362 seconds] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_secret.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected secret optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":42,"skipped":640,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:18:41.143: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should project all components that make up the projection API [Projection][NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-projected-all-test-volume-4b14c71e-bede-42e1-9f96-79cb444ef2ca STEP: Creating secret with name secret-projected-all-test-volume-7d6b6ef0-54e0-4e04-b631-1931d12874b8 STEP: Creating a pod to test Check all projections for projected volume plugin May 26 21:18:41.283: INFO: Waiting up to 5m0s for pod "projected-volume-2edbc745-bbc2-4774-95f6-3241eabbcebd" in namespace "projected-4649" to be "success or failure" May 26 21:18:41.323: INFO: Pod "projected-volume-2edbc745-bbc2-4774-95f6-3241eabbcebd": Phase="Pending", Reason="", readiness=false. Elapsed: 39.472922ms May 26 21:18:43.328: INFO: Pod "projected-volume-2edbc745-bbc2-4774-95f6-3241eabbcebd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.044240924s May 26 21:18:45.332: INFO: Pod "projected-volume-2edbc745-bbc2-4774-95f6-3241eabbcebd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.04863458s STEP: Saw pod success May 26 21:18:45.332: INFO: Pod "projected-volume-2edbc745-bbc2-4774-95f6-3241eabbcebd" satisfied condition "success or failure" May 26 21:18:45.335: INFO: Trying to get logs from node jerma-worker2 pod projected-volume-2edbc745-bbc2-4774-95f6-3241eabbcebd container projected-all-volume-test: STEP: delete the pod May 26 21:18:45.377: INFO: Waiting for pod projected-volume-2edbc745-bbc2-4774-95f6-3241eabbcebd to disappear May 26 21:18:45.397: INFO: Pod projected-volume-2edbc745-bbc2-4774-95f6-3241eabbcebd no longer exists [AfterEach] [sig-storage] Projected combined /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:18:45.397: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4649" for this suite. 
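------------------------------
"All projections" here means a single projected volume drawing from a configMap, a secret, and the downward API at once. A sketch of the pod shape this test builds, reusing the configMap and secret names from the log (pod name, image, and mount path are illustrative):

kubectl apply --namespace=projected-4649 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: projected-volume-demo         # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: projected-all-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "ls /all-volume"]
    volumeMounts:
    - name: all-in-one
      mountPath: /all-volume
  volumes:
  - name: all-in-one
    projected:
      sources:
      - configMap:
          name: configmap-projected-all-test-volume-4b14c71e-bede-42e1-9f96-79cb444ef2ca
      - secret:
          name: secret-projected-all-test-volume-7d6b6ef0-54e0-4e04-b631-1931d12874b8
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF
------------------------------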
•{"msg":"PASSED [sig-storage] Projected combined should project all components that make up the projection API [Projection][NodeConformance] [Conformance]","total":278,"completed":43,"skipped":663,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:18:45.406: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should provide secure master service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:18:45.483: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5579" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 •{"msg":"PASSED [sig-network] Services should provide secure master service [Conformance]","total":278,"completed":44,"skipped":698,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:18:45.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in different groups (two CRDs) show up in OpenAPI documentation May 26 21:18:45.599: INFO: >>> kubeConfig: /root/.kube/config May 26 21:18:48.544: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:19:02.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-4023" for this suite. 
• [SLOW TEST:17.493 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of different groups [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of different groups [Conformance]","total":278,"completed":45,"skipped":702,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:19:02.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 26 21:19:03.499: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 26 21:19:05.520: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726124743, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726124743, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726124743, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726124743, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 21:19:08.539: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:19:08.542: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: v2 custom resource should be converted [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 
26 21:19:09.795: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-9826" for this suite. [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:6.874 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert from CR v1 to CR v2 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert from CR v1 to CR v2 [Conformance]","total":278,"completed":46,"skipped":736,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:19:09.858: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-f2aa80a1-fd58-4bb7-8742-3ac31b7c2c70 in namespace container-probe-7900 May 26 21:19:14.023: INFO: Started pod busybox-f2aa80a1-fd58-4bb7-8742-3ac31b7c2c70 in namespace container-probe-7900 STEP: checking the pod's current state and verifying that restartCount is present May 26 21:19:14.026: INFO: Initial restart count of pod busybox-f2aa80a1-fd58-4bb7-8742-3ac31b7c2c70 is 0 STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:23:15.134: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7900" for this suite. 
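------------------------------
The probe above runs `cat /tmp/health` inside the container, and the test asserts restartCount stays 0 for roughly four minutes. A sketch of a pod with that shape (name and timings are illustrative):

kubectl apply --namespace=container-probe-7900 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: busybox-liveness-demo         # illustrative name
spec:
  containers:
  - name: busybox
    image: docker.io/library/busybox:1.29
    # create the probed file up front and keep it, so the probe never fails
    command: ["sh", "-c", "touch /tmp/health; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/health"]
      initialDelaySeconds: 5
      periodSeconds: 5
EOF
# restartCount should remain 0 for the life of the pod
kubectl get pod busybox-liveness-demo --namespace=container-probe-7900 \
  -o jsonpath='{.status.containerStatuses[0].restartCount}'
------------------------------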
• [SLOW TEST:245.320 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":47,"skipped":762,"failed":0} SSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:23:15.178: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating Agnhost RC May 26 21:23:15.230: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6141' May 26 21:23:21.474: INFO: stderr: "" May 26 21:23:21.474: INFO: stdout: "replicationcontroller/agnhost-master created\n" STEP: Waiting for Agnhost master to start. May 26 21:23:22.504: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:23:22.504: INFO: Found 0 / 1 May 26 21:23:23.513: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:23:23.513: INFO: Found 0 / 1 May 26 21:23:24.478: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:23:24.478: INFO: Found 1 / 1 May 26 21:23:24.478: INFO: WaitFor completed with timeout 5m0s. Pods found = 1 out of 1 STEP: patching all pods May 26 21:23:24.481: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:23:24.481: INFO: ForEach: Found 1 pods from the filter. Now looping through them. May 26 21:23:24.481: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config patch pod agnhost-master-8kk9c --namespace=kubectl-6141 -p {"metadata":{"annotations":{"x":"y"}}}' May 26 21:23:24.578: INFO: stderr: "" May 26 21:23:24.578: INFO: stdout: "pod/agnhost-master-8kk9c patched\n" STEP: checking annotations May 26 21:23:24.584: INFO: Selector matched 1 pods for map[app:agnhost] May 26 21:23:24.584: INFO: ForEach: Found 1 pods from the filter. Now looping through them. [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:23:24.584: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6141" for this suite. 
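------------------------------
The patch command the test ran is shown verbatim above; only the shell quoting differs when issued by hand. To apply the same annotation and confirm it landed:

kubectl patch pod agnhost-master-8kk9c --namespace=kubectl-6141 \
  -p '{"metadata":{"annotations":{"x":"y"}}}'
kubectl get pod agnhost-master-8kk9c --namespace=kubectl-6141 \
  -o jsonpath='{.metadata.annotations.x}'    # prints: y
------------------------------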
• [SLOW TEST:9.412 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl patch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1432 should add annotations for pods in rc [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl patch should add annotations for pods in rc [Conformance]","total":278,"completed":48,"skipped":771,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:23:24.591: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-59fe2be4-56bc-410b-b8e5-90ceaa3e2a57 STEP: Creating a pod to test consume configMaps May 26 21:23:24.707: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-023895ce-5738-4811-ad88-fbcabf3ddca2" in namespace "projected-8324" to be "success or failure" May 26 21:23:24.710: INFO: Pod "pod-projected-configmaps-023895ce-5738-4811-ad88-fbcabf3ddca2": Phase="Pending", Reason="", readiness=false. Elapsed: 3.438926ms May 26 21:23:26.714: INFO: Pod "pod-projected-configmaps-023895ce-5738-4811-ad88-fbcabf3ddca2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007243847s May 26 21:23:28.718: INFO: Pod "pod-projected-configmaps-023895ce-5738-4811-ad88-fbcabf3ddca2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011693936s STEP: Saw pod success May 26 21:23:28.718: INFO: Pod "pod-projected-configmaps-023895ce-5738-4811-ad88-fbcabf3ddca2" satisfied condition "success or failure" May 26 21:23:28.722: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-023895ce-5738-4811-ad88-fbcabf3ddca2 container projected-configmap-volume-test: STEP: delete the pod May 26 21:23:28.756: INFO: Waiting for pod pod-projected-configmaps-023895ce-5738-4811-ad88-fbcabf3ddca2 to disappear May 26 21:23:28.761: INFO: Pod pod-projected-configmaps-023895ce-5738-4811-ad88-fbcabf3ddca2 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:23:28.761: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8324" for this suite. 
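------------------------------
"As non-root" in this test name means the pod runs with a non-zero UID while reading the projected configMap volume. A sketch of the pod shape, reusing the configMap name from the log (pod name, UID, and image are illustrative):

kubectl apply --namespace=projected-8324 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-demo # illustrative name
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1000                   # non-root UID, per the test name
  containers:
  - name: projected-configmap-volume-test
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /etc/projected-configmap-volume/*"]
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-59fe2be4-56bc-410b-b8e5-90ceaa3e2a57
EOF
------------------------------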
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume as non-root [NodeConformance] [Conformance]","total":278,"completed":49,"skipped":777,"failed":0} SSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:23:28.769: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods Set QOS Class /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:178 [It] should be set on Pods with matching resource requests and limits for memory and cpu [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying QOS class is set on the pod [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:23:28.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-4918" for this suite. •{"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Pods Set QOS Class should be set on Pods with matching resource requests and limits for memory and cpu [Conformance]","total":278,"completed":50,"skipped":788,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:23:28.901: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-14f9d91e-4ce0-47a3-93d6-2c0910709950 STEP: Creating the pod STEP: Updating configmap configmap-test-upd-14f9d91e-4ce0-47a3-93d6-2c0910709950 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:24:57.677: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-391" for this suite. 
• [SLOW TEST:88.785 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":51,"skipped":798,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:24:57.687: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 26 21:25:05.815: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 21:25:05.818: INFO: Pod pod-with-prestop-http-hook still exists May 26 21:25:07.818: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 21:25:07.823: INFO: Pod pod-with-prestop-http-hook still exists May 26 21:25:09.818: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 21:25:09.823: INFO: Pod pod-with-prestop-http-hook still exists May 26 21:25:11.818: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 21:25:11.822: INFO: Pod pod-with-prestop-http-hook still exists May 26 21:25:13.818: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 21:25:13.823: INFO: Pod pod-with-prestop-http-hook still exists May 26 21:25:15.818: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 21:25:15.823: INFO: Pod pod-with-prestop-http-hook still exists May 26 21:25:17.818: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 21:25:17.822: INFO: Pod pod-with-prestop-http-hook still exists May 26 21:25:19.818: INFO: Waiting for pod pod-with-prestop-http-hook to disappear May 26 21:25:19.823: INFO: Pod pod-with-prestop-http-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:25:19.842: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-2098" for this suite. 
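------------------------------
A preStop httpGet hook fires against an HTTP handler before the container is stopped, which is why deletion above takes several "still exists" polls to complete. A sketch of the hook stanza (handler path, port, and address are hypothetical; the test points them at the handler pod it created first):

kubectl apply --namespace=container-lifecycle-hook-2098 -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-prestop-http-hook    # same shape as the test pod, details illustrative
spec:
  containers:
  - name: pod-with-prestop-http-hook
    image: docker.io/library/httpd:2.4.38-alpine
    lifecycle:
      preStop:
        httpGet:
          path: /echo?msg=prestop     # hypothetical handler path
          port: 8080                  # hypothetical handler port
          host: 10.244.0.10           # hypothetical handler pod address
EOF
# deleting the pod triggers the preStop hook before the container is killed
kubectl delete pod pod-with-prestop-http-hook --namespace=container-lifecycle-hook-2098
------------------------------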
• [SLOW TEST:22.161 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop http hook properly [NodeConformance] [Conformance]","total":278,"completed":52,"skipped":813,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:25:19.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-6148 STEP: creating a selector STEP: Creating the service pods in kubernetes May 26 21:25:19.915: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 26 21:25:46.093: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.1.50 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6148 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:25:46.093: INFO: >>> kubeConfig: /root/.kube/config I0526 21:25:46.127886 6 log.go:172] (0xc0023ac630) (0xc0028f9860) Create stream I0526 21:25:46.127917 6 log.go:172] (0xc0023ac630) (0xc0028f9860) Stream added, broadcasting: 1 I0526 21:25:46.130722 6 log.go:172] (0xc0023ac630) Reply frame received for 1 I0526 21:25:46.130782 6 log.go:172] (0xc0023ac630) (0xc0028f9900) Create stream I0526 21:25:46.130808 6 log.go:172] (0xc0023ac630) (0xc0028f9900) Stream added, broadcasting: 3 I0526 21:25:46.132246 6 log.go:172] (0xc0023ac630) Reply frame received for 3 I0526 21:25:46.132281 6 log.go:172] (0xc0023ac630) (0xc0028f99a0) Create stream I0526 21:25:46.132300 6 log.go:172] (0xc0023ac630) (0xc0028f99a0) Stream added, broadcasting: 5 I0526 21:25:46.133630 6 log.go:172] (0xc0023ac630) Reply frame received for 5 I0526 21:25:47.275086 6 log.go:172] (0xc0023ac630) Data frame received for 3 I0526 21:25:47.275133 6 log.go:172] (0xc0028f9900) (3) Data frame handling I0526 21:25:47.275164 6 log.go:172] (0xc0028f9900) (3) Data frame sent I0526 21:25:47.275184 6 log.go:172] (0xc0023ac630) Data frame received for 3 I0526 21:25:47.275202 6 log.go:172] (0xc0028f9900) (3) Data frame handling I0526 21:25:47.275313 6 log.go:172] (0xc0023ac630) Data frame received for 5 I0526 21:25:47.275346 6 log.go:172] (0xc0028f99a0) (5) Data frame handling I0526 21:25:47.277514 6 log.go:172] (0xc0023ac630) Data frame received for 1 I0526 
21:25:47.277543 6 log.go:172] (0xc0028f9860) (1) Data frame handling I0526 21:25:47.277562 6 log.go:172] (0xc0028f9860) (1) Data frame sent I0526 21:25:47.282572 6 log.go:172] (0xc0023ac630) (0xc0028f9860) Stream removed, broadcasting: 1 I0526 21:25:47.282635 6 log.go:172] (0xc0023ac630) Go away received I0526 21:25:47.282853 6 log.go:172] (0xc0023ac630) (0xc0028f9860) Stream removed, broadcasting: 1 I0526 21:25:47.282992 6 log.go:172] (0xc0023ac630) (0xc0028f9900) Stream removed, broadcasting: 3 I0526 21:25:47.283008 6 log.go:172] (0xc0023ac630) (0xc0028f99a0) Stream removed, broadcasting: 5 May 26 21:25:47.283: INFO: Found all expected endpoints: [netserver-0] May 26 21:25:47.286: INFO: ExecWithOptions {Command:[/bin/sh -c echo hostName | nc -w 1 -u 10.244.2.100 8081 | grep -v '^\s*$'] Namespace:pod-network-test-6148 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:25:47.286: INFO: >>> kubeConfig: /root/.kube/config I0526 21:25:47.318755 6 log.go:172] (0xc001af46e0) (0xc00285adc0) Create stream I0526 21:25:47.318783 6 log.go:172] (0xc001af46e0) (0xc00285adc0) Stream added, broadcasting: 1 I0526 21:25:47.322030 6 log.go:172] (0xc001af46e0) Reply frame received for 1 I0526 21:25:47.322058 6 log.go:172] (0xc001af46e0) (0xc0027cb180) Create stream I0526 21:25:47.322068 6 log.go:172] (0xc001af46e0) (0xc0027cb180) Stream added, broadcasting: 3 I0526 21:25:47.323381 6 log.go:172] (0xc001af46e0) Reply frame received for 3 I0526 21:25:47.323417 6 log.go:172] (0xc001af46e0) (0xc00285ae60) Create stream I0526 21:25:47.323432 6 log.go:172] (0xc001af46e0) (0xc00285ae60) Stream added, broadcasting: 5 I0526 21:25:47.324415 6 log.go:172] (0xc001af46e0) Reply frame received for 5 I0526 21:25:48.528283 6 log.go:172] (0xc001af46e0) Data frame received for 3 I0526 21:25:48.528319 6 log.go:172] (0xc0027cb180) (3) Data frame handling I0526 21:25:48.528338 6 log.go:172] (0xc0027cb180) (3) Data frame sent I0526 21:25:48.528664 6 log.go:172] (0xc001af46e0) Data frame received for 5 I0526 21:25:48.528716 6 log.go:172] (0xc00285ae60) (5) Data frame handling I0526 21:25:48.529009 6 log.go:172] (0xc001af46e0) Data frame received for 3 I0526 21:25:48.529033 6 log.go:172] (0xc0027cb180) (3) Data frame handling I0526 21:25:48.531097 6 log.go:172] (0xc001af46e0) Data frame received for 1 I0526 21:25:48.531163 6 log.go:172] (0xc00285adc0) (1) Data frame handling I0526 21:25:48.531184 6 log.go:172] (0xc00285adc0) (1) Data frame sent I0526 21:25:48.531199 6 log.go:172] (0xc001af46e0) (0xc00285adc0) Stream removed, broadcasting: 1 I0526 21:25:48.531217 6 log.go:172] (0xc001af46e0) Go away received I0526 21:25:48.531325 6 log.go:172] (0xc001af46e0) (0xc00285adc0) Stream removed, broadcasting: 1 I0526 21:25:48.531356 6 log.go:172] (0xc001af46e0) (0xc0027cb180) Stream removed, broadcasting: 3 I0526 21:25:48.531376 6 log.go:172] (0xc001af46e0) (0xc00285ae60) Stream removed, broadcasting: 5 May 26 21:25:48.531: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:25:48.531: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-6148" for this suite. 
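------------------------------
The two exec streams above each push "hostName" over UDP from the host-network test pod to a netserver pod (10.244.1.50 and 10.244.2.100, port 8081) and expect a non-empty reply. The same probe can be issued by hand with the command from the log:

kubectl exec --namespace=pod-network-test-6148 host-test-container-pod -c agnhost -- \
  /bin/sh -c "echo hostName | nc -w 1 -u 10.244.1.50 8081 | grep -v '^\s*$'"
# a non-empty reply (the netserver's hostname) confirms node-to-pod UDP connectivity
------------------------------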
• [SLOW TEST:28.691 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: udp [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":53,"skipped":824,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:25:48.540: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 21:25:49.394: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 21:25:51.603: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125149, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125149, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125149, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125149, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 21:25:54.693: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:25:55.017: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6489-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource that should be mutated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 
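The registration STEP above boils down to creating a MutatingWebhookConfiguration that routes CREATE requests for the test CRD to the freshly deployed webhook service. A hedged sketch using the k8s.io/api types; the handler path, webhook name, and CA bundle below are placeholders rather than the test's actual values:

package sketch

import (
	admissionregv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// crMutatingWebhook sketches the kind of configuration the test registers:
// send CREATE requests for the custom resource to the sample webhook service.
func crMutatingWebhook(caBundle []byte) *admissionregv1.MutatingWebhookConfiguration {
	path := "/mutating-custom-resource" // assumed handler path
	sideEffects := admissionregv1.SideEffectClassNone
	return &admissionregv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "e2e-test-webhook-config"}, // placeholder name
		Webhooks: []admissionregv1.MutatingWebhook{{
			Name: "mutate-crs.webhook.example.com", // placeholder; must be a qualified name
			ClientConfig: admissionregv1.WebhookClientConfig{
				Service: &admissionregv1.ServiceReference{
					Namespace: "webhook-5054", Name: "e2e-test-webhook", Path: &path,
				},
				CABundle: caBundle, // cert the webhook server presents, from the "Setting up server cert" step
			},
			Rules: []admissionregv1.RuleWithOperations{{
				Operations: []admissionregv1.OperationType{admissionregv1.Create},
				Rule: admissionregv1.Rule{
					APIGroups:   []string{"webhook.example.com"},
					APIVersions: []string{"v1"},
					Resources:   []string{"e2e-test-webhook-6489-crds"},
				},
			}},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
}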
May 26 21:25:56.290: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5054" for this suite. STEP: Destroying namespace "webhook-5054-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.837 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource [Conformance]","total":278,"completed":54,"skipped":829,"failed":0} SSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:25:56.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not cause race condition when used for configmaps [Serial] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating 50 configmaps STEP: Creating RC which spawns configmap-volume pods May 26 21:25:57.543: INFO: Pod name wrapped-volume-race-aca48988-c00b-40eb-8e76-f2502c27810d: Found 0 pods out of 5 May 26 21:26:02.558: INFO: Pod name wrapped-volume-race-aca48988-c00b-40eb-8e76-f2502c27810d: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-aca48988-c00b-40eb-8e76-f2502c27810d in namespace emptydir-wrapper-7091, will wait for the garbage collector to delete the pods May 26 21:26:16.647: INFO: Deleting ReplicationController wrapped-volume-race-aca48988-c00b-40eb-8e76-f2502c27810d took: 15.951975ms May 26 21:26:16.748: INFO: Terminating ReplicationController wrapped-volume-race-aca48988-c00b-40eb-8e76-f2502c27810d pods took: 100.239525ms STEP: Creating RC which spawns configmap-volume pods May 26 21:26:30.582: INFO: Pod name wrapped-volume-race-9aeea545-5e3b-4be0-88aa-ed9ca189cc93: Found 0 pods out of 5 May 26 21:26:35.592: INFO: Pod name wrapped-volume-race-9aeea545-5e3b-4be0-88aa-ed9ca189cc93: Found 5 pods out of 5 STEP: Ensuring each pod is running STEP: deleting ReplicationController wrapped-volume-race-9aeea545-5e3b-4be0-88aa-ed9ca189cc93 in namespace emptydir-wrapper-7091, will wait for the garbage collector to delete the pods May 26 21:26:51.672: INFO: Deleting ReplicationController wrapped-volume-race-9aeea545-5e3b-4be0-88aa-ed9ca189cc93 took: 7.321081ms May 26 21:26:51.972: INFO: Terminating ReplicationController wrapped-volume-race-9aeea545-5e3b-4be0-88aa-ed9ca189cc93 pods took: 300.226827ms STEP: Creating RC which spawns configmap-volume pods May 26 21:26:59.917: INFO: Pod name wrapped-volume-race-9ccda6c7-a95d-4a28-9b8e-48bf93abc30e: Found 0 pods out of 5 May 26 21:27:04.927: INFO: Pod name 
wrapped-volume-race-9ccda6c7-a95d-4a28-9b8e-48bf93abc30e: Found 5 pods out of 5
STEP: Ensuring each pod is running
STEP: deleting ReplicationController wrapped-volume-race-9ccda6c7-a95d-4a28-9b8e-48bf93abc30e in namespace emptydir-wrapper-7091, will wait for the garbage collector to delete the pods
May 26 21:27:21.028: INFO: Deleting ReplicationController wrapped-volume-race-9ccda6c7-a95d-4a28-9b8e-48bf93abc30e took: 7.587964ms
May 26 21:27:21.329: INFO: Terminating ReplicationController wrapped-volume-race-9ccda6c7-a95d-4a28-9b8e-48bf93abc30e pods took: 300.471913ms
STEP: Cleaning up the configMaps
[AfterEach] [sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 21:27:30.283: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-wrapper-7091" for this suite.
• [SLOW TEST:93.944 seconds]
[sig-storage] EmptyDir wrapper volumes
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23
should not cause race condition when used for configmaps [Serial] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not cause race condition when used for configmaps [Serial] [Conformance]","total":278,"completed":55,"skipped":834,"failed":0}
SSSSSS
------------------------------
[sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 26 21:27:30.322: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename daemonsets
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-apps] Daemon set [Serial]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133
[It] should run and stop simple daemon [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating simple DaemonSet "daemon-set"
STEP: Check that daemon pods launch on every node of the cluster.
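The "simple DaemonSet" being created is, in outline, the object below (a sketch: the e2e suite uses its own serve-hostname image, so the image and label key here are stand-ins). Note that the pod template carries no toleration for node-role.kubernetes.io/master, which is exactly why the lines that follow skip the tainted jerma-control-plane node:

package sketch

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// simpleDaemonSet mirrors the shape of "daemon-set": one pod per schedulable
// node, matched by a label selector; no master toleration is set.
func simpleDaemonSet(ns string) *appsv1.DaemonSet {
	labels := map[string]string{"daemonset-name": "daemon-set"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "daemon-set", Namespace: ns},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "app",
						Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8", // assumption, not the suite's pinned image
						Args:  []string{"serve-hostname"},
					}},
				},
			},
		},
	}
}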
May 26 21:27:30.468: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:30.490: INFO: Number of nodes with available pods: 0 May 26 21:27:30.490: INFO: Node jerma-worker is running more than one daemon pod May 26 21:27:31.496: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:31.501: INFO: Number of nodes with available pods: 0 May 26 21:27:31.501: INFO: Node jerma-worker is running more than one daemon pod May 26 21:27:32.810: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:32.899: INFO: Number of nodes with available pods: 0 May 26 21:27:32.899: INFO: Node jerma-worker is running more than one daemon pod May 26 21:27:33.494: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:33.498: INFO: Number of nodes with available pods: 0 May 26 21:27:33.498: INFO: Node jerma-worker is running more than one daemon pod May 26 21:27:34.801: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:34.805: INFO: Number of nodes with available pods: 1 May 26 21:27:34.805: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:35.552: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:35.555: INFO: Number of nodes with available pods: 1 May 26 21:27:35.555: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:36.509: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:36.524: INFO: Number of nodes with available pods: 2 May 26 21:27:36.524: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Stop a daemon pod, check that the daemon pod is revived. 
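The revive check that follows deletes one daemon pod and polls until the controller has replaced it. A sketch of an equivalent client-go loop, using the context-free method signatures of client-go v0.17 to match this suite's vintage (the pod name and the 2-minute timeout are assumptions):

package sketch

import (
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForRevive deletes one daemon pod, then waits for the DaemonSet
// controller to bring availability back to the desired per-node count.
func waitForRevive(cs kubernetes.Interface, ns, podName string) error {
	if err := cs.CoreV1().Pods(ns).Delete(podName, &metav1.DeleteOptions{}); err != nil {
		return err
	}
	return wait.PollImmediate(time.Second, 2*time.Minute, func() (bool, error) {
		ds, err := cs.AppsV1().DaemonSets(ns).Get("daemon-set", metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		// Matches the log's "Number of running nodes: 2, number of available pods: 2" condition.
		return ds.Status.NumberAvailable == ds.Status.DesiredNumberScheduled, nil
	})
}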
May 26 21:27:36.604: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:36.619: INFO: Number of nodes with available pods: 1 May 26 21:27:36.619: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:37.647: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:37.659: INFO: Number of nodes with available pods: 1 May 26 21:27:37.659: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:38.624: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:38.643: INFO: Number of nodes with available pods: 1 May 26 21:27:38.643: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:39.625: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:39.628: INFO: Number of nodes with available pods: 1 May 26 21:27:39.628: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:40.625: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:40.628: INFO: Number of nodes with available pods: 1 May 26 21:27:40.628: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:41.625: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:41.629: INFO: Number of nodes with available pods: 1 May 26 21:27:41.629: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:42.623: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:42.626: INFO: Number of nodes with available pods: 1 May 26 21:27:42.626: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:43.625: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:43.628: INFO: Number of nodes with available pods: 1 May 26 21:27:43.628: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:44.625: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:44.628: INFO: Number of nodes with available pods: 1 May 26 21:27:44.628: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:45.625: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:45.628: INFO: Number of nodes with available pods: 1 May 26 21:27:45.628: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:46.624: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master 
Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:46.628: INFO: Number of nodes with available pods: 1 May 26 21:27:46.628: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:47.626: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:47.629: INFO: Number of nodes with available pods: 1 May 26 21:27:47.629: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:48.625: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:48.629: INFO: Number of nodes with available pods: 1 May 26 21:27:48.629: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:49.624: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:49.628: INFO: Number of nodes with available pods: 1 May 26 21:27:49.628: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:50.642: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:50.645: INFO: Number of nodes with available pods: 1 May 26 21:27:50.645: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:51.714: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:51.718: INFO: Number of nodes with available pods: 1 May 26 21:27:51.718: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:52.624: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:52.627: INFO: Number of nodes with available pods: 1 May 26 21:27:52.627: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:27:53.625: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:27:53.628: INFO: Number of nodes with available pods: 2 May 26 21:27:53.628: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-3433, will wait for the garbage collector to delete the pods May 26 21:27:53.708: INFO: Deleting DaemonSet.extensions daemon-set took: 6.677247ms May 26 21:27:54.009: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.52866ms May 26 21:27:59.612: INFO: Number of nodes with available pods: 0 May 26 21:27:59.612: INFO: Number of running nodes: 0, number of available pods: 0 May 26 21:27:59.619: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-3433/daemonsets","resourceVersion":"19377383"},"items":null} May 26 21:27:59.622: INFO: pods: 
{"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-3433/pods","resourceVersion":"19377383"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:27:59.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-3433" for this suite. • [SLOW TEST:29.317 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop simple daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop simple daemon [Conformance]","total":278,"completed":56,"skipped":840,"failed":0} SSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:27:59.639: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:27:59.729: INFO: Creating ReplicaSet my-hostname-basic-45bfacbb-ff9a-4ac1-bc17-b54b292ea65d May 26 21:27:59.753: INFO: Pod name my-hostname-basic-45bfacbb-ff9a-4ac1-bc17-b54b292ea65d: Found 0 pods out of 1 May 26 21:28:04.758: INFO: Pod name my-hostname-basic-45bfacbb-ff9a-4ac1-bc17-b54b292ea65d: Found 1 pods out of 1 May 26 21:28:04.758: INFO: Ensuring a pod for ReplicaSet "my-hostname-basic-45bfacbb-ff9a-4ac1-bc17-b54b292ea65d" is running May 26 21:28:04.764: INFO: Pod "my-hostname-basic-45bfacbb-ff9a-4ac1-bc17-b54b292ea65d-29kv7" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 21:27:59 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 21:28:03 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 21:28:03 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 21:27:59 +0000 UTC Reason: Message:}]) May 26 21:28:04.764: INFO: Trying to dial the pod May 26 21:28:09.775: INFO: Controller my-hostname-basic-45bfacbb-ff9a-4ac1-bc17-b54b292ea65d: Got expected result from replica 1 [my-hostname-basic-45bfacbb-ff9a-4ac1-bc17-b54b292ea65d-29kv7]: "my-hostname-basic-45bfacbb-ff9a-4ac1-bc17-b54b292ea65d-29kv7", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:28:09.776: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-5259" for this suite. 
• [SLOW TEST:10.144 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":57,"skipped":851,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:28:09.784: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-7b8ebbf7-f5ac-40dd-8c97-3a0e7a761d48 STEP: Creating a pod to test consume secrets May 26 21:28:09.980: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-14d84178-fe4d-4b91-8dcf-4e1ab450e441" in namespace "projected-9800" to be "success or failure" May 26 21:28:10.027: INFO: Pod "pod-projected-secrets-14d84178-fe4d-4b91-8dcf-4e1ab450e441": Phase="Pending", Reason="", readiness=false. Elapsed: 46.560445ms May 26 21:28:12.031: INFO: Pod "pod-projected-secrets-14d84178-fe4d-4b91-8dcf-4e1ab450e441": Phase="Pending", Reason="", readiness=false. Elapsed: 2.050578033s May 26 21:28:14.035: INFO: Pod "pod-projected-secrets-14d84178-fe4d-4b91-8dcf-4e1ab450e441": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.0550761s STEP: Saw pod success May 26 21:28:14.035: INFO: Pod "pod-projected-secrets-14d84178-fe4d-4b91-8dcf-4e1ab450e441" satisfied condition "success or failure" May 26 21:28:14.039: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-14d84178-fe4d-4b91-8dcf-4e1ab450e441 container projected-secret-volume-test: STEP: delete the pod May 26 21:28:14.164: INFO: Waiting for pod pod-projected-secrets-14d84178-fe4d-4b91-8dcf-4e1ab450e441 to disappear May 26 21:28:14.200: INFO: Pod pod-projected-secrets-14d84178-fe4d-4b91-8dcf-4e1ab450e441 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:28:14.201: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9800" for this suite. 
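The consumer pod in the projected-secret run above has this shape (a sketch with placeholder names, image, and IDs): a projected volume backed by the secret, mounted with defaultMode 0440, read by a non-root user whose access comes via the pod-level fsGroup and the group permission bit.

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// projectedSecretPod sketches the "success or failure" consumer pod: it runs
// once, reads the mounted secret, and exits 0 so the phase becomes Succeeded.
func projectedSecretPod(secretName string) *corev1.Pod {
	uid, fsGroup := int64(1000), int64(1001) // assumed non-root UID and fsGroup
	mode := int32(0440)
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets"},
		Spec: corev1.PodSpec{
			SecurityContext: &corev1.PodSecurityContext{RunAsUser: &uid, FSGroup: &fsGroup},
			RestartPolicy:   corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						DefaultMode: &mode,
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:         "projected-secret-volume-test",
				Image:        "busybox", // placeholder; the suite uses its own mounttest image
				Command:      []string{"sh", "-c", "ls -l /etc/projected && cat /etc/projected/*"},
				VolumeMounts: []corev1.VolumeMount{{Name: "projected-secret-volume", MountPath: "/etc/projected"}},
			}},
		},
	}
}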
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume as non-root with defaultMode and fsGroup set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":58,"skipped":894,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:28:14.208: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-a74b48af-1cd5-4b22-b22d-8e4ea4997198 STEP: Creating a pod to test consume configMaps May 26 21:28:14.290: INFO: Waiting up to 5m0s for pod "pod-configmaps-8b3a7338-bf71-4f8c-bb21-ba81a3f57edc" in namespace "configmap-9483" to be "success or failure" May 26 21:28:14.315: INFO: Pod "pod-configmaps-8b3a7338-bf71-4f8c-bb21-ba81a3f57edc": Phase="Pending", Reason="", readiness=false. Elapsed: 24.128448ms May 26 21:28:16.319: INFO: Pod "pod-configmaps-8b3a7338-bf71-4f8c-bb21-ba81a3f57edc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.028602058s May 26 21:28:18.324: INFO: Pod "pod-configmaps-8b3a7338-bf71-4f8c-bb21-ba81a3f57edc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.033518095s STEP: Saw pod success May 26 21:28:18.324: INFO: Pod "pod-configmaps-8b3a7338-bf71-4f8c-bb21-ba81a3f57edc" satisfied condition "success or failure" May 26 21:28:18.327: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-8b3a7338-bf71-4f8c-bb21-ba81a3f57edc container configmap-volume-test: STEP: delete the pod May 26 21:28:18.378: INFO: Waiting for pod pod-configmaps-8b3a7338-bf71-4f8c-bb21-ba81a3f57edc to disappear May 26 21:28:18.405: INFO: Pod pod-configmaps-8b3a7338-bf71-4f8c-bb21-ba81a3f57edc no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:28:18.405: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9483" for this suite. •{"msg":"PASSED [sig-storage] ConfigMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":59,"skipped":918,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:28:18.424: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should verify ResourceQuota with best effort scope. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota with best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a ResourceQuota with not best effort scope STEP: Ensuring ResourceQuota status is calculated STEP: Creating a best-effort pod STEP: Ensuring resource quota with best effort scope captures the pod usage STEP: Ensuring resource quota with not best effort ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage STEP: Creating a not best-effort pod STEP: Ensuring resource quota with not best effort scope captures the pod usage STEP: Ensuring resource quota with best effort scope ignored the pod usage STEP: Deleting the pod STEP: Ensuring resource quota status released the pod usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:28:34.786: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8568" for this suite. • [SLOW TEST:16.369 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should verify ResourceQuota with best effort scope. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with best effort scope. [Conformance]","total":278,"completed":60,"skipped":940,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:28:34.794: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod busybox-2fceaa9a-de8e-40c8-9606-363d689f281e in namespace container-probe-7108 May 26 21:28:39.018: INFO: Started pod busybox-2fceaa9a-de8e-40c8-9606-363d689f281e in namespace container-probe-7108 STEP: checking the pod's current state and verifying that restartCount is present May 26 21:28:39.021: INFO: Initial restart count of pod busybox-2fceaa9a-de8e-40c8-9606-363d689f281e is 0 May 26 21:29:29.213: INFO: Restart count of pod container-probe-7108/busybox-2fceaa9a-de8e-40c8-9606-363d689f281e is now 1 (50.192556942s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:29:29.252: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-7108" for this suite. 
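The probed pod above is the classic liveness-exec construction: the container creates /tmp/health, removes it after a delay, and the `cat /tmp/health` probe then starts failing, so the kubelet restarts the container, matching the restartCount 0 -> 1 transition in the log. A sketch (image, command, and timings are assumptions; note that the v1.17 API embeds Handler in Probe, which newer releases renamed ProbeHandler):

package sketch

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// livenessExecPod sketches a pod whose liveness probe is guaranteed to
// start failing once the container deletes its own health file.
func livenessExecPod() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "busybox-liveness"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:    "busybox",
				Image:   "busybox",
				Command: []string{"/bin/sh", "-c", "touch /tmp/health; sleep 10; rm -f /tmp/health; sleep 600"},
				LivenessProbe: &corev1.Probe{
					Handler: corev1.Handler{
						Exec: &corev1.ExecAction{Command: []string{"cat", "/tmp/health"}},
					},
					InitialDelaySeconds: 15,
					FailureThreshold:    1,
				},
			}},
		},
	}
}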
• [SLOW TEST:54.487 seconds]
[k8s.io] Probing container
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
should be restarted with a exec "cat /tmp/health" liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [k8s.io] Probing container should be restarted with a exec \"cat /tmp/health\" liveness probe [NodeConformance] [Conformance]","total":278,"completed":61,"skipped":955,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 26 21:29:29.282: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-runtime
STEP: Waiting for a default service account to be provisioned in namespace
[It] should run with the expected status [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Container 'terminate-cmd-rpa': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpa': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpa': should get the expected 'State'
STEP: Container 'terminate-cmd-rpa': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpof': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpof': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpof': should get the expected 'State'
STEP: Container 'terminate-cmd-rpof': should be possible to delete [NodeConformance]
STEP: Container 'terminate-cmd-rpn': should get the expected 'RestartCount'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Phase'
STEP: Container 'terminate-cmd-rpn': should get the expected 'Ready' condition
STEP: Container 'terminate-cmd-rpn': should get the expected 'State'
STEP: Container 'terminate-cmd-rpn': should be possible to delete [NodeConformance]
[AfterEach] [k8s.io] Container Runtime
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 21:30:01.450: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-runtime-8621" for this suite.
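The three containers above (the rpa/rpof/rpn suffixes denote restart policies Always, OnFailure, and Never) differ only in restart policy and exit behavior. A hedged sketch of how such a case can be built; the expected observations are that Always and OnFailure restart a failing container (RestartCount grows), while Never leaves the pod in phase Failed or Succeeded according to the exit code:

package sketch

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// terminateCase pairs a restart policy with a command that exits immediately
// with the given code, mirroring the logged terminate-cmd-* containers.
func terminateCase(name string, policy corev1.RestartPolicy, exitCode int) *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: name},
		Spec: corev1.PodSpec{
			RestartPolicy: policy,
			Containers: []corev1.Container{{
				Name:    name,
				Image:   "busybox", // placeholder image
				Command: []string{"sh", "-c", fmt.Sprintf("exit %d", exitCode)},
			}},
		},
	}
}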
• [SLOW TEST:32.175 seconds] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 blackbox test /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:38 when starting a container that exits /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/runtime.go:39 should run with the expected status [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Runtime blackbox test when starting a container that exits should run with the expected status [NodeConformance] [Conformance]","total":278,"completed":62,"skipped":1004,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:30:01.457: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 21:30:02.318: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 21:30:04.329: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125402, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125402, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125402, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125402, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 21:30:06.757: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125402, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125402, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", 
LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125402, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125402, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 21:30:09.374: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the crd webhook via the AdmissionRegistration API STEP: Creating a custom resource definition that should be denied by the webhook May 26 21:30:09.407: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:30:09.442: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-7634" for this suite. STEP: Destroying namespace "webhook-7634-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.060 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should deny crd creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should deny crd creation [Conformance]","total":278,"completed":63,"skipped":1030,"failed":0} SSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:30:09.518: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service-2.dns-3697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3697.svc.cluster.local A)" && test -n "$$check" && echo OK > 
/results/wheezy_tcp@dns-test-service-2.dns-3697.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3697.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local;check="$$(dig +notcp +noall +answer +search dns-test-service-2.dns-3697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service-2.dns-3697.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service-2.dns-3697.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service-2.dns-3697.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-3697.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 26 21:30:15.627: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:15.631: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:15.635: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:15.637: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:15.647: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:15.651: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:15.655: INFO: Unable to read 
jessie_udp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:15.658: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:15.663: INFO: Lookups using dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3697.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3697.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local jessie_udp@dns-test-service-2.dns-3697.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3697.svc.cluster.local] May 26 21:30:20.669: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:20.673: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:20.677: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:20.681: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:20.691: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:20.703: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:20.706: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:20.708: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:20.712: INFO: Lookups using dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local 
wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3697.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3697.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local jessie_udp@dns-test-service-2.dns-3697.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3697.svc.cluster.local] May 26 21:30:25.668: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:25.671: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:25.674: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:25.676: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:25.686: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:25.689: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:25.692: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:25.695: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:25.705: INFO: Lookups using dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3697.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3697.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local jessie_udp@dns-test-service-2.dns-3697.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3697.svc.cluster.local] May 26 21:30:30.669: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods 
dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:30.673: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:30.677: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:30.679: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:30.688: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:30.691: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:30.695: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:30.699: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:30.705: INFO: Lookups using dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3697.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3697.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local jessie_udp@dns-test-service-2.dns-3697.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3697.svc.cluster.local] May 26 21:30:35.680: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:35.683: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:35.695: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:35.698: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3697.svc.cluster.local from pod 
dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:35.707: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:35.710: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:35.712: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:35.714: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:35.718: INFO: Lookups using dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3697.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3697.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local jessie_udp@dns-test-service-2.dns-3697.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3697.svc.cluster.local] May 26 21:30:40.669: INFO: Unable to read wheezy_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:40.673: INFO: Unable to read wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:40.677: INFO: Unable to read wheezy_udp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:40.681: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:40.690: INFO: Unable to read jessie_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:40.694: INFO: Unable to read jessie_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods 
dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:40.697: INFO: Unable to read jessie_udp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:40.700: INFO: Unable to read jessie_tcp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:40.706: INFO: Lookups using dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3 failed for: [wheezy_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local wheezy_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local wheezy_udp@dns-test-service-2.dns-3697.svc.cluster.local wheezy_tcp@dns-test-service-2.dns-3697.svc.cluster.local jessie_udp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local jessie_tcp@dns-querier-2.dns-test-service-2.dns-3697.svc.cluster.local jessie_udp@dns-test-service-2.dns-3697.svc.cluster.local jessie_tcp@dns-test-service-2.dns-3697.svc.cluster.local] May 26 21:30:45.689: INFO: Unable to read wheezy_tcp@dns-test-service-2.dns-3697.svc.cluster.local from pod dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3: the server could not find the requested resource (get pods dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3) May 26 21:30:45.731: INFO: Lookups using dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3 failed for: [wheezy_tcp@dns-test-service-2.dns-3697.svc.cluster.local] May 26 21:30:50.704: INFO: DNS probes using dns-3697/dns-test-f6380cd8-a2c4-47f9-b976-7152eb6e47a3 succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:30:50.826: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3697" for this suite. 
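The repeated lookup failures above are the expected propagation window: per-pod DNS records of the form <hostname>.<subdomain>.<namespace>.svc.cluster.local only exist once a headless Service whose name matches the pods' spec.subdomain has ready endpoints. A rough Go sketch of that pairing, using the k8s.io/api types (the busybox image, label and sleep command are illustrative, not the exact e2e fixtures):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// Headless Service: clusterIP None. Cluster DNS publishes per-pod records
	// only for pods whose spec.subdomain matches a headless Service name.
	svc := corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "dns-test-service-2"},
		Spec: corev1.ServiceSpec{
			ClusterIP: corev1.ClusterIPNone,
			Selector:  map[string]string{"dns-test": "true"},
			Ports:     []corev1.ServicePort{{Name: "http", Port: 80}},
		},
	}
	pod := corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name:   "dns-querier-2",
			Labels: map[string]string{"dns-test": "true"},
		},
		Spec: corev1.PodSpec{
			Hostname:  "dns-querier-2",
			Subdomain: "dns-test-service-2", // must equal the headless Service name
			Containers: []corev1.Container{{
				Name:    "querier",
				Image:   "busybox",
				Command: []string{"sleep", "3600"},
			}},
		},
	}
	for _, obj := range []interface{}{svc, pod} {
		b, _ := json.MarshalIndent(obj, "", "  ")
		fmt.Println(string(b))
	}
}

Once DNS programs those records, both the UDP and TCP probes for the wheezy and jessie resolvers succeed, which is the transition visible above at 21:30:50.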
• [SLOW TEST:41.396 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Subdomain [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Subdomain [Conformance]","total":278,"completed":64,"skipped":1038,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:30:50.915: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-8573edb5-49ec-4009-a1d0-ba47a8c5e3da STEP: Creating a pod to test consume secrets May 26 21:30:51.520: INFO: Waiting up to 5m0s for pod "pod-secrets-7811b02f-308c-45ae-b798-091839132888" in namespace "secrets-8473" to be "success or failure" May 26 21:30:51.528: INFO: Pod "pod-secrets-7811b02f-308c-45ae-b798-091839132888": Phase="Pending", Reason="", readiness=false. Elapsed: 7.785216ms May 26 21:30:53.698: INFO: Pod "pod-secrets-7811b02f-308c-45ae-b798-091839132888": Phase="Pending", Reason="", readiness=false. Elapsed: 2.178017791s May 26 21:30:55.703: INFO: Pod "pod-secrets-7811b02f-308c-45ae-b798-091839132888": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.182376373s STEP: Saw pod success May 26 21:30:55.703: INFO: Pod "pod-secrets-7811b02f-308c-45ae-b798-091839132888" satisfied condition "success or failure" May 26 21:30:55.705: INFO: Trying to get logs from node jerma-worker pod pod-secrets-7811b02f-308c-45ae-b798-091839132888 container secret-volume-test: STEP: delete the pod May 26 21:30:55.739: INFO: Waiting for pod pod-secrets-7811b02f-308c-45ae-b798-091839132888 to disappear May 26 21:30:55.744: INFO: Pod pod-secrets-7811b02f-308c-45ae-b798-091839132888 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:30:55.744: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-8473" for this suite. 
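The secret-volume test above turns on two optional knobs of the secret volume source: an items mapping (key to custom path) and a per-item file mode. A minimal sketch, with placeholder secret/key names and an assumed 0400 mode:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mode := int32(0400) // illustrative per-item mode; the test's exact mode may differ
	vol := corev1.Volume{
		Name: "secret-volume",
		VolumeSource: corev1.VolumeSource{
			Secret: &corev1.SecretVolumeSource{
				SecretName: "secret-test-map-example",
				// "mappings and Item Mode set": remap the key to a new path
				// and give that one file an explicit mode.
				Items: []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1", Mode: &mode}},
			},
		},
	}
	b, _ := json.MarshalIndent(vol, "", "  ")
	fmt.Println(string(b))
}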
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":65,"skipped":1100,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:30:55.772: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 26 21:30:55.836: INFO: Waiting up to 5m0s for pod "pod-f6edc60e-2607-43d8-8add-6bdfe7a031da" in namespace "emptydir-7941" to be "success or failure" May 26 21:30:55.858: INFO: Pod "pod-f6edc60e-2607-43d8-8add-6bdfe7a031da": Phase="Pending", Reason="", readiness=false. Elapsed: 22.530409ms May 26 21:30:57.861: INFO: Pod "pod-f6edc60e-2607-43d8-8add-6bdfe7a031da": Phase="Pending", Reason="", readiness=false. Elapsed: 2.025555699s May 26 21:30:59.866: INFO: Pod "pod-f6edc60e-2607-43d8-8add-6bdfe7a031da": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029752064s STEP: Saw pod success May 26 21:30:59.866: INFO: Pod "pod-f6edc60e-2607-43d8-8add-6bdfe7a031da" satisfied condition "success or failure" May 26 21:30:59.868: INFO: Trying to get logs from node jerma-worker2 pod pod-f6edc60e-2607-43d8-8add-6bdfe7a031da container test-container: STEP: delete the pod May 26 21:30:59.963: INFO: Waiting for pod pod-f6edc60e-2607-43d8-8add-6bdfe7a031da to disappear May 26 21:30:59.967: INFO: Pod pod-f6edc60e-2607-43d8-8add-6bdfe7a031da no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:30:59.967: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7941" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":66,"skipped":1138,"failed":0} SS ------------------------------ [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:30:59.973: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should allow opting out of API token automount [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token May 26 21:31:00.546: INFO: created pod pod-service-account-defaultsa May 26 21:31:00.546: INFO: pod pod-service-account-defaultsa service account token volume mount: true May 26 21:31:00.572: INFO: created pod pod-service-account-mountsa May 26 21:31:00.572: INFO: pod pod-service-account-mountsa service account token volume mount: true May 26 21:31:00.602: INFO: created pod pod-service-account-nomountsa May 26 21:31:00.602: INFO: pod pod-service-account-nomountsa service account token volume mount: false May 26 21:31:00.630: INFO: created pod pod-service-account-defaultsa-mountspec May 26 21:31:00.630: INFO: pod pod-service-account-defaultsa-mountspec service account token volume mount: true May 26 21:31:00.719: INFO: created pod pod-service-account-mountsa-mountspec May 26 21:31:00.719: INFO: pod pod-service-account-mountsa-mountspec service account token volume mount: true May 26 21:31:00.729: INFO: created pod pod-service-account-nomountsa-mountspec May 26 21:31:00.729: INFO: pod pod-service-account-nomountsa-mountspec service account token volume mount: true May 26 21:31:00.765: INFO: created pod pod-service-account-defaultsa-nomountspec May 26 21:31:00.765: INFO: pod pod-service-account-defaultsa-nomountspec service account token volume mount: false May 26 21:31:00.780: INFO: created pod pod-service-account-mountsa-nomountspec May 26 21:31:00.780: INFO: pod pod-service-account-mountsa-nomountspec service account token volume mount: false May 26 21:31:00.811: INFO: created pod pod-service-account-nomountsa-nomountspec May 26 21:31:00.811: INFO: pod pod-service-account-nomountsa-nomountspec service account token volume mount: false [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:31:00.811: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-6703" for this suite. 
•{"msg":"PASSED [sig-auth] ServiceAccounts should allow opting out of API token automount [Conformance]","total":278,"completed":67,"skipped":1140,"failed":0} ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:31:00.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 21:31:03.178: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 21:31:06.129: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125463, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125463, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125463, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125462, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 21:31:08.399: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125463, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125463, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125463, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125462, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 21:31:10.309: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125463, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125463, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125463, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125462, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 21:31:12.411: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125463, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125463, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125463, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125462, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 21:31:15.435: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod STEP: 'kubectl attach' the pod, should be denied by the webhook May 26 21:31:19.528: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config attach --namespace=webhook-3413 to-be-attached-pod -i -c=container1' May 26 21:31:19.659: INFO: rc: 1 [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:31:19.665: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3413" for this suite. STEP: Destroying namespace "webhook-3413-markers" for this suite. 
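kubectl attach is served as a CONNECT call on the pods/attach subresource, so a validating webhook registered for that operation is what produced the rc: 1 above. A rough shape of such a registration; the service reference, path, and names are placeholders, and a working setup additionally needs the CA bundle / TLS wiring that the test prepares in its BeforeEach:

package main

import (
	"encoding/json"
	"fmt"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	failurePolicy := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/pods/attach" // hypothetical path served by the webhook pod
	cfg := admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "deny-attaching-pod.example.com"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name: "deny-attaching-pod.example.com",
			// kubectl attach = CONNECT on pods/attach; that is the exact
			// operation/resource pair this rule intercepts.
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Connect},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"pods/attach"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				// CABundle omitted in this sketch; required for real TLS verification.
				Service: &admissionregistrationv1.ServiceReference{Namespace: "webhook-ns", Name: "e2e-test-webhook", Path: &path},
			},
			FailurePolicy:           &failurePolicy,
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	b, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(b))
}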
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:18.869 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny attaching pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny attaching pod [Conformance]","total":278,"completed":68,"skipped":1140,"failed":0} SSSS ------------------------------ [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:31:19.795: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-8ce28cc7-e678-4079-bd43-8b995b1a6ea3 STEP: Creating a pod to test consume configMaps May 26 21:31:19.939: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-35576b0e-9638-4fa3-9f0c-8695baf04bd6" in namespace "projected-9828" to be "success or failure" May 26 21:31:19.942: INFO: Pod "pod-projected-configmaps-35576b0e-9638-4fa3-9f0c-8695baf04bd6": Phase="Pending", Reason="", readiness=false. Elapsed: 3.28023ms May 26 21:31:21.946: INFO: Pod "pod-projected-configmaps-35576b0e-9638-4fa3-9f0c-8695baf04bd6": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007496113s May 26 21:31:23.950: INFO: Pod "pod-projected-configmaps-35576b0e-9638-4fa3-9f0c-8695baf04bd6": Phase="Running", Reason="", readiness=true. Elapsed: 4.011507631s May 26 21:31:25.955: INFO: Pod "pod-projected-configmaps-35576b0e-9638-4fa3-9f0c-8695baf04bd6": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016298328s STEP: Saw pod success May 26 21:31:25.955: INFO: Pod "pod-projected-configmaps-35576b0e-9638-4fa3-9f0c-8695baf04bd6" satisfied condition "success or failure" May 26 21:31:25.959: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-35576b0e-9638-4fa3-9f0c-8695baf04bd6 container projected-configmap-volume-test: STEP: delete the pod May 26 21:31:26.023: INFO: Waiting for pod pod-projected-configmaps-35576b0e-9638-4fa3-9f0c-8695baf04bd6 to disappear May 26 21:31:26.045: INFO: Pod pod-projected-configmaps-35576b0e-9638-4fa3-9f0c-8695baf04bd6 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:31:26.045: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9828" for this suite. 
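"Consumable in multiple volumes in the same pod" works because projected volumes can each reference the same ConfigMap; the pod then mounts the two volumes at different paths. A compact sketch with an illustrative ConfigMap name:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	project := func(name string) corev1.Volume {
		return corev1.Volume{
			Name: name,
			VolumeSource: corev1.VolumeSource{
				Projected: &corev1.ProjectedVolumeSource{
					Sources: []corev1.VolumeProjection{{
						ConfigMap: &corev1.ConfigMapProjection{
							LocalObjectReference: corev1.LocalObjectReference{Name: "projected-configmap-test-volume-example"},
						},
					}},
				},
			},
		}
	}
	// The same ConfigMap projected into two volumes of one pod spec.
	vols := []corev1.Volume{project("projected-configmap-volume"), project("projected-configmap-volume-2")}
	b, _ := json.MarshalIndent(vols, "", "  ")
	fmt.Println(string(b))
}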
• [SLOW TEST:6.256 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable in multiple volumes in the same pod [NodeConformance] [Conformance]","total":278,"completed":69,"skipped":1144,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:31:26.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 26 21:31:26.112: INFO: Waiting up to 5m0s for pod "pod-b87dbd0d-40f3-4c83-a30b-44c6ef76d42a" in namespace "emptydir-5709" to be "success or failure" May 26 21:31:26.116: INFO: Pod "pod-b87dbd0d-40f3-4c83-a30b-44c6ef76d42a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.743971ms May 26 21:31:28.165: INFO: Pod "pod-b87dbd0d-40f3-4c83-a30b-44c6ef76d42a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.053508835s May 26 21:31:30.176: INFO: Pod "pod-b87dbd0d-40f3-4c83-a30b-44c6ef76d42a": Phase="Running", Reason="", readiness=true. Elapsed: 4.064790467s May 26 21:31:32.181: INFO: Pod "pod-b87dbd0d-40f3-4c83-a30b-44c6ef76d42a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.069385714s STEP: Saw pod success May 26 21:31:32.181: INFO: Pod "pod-b87dbd0d-40f3-4c83-a30b-44c6ef76d42a" satisfied condition "success or failure" May 26 21:31:32.184: INFO: Trying to get logs from node jerma-worker2 pod pod-b87dbd0d-40f3-4c83-a30b-44c6ef76d42a container test-container: STEP: delete the pod May 26 21:31:32.223: INFO: Waiting for pod pod-b87dbd0d-40f3-4c83-a30b-44c6ef76d42a to disappear May 26 21:31:32.236: INFO: Pod pod-b87dbd0d-40f3-4c83-a30b-44c6ef76d42a no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:31:32.236: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5709" for this suite. 
• [SLOW TEST:6.225 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":70,"skipped":1153,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:31:32.278: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 26 21:31:32.357: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4633f03e-4774-4cf1-8e6f-a7b30cdb4fb0" in namespace "projected-9094" to be "success or failure" May 26 21:31:32.362: INFO: Pod "downwardapi-volume-4633f03e-4774-4cf1-8e6f-a7b30cdb4fb0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.247323ms May 26 21:31:34.365: INFO: Pod "downwardapi-volume-4633f03e-4774-4cf1-8e6f-a7b30cdb4fb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007818517s May 26 21:31:36.370: INFO: Pod "downwardapi-volume-4633f03e-4774-4cf1-8e6f-a7b30cdb4fb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012333327s STEP: Saw pod success May 26 21:31:36.370: INFO: Pod "downwardapi-volume-4633f03e-4774-4cf1-8e6f-a7b30cdb4fb0" satisfied condition "success or failure" May 26 21:31:36.372: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-4633f03e-4774-4cf1-8e6f-a7b30cdb4fb0 container client-container: STEP: delete the pod May 26 21:31:36.556: INFO: Waiting for pod downwardapi-volume-4633f03e-4774-4cf1-8e6f-a7b30cdb4fb0 to disappear May 26 21:31:36.566: INFO: Pod downwardapi-volume-4633f03e-4774-4cf1-8e6f-a7b30cdb4fb0 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:31:36.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9094" for this suite. 
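The default-memory-limit behavior being verified: when the container declares no memory limit, a downward API resourceFieldRef for limits.memory is resolved by the kubelet to the node's allocatable memory, and the test reads that value back from the mounted file. An assumed-shape sketch of the projected downwardAPI source used for this (path and container name are illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// With no memory limit set on "client-container", the kubelet substitutes
	// the node's allocatable memory for limits.memory.
	src := corev1.VolumeSource{
		Projected: &corev1.ProjectedVolumeSource{
			Sources: []corev1.VolumeProjection{{
				DownwardAPI: &corev1.DownwardAPIProjection{
					Items: []corev1.DownwardAPIVolumeFile{{
						Path: "memory_limit",
						ResourceFieldRef: &corev1.ResourceFieldSelector{
							ContainerName: "client-container",
							Resource:      "limits.memory",
						},
					}},
				},
			}},
		},
	}
	b, _ := json.MarshalIndent(src, "", "  ")
	fmt.Println(string(b))
}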
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":71,"skipped":1168,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:31:36.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-16794d01-383b-4459-ae14-9b3e8f1b0e12 STEP: Creating a pod to test consume secrets May 26 21:31:36.699: INFO: Waiting up to 5m0s for pod "pod-secrets-e75a1aae-4d1d-4747-8411-8e2b3dad1b58" in namespace "secrets-4535" to be "success or failure" May 26 21:31:36.710: INFO: Pod "pod-secrets-e75a1aae-4d1d-4747-8411-8e2b3dad1b58": Phase="Pending", Reason="", readiness=false. Elapsed: 10.654009ms May 26 21:31:38.715: INFO: Pod "pod-secrets-e75a1aae-4d1d-4747-8411-8e2b3dad1b58": Phase="Pending", Reason="", readiness=false. Elapsed: 2.015610401s May 26 21:31:40.719: INFO: Pod "pod-secrets-e75a1aae-4d1d-4747-8411-8e2b3dad1b58": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.020392872s STEP: Saw pod success May 26 21:31:40.719: INFO: Pod "pod-secrets-e75a1aae-4d1d-4747-8411-8e2b3dad1b58" satisfied condition "success or failure" May 26 21:31:40.723: INFO: Trying to get logs from node jerma-worker pod pod-secrets-e75a1aae-4d1d-4747-8411-8e2b3dad1b58 container secret-env-test: STEP: delete the pod May 26 21:31:40.741: INFO: Waiting for pod pod-secrets-e75a1aae-4d1d-4747-8411-8e2b3dad1b58 to disappear May 26 21:31:40.814: INFO: Pod pod-secrets-e75a1aae-4d1d-4747-8411-8e2b3dad1b58 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:31:40.814: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4535" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable from pods in env vars [NodeConformance] [Conformance]","total":278,"completed":72,"skipped":1201,"failed":0} SS ------------------------------ [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:31:40.822: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:31:44.936: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-2986" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a read only busybox container should not write to root filesystem [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":73,"skipped":1203,"failed":0} ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:31:44.944: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-9be08273-23a9-4d0a-a686-53da032b10b3 STEP: Creating a pod to test consume configMaps May 26 21:31:45.017: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-b6114846-3348-42f7-a32f-68080ea260c8" in namespace "projected-3322" to be "success or failure" May 26 21:31:45.022: INFO: Pod "pod-projected-configmaps-b6114846-3348-42f7-a32f-68080ea260c8": Phase="Pending", Reason="", readiness=false. Elapsed: 4.57352ms May 26 21:31:47.026: INFO: Pod "pod-projected-configmaps-b6114846-3348-42f7-a32f-68080ea260c8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008891352s May 26 21:31:49.031: INFO: Pod "pod-projected-configmaps-b6114846-3348-42f7-a32f-68080ea260c8": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.013431874s STEP: Saw pod success May 26 21:31:49.031: INFO: Pod "pod-projected-configmaps-b6114846-3348-42f7-a32f-68080ea260c8" satisfied condition "success or failure" May 26 21:31:49.034: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-b6114846-3348-42f7-a32f-68080ea260c8 container projected-configmap-volume-test: STEP: delete the pod May 26 21:31:49.060: INFO: Waiting for pod pod-projected-configmaps-b6114846-3348-42f7-a32f-68080ea260c8 to disappear May 26 21:31:49.064: INFO: Pod pod-projected-configmaps-b6114846-3348-42f7-a32f-68080ea260c8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:31:49.064: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-3322" for this suite. •{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":74,"skipped":1203,"failed":0} ------------------------------ [sig-network] DNS should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:31:49.070: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should support configurable pod DNS nameservers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod with dnsPolicy=None and customized dnsConfig... 
May 26 21:31:49.126: INFO: Created pod &Pod{ObjectMeta:{dns-3571 dns-3571 /api/v1/namespaces/dns-3571/pods/dns-3571 bca2a99f-263a-405e-9a75-2bc571acde20 19378780 0 2020-05-26 21:31:49 +0000 UTC map[] map[] [] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-pkz86,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-pkz86,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[pause],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-pkz86,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:None,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:&PodDNSConfig{Nameservers:[1.1.1.1],Searches:[resolv.conf.local],Options:[]PodDNSConfigOption{},},ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: Verifying customized DNS suffix list is configured on pod... 
May 26 21:31:53.148: INFO: ExecWithOptions {Command:[/agnhost dns-suffix] Namespace:dns-3571 PodName:dns-3571 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:31:53.148: INFO: >>> kubeConfig: /root/.kube/config I0526 21:31:53.181408 6 log.go:172] (0xc001af44d0) (0xc001dbf040) Create stream I0526 21:31:53.181459 6 log.go:172] (0xc001af44d0) (0xc001dbf040) Stream added, broadcasting: 1 I0526 21:31:53.183790 6 log.go:172] (0xc001af44d0) Reply frame received for 1 I0526 21:31:53.183850 6 log.go:172] (0xc001af44d0) (0xc001aa6000) Create stream I0526 21:31:53.183870 6 log.go:172] (0xc001af44d0) (0xc001aa6000) Stream added, broadcasting: 3 I0526 21:31:53.184923 6 log.go:172] (0xc001af44d0) Reply frame received for 3 I0526 21:31:53.184962 6 log.go:172] (0xc001af44d0) (0xc001aa6140) Create stream I0526 21:31:53.184977 6 log.go:172] (0xc001af44d0) (0xc001aa6140) Stream added, broadcasting: 5 I0526 21:31:53.186215 6 log.go:172] (0xc001af44d0) Reply frame received for 5 I0526 21:31:53.284121 6 log.go:172] (0xc001af44d0) Data frame received for 3 I0526 21:31:53.284154 6 log.go:172] (0xc001aa6000) (3) Data frame handling I0526 21:31:53.284180 6 log.go:172] (0xc001aa6000) (3) Data frame sent I0526 21:31:53.285778 6 log.go:172] (0xc001af44d0) Data frame received for 5 I0526 21:31:53.285824 6 log.go:172] (0xc001aa6140) (5) Data frame handling I0526 21:31:53.285854 6 log.go:172] (0xc001af44d0) Data frame received for 3 I0526 21:31:53.285867 6 log.go:172] (0xc001aa6000) (3) Data frame handling I0526 21:31:53.287466 6 log.go:172] (0xc001af44d0) Data frame received for 1 I0526 21:31:53.287479 6 log.go:172] (0xc001dbf040) (1) Data frame handling I0526 21:31:53.287486 6 log.go:172] (0xc001dbf040) (1) Data frame sent I0526 21:31:53.287494 6 log.go:172] (0xc001af44d0) (0xc001dbf040) Stream removed, broadcasting: 1 I0526 21:31:53.287502 6 log.go:172] (0xc001af44d0) Go away received I0526 21:31:53.287646 6 log.go:172] (0xc001af44d0) (0xc001dbf040) Stream removed, broadcasting: 1 I0526 21:31:53.287690 6 log.go:172] (0xc001af44d0) (0xc001aa6000) Stream removed, broadcasting: 3 I0526 21:31:53.287714 6 log.go:172] (0xc001af44d0) (0xc001aa6140) Stream removed, broadcasting: 5 STEP: Verifying customized DNS server is configured on pod... 
May 26 21:31:53.287: INFO: ExecWithOptions {Command:[/agnhost dns-server-list] Namespace:dns-3571 PodName:dns-3571 ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:31:53.287: INFO: >>> kubeConfig: /root/.kube/config I0526 21:31:53.323693 6 log.go:172] (0xc0007be6e0) (0xc002177860) Create stream I0526 21:31:53.323721 6 log.go:172] (0xc0007be6e0) (0xc002177860) Stream added, broadcasting: 1 I0526 21:31:53.326004 6 log.go:172] (0xc0007be6e0) Reply frame received for 1 I0526 21:31:53.326059 6 log.go:172] (0xc0007be6e0) (0xc001aa61e0) Create stream I0526 21:31:53.326166 6 log.go:172] (0xc0007be6e0) (0xc001aa61e0) Stream added, broadcasting: 3 I0526 21:31:53.327156 6 log.go:172] (0xc0007be6e0) Reply frame received for 3 I0526 21:31:53.327184 6 log.go:172] (0xc0007be6e0) (0xc001aa6320) Create stream I0526 21:31:53.327196 6 log.go:172] (0xc0007be6e0) (0xc001aa6320) Stream added, broadcasting: 5 I0526 21:31:53.328019 6 log.go:172] (0xc0007be6e0) Reply frame received for 5 I0526 21:31:53.414476 6 log.go:172] (0xc0007be6e0) Data frame received for 3 I0526 21:31:53.414510 6 log.go:172] (0xc001aa61e0) (3) Data frame handling I0526 21:31:53.414652 6 log.go:172] (0xc001aa61e0) (3) Data frame sent I0526 21:31:53.416630 6 log.go:172] (0xc0007be6e0) Data frame received for 5 I0526 21:31:53.416657 6 log.go:172] (0xc001aa6320) (5) Data frame handling I0526 21:31:53.416683 6 log.go:172] (0xc0007be6e0) Data frame received for 3 I0526 21:31:53.416699 6 log.go:172] (0xc001aa61e0) (3) Data frame handling I0526 21:31:53.418846 6 log.go:172] (0xc0007be6e0) Data frame received for 1 I0526 21:31:53.418859 6 log.go:172] (0xc002177860) (1) Data frame handling I0526 21:31:53.418866 6 log.go:172] (0xc002177860) (1) Data frame sent I0526 21:31:53.418968 6 log.go:172] (0xc0007be6e0) (0xc002177860) Stream removed, broadcasting: 1 I0526 21:31:53.419068 6 log.go:172] (0xc0007be6e0) Go away received I0526 21:31:53.419137 6 log.go:172] (0xc0007be6e0) (0xc002177860) Stream removed, broadcasting: 1 I0526 21:31:53.419164 6 log.go:172] (0xc0007be6e0) (0xc001aa61e0) Stream removed, broadcasting: 3 I0526 21:31:53.419179 6 log.go:172] (0xc0007be6e0) (0xc001aa6320) Stream removed, broadcasting: 5 May 26 21:31:53.419: INFO: Deleting pod dns-3571... [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:31:53.475: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-3571" for this suite. 
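The pod dump at 21:31:49 above shows the two fields under test: DNSPolicy None plus an explicit DNSConfig. With policy None the kubelet composes the pod's /etc/resolv.conf from dnsConfig alone, which is what the two agnhost exec probes confirm. Trimmed to those fields (nameserver, search path, and image taken from the dump; the rest is illustrative):

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// dnsPolicy None ignores cluster DNS entirely; resolv.conf comes from
	// dnsConfig, so the probes see exactly 1.1.1.1 and resolv.conf.local.
	spec := corev1.PodSpec{
		DNSPolicy: corev1.DNSNone,
		DNSConfig: &corev1.PodDNSConfig{
			Nameservers: []string{"1.1.1.1"},
			Searches:    []string{"resolv.conf.local"},
		},
		Containers: []corev1.Container{{
			Name:  "agnhost",
			Image: "gcr.io/kubernetes-e2e-test-images/agnhost:2.8",
			Args:  []string{"pause"},
		}},
	}
	b, _ := json.MarshalIndent(spec, "", "  ")
	fmt.Println(string(b))
}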
•{"msg":"PASSED [sig-network] DNS should support configurable pod DNS nameservers [Conformance]","total":278,"completed":75,"skipped":1203,"failed":0} SSSS ------------------------------ [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:31:53.524: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name s-test-opt-del-cc55cd12-c514-4682-a87e-2533648d4042 STEP: Creating secret with name s-test-opt-upd-55e87cea-cc3d-427a-8832-8d4e1279e5e8 STEP: Creating the pod STEP: Deleting secret s-test-opt-del-cc55cd12-c514-4682-a87e-2533648d4042 STEP: Updating secret s-test-opt-upd-55e87cea-cc3d-427a-8832-8d4e1279e5e8 STEP: Creating secret with name s-test-opt-create-b4f1abf6-886b-47ec-9253-c7730e54c387 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:32:02.077: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-245" for this suite. • [SLOW TEST:8.582 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":76,"skipped":1207,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:32:02.107: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring job reaches completions [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:32:18.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-7205" for this suite. 
• [SLOW TEST:16.157 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run a job to completion when tasks sometimes fail and are locally restarted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should run a job to completion when tasks sometimes fail and are locally restarted [Conformance]","total":278,"completed":77,"skipped":1243,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:32:18.264: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 26 21:32:18.521: INFO: Waiting up to 5m0s for pod "downwardapi-volume-51187dc2-92d2-47d4-8f1f-c1eaecc53542" in namespace "downward-api-2571" to be "success or failure" May 26 21:32:18.531: INFO: Pod "downwardapi-volume-51187dc2-92d2-47d4-8f1f-c1eaecc53542": Phase="Pending", Reason="", readiness=false. Elapsed: 10.142542ms May 26 21:32:20.705: INFO: Pod "downwardapi-volume-51187dc2-92d2-47d4-8f1f-c1eaecc53542": Phase="Pending", Reason="", readiness=false. Elapsed: 2.184487159s May 26 21:32:22.710: INFO: Pod "downwardapi-volume-51187dc2-92d2-47d4-8f1f-c1eaecc53542": Phase="Running", Reason="", readiness=true. Elapsed: 4.188919799s May 26 21:32:24.715: INFO: Pod "downwardapi-volume-51187dc2-92d2-47d4-8f1f-c1eaecc53542": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.19385629s STEP: Saw pod success May 26 21:32:24.715: INFO: Pod "downwardapi-volume-51187dc2-92d2-47d4-8f1f-c1eaecc53542" satisfied condition "success or failure" May 26 21:32:24.718: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-51187dc2-92d2-47d4-8f1f-c1eaecc53542 container client-container: STEP: delete the pod May 26 21:32:24.772: INFO: Waiting for pod downwardapi-volume-51187dc2-92d2-47d4-8f1f-c1eaecc53542 to disappear May 26 21:32:24.784: INFO: Pod downwardapi-volume-51187dc2-92d2-47d4-8f1f-c1eaecc53542 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:32:24.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2571" for this suite. 
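"Podname only" is the smallest downward API volume: a single item whose fieldRef points at metadata.name; the test container cats the mounted file and the framework checks the pod's own name in the log retrieved above. Sketch:

package main

import (
	"encoding/json"
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// metadata.name exposed as a file named "podname" inside the volume.
	src := corev1.VolumeSource{
		DownwardAPI: &corev1.DownwardAPIVolumeSource{
			Items: []corev1.DownwardAPIVolumeFile{{
				Path:     "podname",
				FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"},
			}},
		},
	}
	b, _ := json.MarshalIndent(src, "", "  ")
	fmt.Println(string(b))
}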
• [SLOW TEST:6.527 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide podname only [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide podname only [NodeConformance] [Conformance]","total":278,"completed":78,"skipped":1252,"failed":0} SSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:32:24.791: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 26 21:32:24.876: INFO: Waiting up to 5m0s for pod "downwardapi-volume-6ded77fc-ca34-48c5-84ce-09580a6e347f" in namespace "downward-api-9286" to be "success or failure" May 26 21:32:24.879: INFO: Pod "downwardapi-volume-6ded77fc-ca34-48c5-84ce-09580a6e347f": Phase="Pending", Reason="", readiness=false. Elapsed: 3.56335ms May 26 21:32:26.891: INFO: Pod "downwardapi-volume-6ded77fc-ca34-48c5-84ce-09580a6e347f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014752232s May 26 21:32:28.932: INFO: Pod "downwardapi-volume-6ded77fc-ca34-48c5-84ce-09580a6e347f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.056555707s STEP: Saw pod success May 26 21:32:28.933: INFO: Pod "downwardapi-volume-6ded77fc-ca34-48c5-84ce-09580a6e347f" satisfied condition "success or failure" May 26 21:32:28.936: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-6ded77fc-ca34-48c5-84ce-09580a6e347f container client-container: STEP: delete the pod May 26 21:32:28.974: INFO: Waiting for pod downwardapi-volume-6ded77fc-ca34-48c5-84ce-09580a6e347f to disappear May 26 21:32:28.987: INFO: Pod downwardapi-volume-6ded77fc-ca34-48c5-84ce-09580a6e347f no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:32:28.987: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9286" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":79,"skipped":1260,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:32:28.994: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 26 21:32:29.091: INFO: Waiting up to 5m0s for pod "downwardapi-volume-ce552415-b282-43f3-a2d3-9f48e819fbb4" in namespace "downward-api-5885" to be "success or failure" May 26 21:32:29.094: INFO: Pod "downwardapi-volume-ce552415-b282-43f3-a2d3-9f48e819fbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.521578ms May 26 21:32:31.099: INFO: Pod "downwardapi-volume-ce552415-b282-43f3-a2d3-9f48e819fbb4": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007222599s May 26 21:32:33.103: INFO: Pod "downwardapi-volume-ce552415-b282-43f3-a2d3-9f48e819fbb4": Phase="Running", Reason="", readiness=true. Elapsed: 4.011976185s May 26 21:32:35.108: INFO: Pod "downwardapi-volume-ce552415-b282-43f3-a2d3-9f48e819fbb4": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.016517756s STEP: Saw pod success May 26 21:32:35.108: INFO: Pod "downwardapi-volume-ce552415-b282-43f3-a2d3-9f48e819fbb4" satisfied condition "success or failure" May 26 21:32:35.112: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-ce552415-b282-43f3-a2d3-9f48e819fbb4 container client-container: STEP: delete the pod May 26 21:32:35.132: INFO: Waiting for pod downwardapi-volume-ce552415-b282-43f3-a2d3-9f48e819fbb4 to disappear May 26 21:32:35.136: INFO: Pod downwardapi-volume-ce552415-b282-43f3-a2d3-9f48e819fbb4 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:32:35.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-5885" for this suite. 
• [SLOW TEST:6.150 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (memory) as default memory limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":80,"skipped":1280,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:32:35.144: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 26 21:32:35.222: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 26 21:32:35.247: INFO: Waiting for terminating namespaces to be deleted... May 26 21:32:35.250: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 26 21:32:35.256: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 26 21:32:35.256: INFO: Container kube-proxy ready: true, restart count 0 May 26 21:32:35.256: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 26 21:32:35.256: INFO: Container kindnet-cni ready: true, restart count 2 May 26 21:32:35.256: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 26 21:32:35.263: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 26 21:32:35.263: INFO: Container kindnet-cni ready: true, restart count 2 May 26 21:32:35.263: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 26 21:32:35.263: INFO: Container kube-bench ready: false, restart count 0 May 26 21:32:35.263: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 26 21:32:35.263: INFO: Container kube-proxy ready: true, restart count 0 May 26 21:32:35.263: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 26 21:32:35.263: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. 
STEP: verifying the node has the label kubernetes.io/e2e-8072c6a0-f053-4d97-98c0-761247b13dee 90 STEP: Trying to create a pod(pod1) with hostport 54321 and hostIP 127.0.0.1 and expect scheduled STEP: Trying to create another pod(pod2) with hostport 54321 but hostIP 127.0.0.2 on the node which pod1 resides and expect scheduled STEP: Trying to create a third pod(pod3) with hostport 54321, hostIP 127.0.0.2 but use UDP protocol on the node which pod2 resides STEP: removing the label kubernetes.io/e2e-8072c6a0-f053-4d97-98c0-761247b13dee off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-8072c6a0-f053-4d97-98c0-761247b13dee [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:32:51.520: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-7446" for this suite. [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:16.384 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there is no conflict between pods with same hostPort but different hostIP and protocol [Conformance]","total":278,"completed":81,"skipped":1316,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:32:51.528: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename prestop STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pre_stop.go:172 [It] should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating server pod server in namespace prestop-6972 STEP: Waiting for pods to come up. STEP: Creating tester pod tester in namespace prestop-6972 STEP: Deleting pre-stop pod May 26 21:33:04.841: INFO: Saw: { "Hostname": "server", "Sent": null, "Received": { "prestop": 1 }, "Errors": null, "Log": [ "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up.", "default/nettest has 0 endpoints ([]), which is less than 8 as expected. Waiting for all endpoints to come up." 
], "StillContactingPeers": true } STEP: Deleting the server pod [AfterEach] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:33:04.846: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "prestop-6972" for this suite. • [SLOW TEST:13.352 seconds] [k8s.io] [sig-node] PreStop /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should call prestop when killing a pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] PreStop should call prestop when killing a pod [Conformance]","total":278,"completed":82,"skipped":1325,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:33:04.881: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 26 21:33:04.935: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 26 21:33:04.964: INFO: Waiting for terminating namespaces to be deleted... 
May 26 21:33:04.967: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 26 21:33:04.972: INFO: server from prestop-6972 started at 2020-05-26 21:32:51 +0000 UTC (1 container statuses recorded) May 26 21:33:04.972: INFO: Container server ready: true, restart count 0 May 26 21:33:04.972: INFO: tester from prestop-6972 started at 2020-05-26 21:32:55 +0000 UTC (1 container statuses recorded) May 26 21:33:04.972: INFO: Container tester ready: true, restart count 0 May 26 21:33:04.972: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 26 21:33:04.972: INFO: Container kindnet-cni ready: true, restart count 2 May 26 21:33:04.972: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 26 21:33:04.972: INFO: Container kube-proxy ready: true, restart count 0 May 26 21:33:04.972: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 26 21:33:04.976: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 26 21:33:04.976: INFO: Container kube-proxy ready: true, restart count 0 May 26 21:33:04.976: INFO: pod1 from sched-pred-7446 started at 2020-05-26 21:32:39 +0000 UTC (1 container statuses recorded) May 26 21:33:04.976: INFO: Container pod1 ready: false, restart count 0 May 26 21:33:04.976: INFO: pod3 from sched-pred-7446 started at 2020-05-26 21:32:47 +0000 UTC (1 container statuses recorded) May 26 21:33:04.976: INFO: Container pod3 ready: false, restart count 0 May 26 21:33:04.976: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container statuses recorded) May 26 21:33:04.976: INFO: Container kube-hunter ready: false, restart count 0 May 26 21:33:04.976: INFO: pod2 from sched-pred-7446 started at 2020-05-26 21:32:43 +0000 UTC (1 container statuses recorded) May 26 21:33:04.976: INFO: Container pod2 ready: false, restart count 0 May 26 21:33:04.976: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container statuses recorded) May 26 21:33:04.976: INFO: Container kindnet-cni ready: true, restart count 2 May 26 21:33:04.976: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container statuses recorded) May 26 21:33:04.976: INFO: Container kube-bench ready: false, restart count 0 [It] validates that NodeSelector is respected if not matching [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to schedule Pod with nonempty NodeSelector. STEP: Considering event: Type = [Warning], Name = [restricted-pod.1612b180c2819a9d], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] STEP: Considering event: Type = [Warning], Name = [restricted-pod.1612b180cd69f9ad], Reason = [FailedScheduling], Message = [0/3 nodes are available: 3 node(s) didn't match node selector.] [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:33:06.001: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-1786" for this suite. 
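The two FailedScheduling events above come from a pod whose nodeSelector matches no label on any node, so it stays Pending until the namespace is torn down. A minimal reproduction, assuming an arbitrary selector key and the pause image (illustrative, not the suite's exact spec):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: restricted-pod
    spec:
      nodeSelector:
        no-such-label: nonempty      # matches no node in the cluster
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
    EOF
    kubectl get events --field-selector reason=FailedScheduling

The second command surfaces the same "0/3 nodes are available: 3 node(s) didn't match node selector." message logged above.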
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 •{"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that NodeSelector is respected if not matching [Conformance]","total":278,"completed":83,"skipped":1346,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:33:06.009: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default command and arguments [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override all May 26 21:33:06.062: INFO: Waiting up to 5m0s for pod "client-containers-17e5d2cb-2ab8-44cd-808f-1e185022b9c5" in namespace "containers-8549" to be "success or failure" May 26 21:33:06.066: INFO: Pod "client-containers-17e5d2cb-2ab8-44cd-808f-1e185022b9c5": Phase="Pending", Reason="", readiness=false. Elapsed: 4.017989ms May 26 21:33:08.071: INFO: Pod "client-containers-17e5d2cb-2ab8-44cd-808f-1e185022b9c5": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008137555s May 26 21:33:10.115: INFO: Pod "client-containers-17e5d2cb-2ab8-44cd-808f-1e185022b9c5": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.052130887s STEP: Saw pod success May 26 21:33:10.115: INFO: Pod "client-containers-17e5d2cb-2ab8-44cd-808f-1e185022b9c5" satisfied condition "success or failure" May 26 21:33:10.120: INFO: Trying to get logs from node jerma-worker pod client-containers-17e5d2cb-2ab8-44cd-808f-1e185022b9c5 container test-container: STEP: delete the pod May 26 21:33:10.161: INFO: Waiting for pod client-containers-17e5d2cb-2ab8-44cd-808f-1e185022b9c5 to disappear May 26 21:33:10.181: INFO: Pod client-containers-17e5d2cb-2ab8-44cd-808f-1e185022b9c5 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:33:10.181: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-8549" for this suite. 
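The "override all" pod above sets both command (replacing the image's ENTRYPOINT) and args (replacing its CMD), and the suite then asserts on the container's output. A sketch with busybox standing in for the suite's test image:

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: command-override-demo    # hypothetical name
    spec:
      restartPolicy: Never
      containers:
      - name: test-container
        image: docker.io/library/busybox:1.29
        command: ["/bin/echo"]               # overrides the image ENTRYPOINT
        args: ["override", "arguments"]      # overrides the image CMD
    EOF
    kubectl logs command-override-demo       # prints: override arguments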
•{"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default command and arguments [NodeConformance] [Conformance]","total":278,"completed":84,"skipped":1360,"failed":0} SSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:33:10.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-1f8939b8-5a4f-49cc-9431-0cdfe92659d1 STEP: Creating a pod to test consume configMaps May 26 21:33:10.327: INFO: Waiting up to 5m0s for pod "pod-configmaps-08d6dfb3-2e04-4e3e-81be-f7f573efbfce" in namespace "configmap-9375" to be "success or failure" May 26 21:33:10.372: INFO: Pod "pod-configmaps-08d6dfb3-2e04-4e3e-81be-f7f573efbfce": Phase="Pending", Reason="", readiness=false. Elapsed: 45.037264ms May 26 21:33:12.466: INFO: Pod "pod-configmaps-08d6dfb3-2e04-4e3e-81be-f7f573efbfce": Phase="Pending", Reason="", readiness=false. Elapsed: 2.13918094s May 26 21:33:14.470: INFO: Pod "pod-configmaps-08d6dfb3-2e04-4e3e-81be-f7f573efbfce": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.143165661s STEP: Saw pod success May 26 21:33:14.471: INFO: Pod "pod-configmaps-08d6dfb3-2e04-4e3e-81be-f7f573efbfce" satisfied condition "success or failure" May 26 21:33:14.504: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-08d6dfb3-2e04-4e3e-81be-f7f573efbfce container configmap-volume-test: STEP: delete the pod May 26 21:33:14.564: INFO: Waiting for pod pod-configmaps-08d6dfb3-2e04-4e3e-81be-f7f573efbfce to disappear May 26 21:33:14.628: INFO: Pod pod-configmaps-08d6dfb3-2e04-4e3e-81be-f7f573efbfce no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:33:14.628: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-9375" for this suite. 
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":85,"skipped":1366,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:33:14.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-upd-90b3059c-074f-4119-b51b-3df0095eab94 STEP: Creating the pod STEP: Waiting for pod with text data STEP: Waiting for pod with binary data [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:33:21.042: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-7738" for this suite. • [SLOW TEST:6.412 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 binary data should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap binary data should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":86,"skipped":1392,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:33:21.048: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:33:21.165: INFO: Waiting up to 5m0s for pod "busybox-privileged-false-93be166f-ecce-4586-bd4a-6c91fbaa7bec" in namespace "security-context-test-5304" to be "success or failure" May 26 21:33:21.168: INFO: Pod "busybox-privileged-false-93be166f-ecce-4586-bd4a-6c91fbaa7bec": Phase="Pending", Reason="", readiness=false. Elapsed: 3.322552ms May 26 21:33:24.119: INFO: Pod "busybox-privileged-false-93be166f-ecce-4586-bd4a-6c91fbaa7bec": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.954598403s May 26 21:33:26.154: INFO: Pod "busybox-privileged-false-93be166f-ecce-4586-bd4a-6c91fbaa7bec": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.989639585s May 26 21:33:26.155: INFO: Pod "busybox-privileged-false-93be166f-ecce-4586-bd4a-6c91fbaa7bec" satisfied condition "success or failure" May 26 21:33:26.163: INFO: Got logs for pod "busybox-privileged-false-93be166f-ecce-4586-bd4a-6c91fbaa7bec": "ip: RTNETLINK answers: Operation not permitted\n" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:33:26.163: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-5304" for this suite. • [SLOW TEST:5.120 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 When creating a pod with privileged /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:225 should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context When creating a pod with privileged should run the container as unprivileged when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":87,"skipped":1433,"failed":0} [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:33:26.168: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-8h76 STEP: Creating a pod to test atomic-volume-subpath May 26 21:33:26.458: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-8h76" in namespace "subpath-6133" to be "success or failure" May 26 21:33:26.689: INFO: Pod "pod-subpath-test-configmap-8h76": Phase="Pending", Reason="", readiness=false. Elapsed: 231.406213ms May 26 21:33:28.694: INFO: Pod "pod-subpath-test-configmap-8h76": Phase="Pending", Reason="", readiness=false. Elapsed: 2.236164559s May 26 21:33:30.699: INFO: Pod "pod-subpath-test-configmap-8h76": Phase="Running", Reason="", readiness=true. Elapsed: 4.241303862s May 26 21:33:32.704: INFO: Pod "pod-subpath-test-configmap-8h76": Phase="Running", Reason="", readiness=true. Elapsed: 6.245872976s May 26 21:33:34.708: INFO: Pod "pod-subpath-test-configmap-8h76": Phase="Running", Reason="", readiness=true. Elapsed: 8.250231473s May 26 21:33:36.713: INFO: Pod "pod-subpath-test-configmap-8h76": Phase="Running", Reason="", readiness=true. 
Elapsed: 10.2548488s May 26 21:33:38.717: INFO: Pod "pod-subpath-test-configmap-8h76": Phase="Running", Reason="", readiness=true. Elapsed: 12.259338288s May 26 21:33:40.722: INFO: Pod "pod-subpath-test-configmap-8h76": Phase="Running", Reason="", readiness=true. Elapsed: 14.26382624s May 26 21:33:42.726: INFO: Pod "pod-subpath-test-configmap-8h76": Phase="Running", Reason="", readiness=true. Elapsed: 16.268271628s May 26 21:33:44.731: INFO: Pod "pod-subpath-test-configmap-8h76": Phase="Running", Reason="", readiness=true. Elapsed: 18.27282078s May 26 21:33:46.734: INFO: Pod "pod-subpath-test-configmap-8h76": Phase="Running", Reason="", readiness=true. Elapsed: 20.276636792s May 26 21:33:48.739: INFO: Pod "pod-subpath-test-configmap-8h76": Phase="Running", Reason="", readiness=true. Elapsed: 22.281096417s May 26 21:33:50.796: INFO: Pod "pod-subpath-test-configmap-8h76": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.33834851s STEP: Saw pod success May 26 21:33:50.796: INFO: Pod "pod-subpath-test-configmap-8h76" satisfied condition "success or failure" May 26 21:33:50.799: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-configmap-8h76 container test-container-subpath-configmap-8h76: STEP: delete the pod May 26 21:33:50.872: INFO: Waiting for pod pod-subpath-test-configmap-8h76 to disappear May 26 21:33:50.921: INFO: Pod pod-subpath-test-configmap-8h76 no longer exists STEP: Deleting pod pod-subpath-test-configmap-8h76 May 26 21:33:50.921: INFO: Deleting pod "pod-subpath-test-configmap-8h76" in namespace "subpath-6133" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:33:50.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-6133" for this suite. 
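"mountPath of existing file" means the configMap key is mounted with subPath directly over a path that already exists as a regular file in the image, instead of shadowing a whole directory. A sketch of that shape, using /etc/passwd purely as an illustrative pre-existing target (the suite uses its own file layout):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: subpath-demo
    data:
      this.txt: mounted-over-an-existing-file
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: subpath-existing-file-demo
    spec:
      restartPolicy: Never
      containers:
      - name: test-container-subpath
        image: docker.io/library/busybox:1.29
        command: ["cat", "/etc/passwd"]   # now backed by the configMap key
        volumeMounts:
        - name: cm
          mountPath: /etc/passwd          # existing file in the image
          subPath: this.txt               # mount one key, not the directory
      volumes:
      - name: cm
        configMap:
          name: subpath-demo
    EOF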
• [SLOW TEST:24.762 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod with mountPath of existing file [LinuxOnly] [Conformance]","total":278,"completed":88,"skipped":1433,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:33:50.931: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RecreateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:33:50.980: INFO: Creating deployment "test-recreate-deployment" May 26 21:33:50.996: INFO: Waiting for deployment "test-recreate-deployment" to be updated to revision 1 May 26 21:33:51.072: INFO: deployment "test-recreate-deployment" doesn't have the required revision set May 26 21:33:53.081: INFO: Waiting for deployment "test-recreate-deployment" to complete May 26 21:33:53.084: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125631, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125631, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125631, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125631, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-recreate-deployment-799c574856\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 21:33:55.088: INFO: Triggering a new rollout for deployment "test-recreate-deployment" May 26 21:33:55.095: INFO: Updating deployment test-recreate-deployment May 26 21:33:55.095: INFO: Watching deployment "test-recreate-deployment" to verify that new pods will not run with old pods [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 26 21:33:55.719: INFO: Deployment "test-recreate-deployment": &Deployment{ObjectMeta:{test-recreate-deployment deployment-5347 
/apis/apps/v1/namespaces/deployment-5347/deployments/test-recreate-deployment ccd56cd1-4163-459d-b432-84edb1426df5 19379706 2 2020-05-26 21:33:50 +0000 UTC map[name:sample-pod-3] map[deployment.kubernetes.io/revision:2] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002796f08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:Recreate,RollingUpdate:nil,},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:0,UnavailableReplicas:1,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:False,Reason:MinimumReplicasUnavailable,Message:Deployment does not have minimum availability.,LastUpdateTime:2020-05-26 21:33:55 +0000 UTC,LastTransitionTime:2020-05-26 21:33:55 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:ReplicaSetUpdated,Message:ReplicaSet "test-recreate-deployment-5f94c574ff" is progressing.,LastUpdateTime:2020-05-26 21:33:55 +0000 UTC,LastTransitionTime:2020-05-26 21:33:51 +0000 UTC,},},ReadyReplicas:0,CollisionCount:nil,},} May 26 21:33:55.734: INFO: New ReplicaSet "test-recreate-deployment-5f94c574ff" of Deployment "test-recreate-deployment": &ReplicaSet{ObjectMeta:{test-recreate-deployment-5f94c574ff deployment-5347 /apis/apps/v1/namespaces/deployment-5347/replicasets/test-recreate-deployment-5f94c574ff 4a7512c4-84a7-4e89-8cf5-1e46f064f374 19379704 1 2020-05-26 21:33:55 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-recreate-deployment ccd56cd1-4163-459d-b432-84edb1426df5 0xc002797ce7 0xc002797ce8}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 5f94c574ff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc002797e08 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] 
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 26 21:33:55.734: INFO: All old ReplicaSets of Deployment "test-recreate-deployment": May 26 21:33:55.734: INFO: &ReplicaSet{ObjectMeta:{test-recreate-deployment-799c574856 deployment-5347 /apis/apps/v1/namespaces/deployment-5347/replicasets/test-recreate-deployment-799c574856 71d5edbf-84b5-44f1-958b-fe2427307401 19379694 2 2020-05-26 21:33:50 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:1 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-recreate-deployment ccd56cd1-4163-459d-b432-84edb1426df5 0xc002797ed7 0xc002797ed8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod-3,pod-template-hash: 799c574856,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod-3 pod-template-hash:799c574856] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003174028 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 26 21:33:55.739: INFO: Pod "test-recreate-deployment-5f94c574ff-x4xht" is not available: &Pod{ObjectMeta:{test-recreate-deployment-5f94c574ff-x4xht test-recreate-deployment-5f94c574ff- deployment-5347 /api/v1/namespaces/deployment-5347/pods/test-recreate-deployment-5f94c574ff-x4xht 2de16ce3-ef40-4409-8fcd-8028894c938c 19379707 0 2020-05-26 21:33:55 +0000 UTC map[name:sample-pod-3 pod-template-hash:5f94c574ff] map[] [{apps/v1 ReplicaSet test-recreate-deployment-5f94c574ff 4a7512c4-84a7-4e89-8cf5-1e46f064f374 0xc003174477 0xc003174478}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-f9c5h,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-f9c5h,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-f9c5h,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:33:55 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:33:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:ContainersReady,Status:False,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:33:55 +0000 UTC,Reason:ContainersNotReady,Message:containers with unready status: [httpd],},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:33:55 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:,StartTime:2020-05-26 21:33:55 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:&ContainerStateWaiting{Reason:ContainerCreating,Message:,},Running:nil,Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:false,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:,ContainerID:,Started:*false,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:33:55.739: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-5347" for this suite. •{"msg":"PASSED [sig-apps] Deployment RecreateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":89,"skipped":1465,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:33:55.749: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with a certain label STEP: creating a new configmap STEP: modifying the configmap once STEP: changing the label value of the configmap STEP: Expecting to observe a delete notification for the watched object May 26 21:33:56.115: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2593 /api/v1/namespaces/watch-2593/configmaps/e2e-watch-test-label-changed b052e7b1-23b9-47c3-9455-47db5313e8b7 19379717 0 2020-05-26 21:33:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 26 21:33:56.115: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2593 /api/v1/namespaces/watch-2593/configmaps/e2e-watch-test-label-changed b052e7b1-23b9-47c3-9455-47db5313e8b7 19379718 0 2020-05-26 21:33:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 26 21:33:56.115: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2593 /api/v1/namespaces/watch-2593/configmaps/e2e-watch-test-label-changed b052e7b1-23b9-47c3-9455-47db5313e8b7 19379720 0 2020-05-26 21:33:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying the configmap a second time STEP: Expecting not to observe a notification because the object no longer meets the selector's requirements STEP: changing the label value of the configmap back STEP: modifying the configmap a third time STEP: deleting the configmap STEP: Expecting to observe an add 
notification for the watched object when the label value was restored May 26 21:34:06.148: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2593 /api/v1/namespaces/watch-2593/configmaps/e2e-watch-test-label-changed b052e7b1-23b9-47c3-9455-47db5313e8b7 19379786 0 2020-05-26 21:33:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 26 21:34:06.148: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2593 /api/v1/namespaces/watch-2593/configmaps/e2e-watch-test-label-changed b052e7b1-23b9-47c3-9455-47db5313e8b7 19379787 0 2020-05-26 21:33:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} May 26 21:34:06.148: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-label-changed watch-2593 /api/v1/namespaces/watch-2593/configmaps/e2e-watch-test-label-changed b052e7b1-23b9-47c3-9455-47db5313e8b7 19379788 0 2020-05-26 21:33:56 +0000 UTC map[watch-this-configmap:label-changed-and-restored] map[] [] [] []},Data:map[string]string{mutation: 3,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:34:06.148: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2593" for this suite. • [SLOW TEST:10.422 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe an object deletion if it stops meeting the requirements of the selector [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe an object deletion if it stops meeting the requirements of the selector [Conformance]","total":278,"completed":90,"skipped":1505,"failed":0} S ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:34:06.172: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 26 21:34:06.238: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:34:12.559: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-8189" for this suite. 
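With restartPolicy: Never, a failing init container is not retried: the pod goes Failed and the app containers are never started, which is what the spec above observes within a few seconds. A minimal reproduction (names and images illustrative):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: init-fail-demo
    spec:
      restartPolicy: Never
      initContainers:
      - name: init1
        image: docker.io/library/busybox:1.29
        command: ["/bin/false"]        # exits non-zero
      containers:
      - name: run1
        image: k8s.gcr.io/pause:3.1    # never reached
    EOF
    kubectl get pod init-fail-demo     # STATUS settles at Init:Error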
• [SLOW TEST:6.430 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers and fail the pod if init containers fail on a RestartNever pod [Conformance]","total":278,"completed":91,"skipped":1506,"failed":0} SSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:34:12.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 26 21:34:20.828: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 21:34:20.835: INFO: Pod pod-with-poststart-exec-hook still exists May 26 21:34:22.835: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 21:34:22.840: INFO: Pod pod-with-poststart-exec-hook still exists May 26 21:34:24.835: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 21:34:24.840: INFO: Pod pod-with-poststart-exec-hook still exists May 26 21:34:26.835: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 21:34:26.839: INFO: Pod pod-with-poststart-exec-hook still exists May 26 21:34:28.835: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 21:34:28.840: INFO: Pod pod-with-poststart-exec-hook still exists May 26 21:34:30.835: INFO: Waiting for pod pod-with-poststart-exec-hook to disappear May 26 21:34:30.840: INFO: Pod pod-with-poststart-exec-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:34:30.840: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-6980" for this suite. 
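The postStart exec hook above runs inside the container right after it starts; the suite proves the hook fired by having it call a helper pod. A self-contained variant that just drops a marker file instead (hook command and names are illustrative, not the suite's exact spec):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Pod
    metadata:
      name: poststart-exec-demo
    spec:
      containers:
      - name: pod-with-poststart-exec-hook
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "sleep 600"]
        lifecycle:
          postStart:
            exec:
              command: ["sh", "-c", "echo fired > /tmp/poststart"]
    EOF
    kubectl exec poststart-exec-demo -- cat /tmp/poststart   # prints: fired

The hook runs concurrently with the entrypoint, but the container is not reported Running until the handler completes.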
• [SLOW TEST:18.246 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart exec hook properly [NodeConformance] [Conformance]","total":278,"completed":92,"skipped":1510,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:34:30.848: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating secret secrets-5532/secret-test-e86a1a7b-17bf-4d9b-8ddc-5bb70d9d57db STEP: Creating a pod to test consume secrets May 26 21:34:30.976: INFO: Waiting up to 5m0s for pod "pod-configmaps-db747878-09ef-485e-8df6-dc3221a20c66" in namespace "secrets-5532" to be "success or failure" May 26 21:34:30.979: INFO: Pod "pod-configmaps-db747878-09ef-485e-8df6-dc3221a20c66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.578287ms May 26 21:34:32.982: INFO: Pod "pod-configmaps-db747878-09ef-485e-8df6-dc3221a20c66": Phase="Pending", Reason="", readiness=false. Elapsed: 2.0061353s May 26 21:34:34.986: INFO: Pod "pod-configmaps-db747878-09ef-485e-8df6-dc3221a20c66": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.009948409s STEP: Saw pod success May 26 21:34:34.986: INFO: Pod "pod-configmaps-db747878-09ef-485e-8df6-dc3221a20c66" satisfied condition "success or failure" May 26 21:34:34.989: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-db747878-09ef-485e-8df6-dc3221a20c66 container env-test: STEP: delete the pod May 26 21:34:35.119: INFO: Waiting for pod pod-configmaps-db747878-09ef-485e-8df6-dc3221a20c66 to disappear May 26 21:34:35.159: INFO: Pod pod-configmaps-db747878-09ef-485e-8df6-dc3221a20c66 no longer exists [AfterEach] [sig-api-machinery] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:34:35.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-5532" for this suite. 
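The Secrets spec above injects a secret key into the test container's environment and greps the dumped env for the expected value. A sketch using an explicit env entry backed by secretKeyRef (names and key illustrative; stringData lets the value be written in clear text):

    cat <<'EOF' | kubectl apply -f -
    apiVersion: v1
    kind: Secret
    metadata:
      name: secret-env-demo
    type: Opaque
    stringData:
      data-1: value-1
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: pod-secret-env-demo
    spec:
      restartPolicy: Never
      containers:
      - name: env-test
        image: docker.io/library/busybox:1.29
        command: ["sh", "-c", "env | grep SECRET_DATA"]
        env:
        - name: SECRET_DATA
          valueFrom:
            secretKeyRef:
              name: secret-env-demo
              key: data-1
    EOF
    kubectl logs pod-secret-env-demo   # prints: SECRET_DATA=value-1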
•{"msg":"PASSED [sig-api-machinery] Secrets should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":93,"skipped":1539,"failed":0} SSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:34:35.165: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-6846 [It] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating stateful set ss in namespace statefulset-6846 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-6846 May 26 21:34:35.243: INFO: Found 0 stateful pods, waiting for 1 May 26 21:34:45.246: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will not halt with unhealthy stateful pod May 26 21:34:45.248: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6846 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 26 21:34:48.334: INFO: stderr: "I0526 21:34:48.207480 937 log.go:172] (0xc000cc6bb0) (0xc000722000) Create stream\nI0526 21:34:48.207511 937 log.go:172] (0xc000cc6bb0) (0xc000722000) Stream added, broadcasting: 1\nI0526 21:34:48.209520 937 log.go:172] (0xc000cc6bb0) Reply frame received for 1\nI0526 21:34:48.209565 937 log.go:172] (0xc000cc6bb0) (0xc0007e6000) Create stream\nI0526 21:34:48.209582 937 log.go:172] (0xc000cc6bb0) (0xc0007e6000) Stream added, broadcasting: 3\nI0526 21:34:48.210591 937 log.go:172] (0xc000cc6bb0) Reply frame received for 3\nI0526 21:34:48.210644 937 log.go:172] (0xc000cc6bb0) (0xc000821c20) Create stream\nI0526 21:34:48.210671 937 log.go:172] (0xc000cc6bb0) (0xc000821c20) Stream added, broadcasting: 5\nI0526 21:34:48.211554 937 log.go:172] (0xc000cc6bb0) Reply frame received for 5\nI0526 21:34:48.283169 937 log.go:172] (0xc000cc6bb0) Data frame received for 5\nI0526 21:34:48.283193 937 log.go:172] (0xc000821c20) (5) Data frame handling\nI0526 21:34:48.283208 937 log.go:172] (0xc000821c20) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0526 21:34:48.321999 937 log.go:172] (0xc000cc6bb0) Data frame received for 3\nI0526 21:34:48.322016 937 log.go:172] (0xc0007e6000) (3) Data frame handling\nI0526 21:34:48.322029 937 log.go:172] (0xc0007e6000) (3) Data frame sent\nI0526 21:34:48.322280 937 log.go:172] (0xc000cc6bb0) Data frame received for 5\nI0526 21:34:48.322326 937 log.go:172] (0xc000821c20) (5) Data frame handling\nI0526 21:34:48.322354 937 log.go:172] 
(0xc000cc6bb0) Data frame received for 3\nI0526 21:34:48.322368 937 log.go:172] (0xc0007e6000) (3) Data frame handling\nI0526 21:34:48.324485 937 log.go:172] (0xc000cc6bb0) Data frame received for 1\nI0526 21:34:48.324505 937 log.go:172] (0xc000722000) (1) Data frame handling\nI0526 21:34:48.324517 937 log.go:172] (0xc000722000) (1) Data frame sent\nI0526 21:34:48.324539 937 log.go:172] (0xc000cc6bb0) (0xc000722000) Stream removed, broadcasting: 1\nI0526 21:34:48.324652 937 log.go:172] (0xc000cc6bb0) Go away received\nI0526 21:34:48.324843 937 log.go:172] (0xc000cc6bb0) (0xc000722000) Stream removed, broadcasting: 1\nI0526 21:34:48.324858 937 log.go:172] (0xc000cc6bb0) (0xc0007e6000) Stream removed, broadcasting: 3\nI0526 21:34:48.324868 937 log.go:172] (0xc000cc6bb0) (0xc000821c20) Stream removed, broadcasting: 5\n" May 26 21:34:48.334: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 26 21:34:48.334: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 26 21:34:48.338: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 26 21:34:58.343: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 26 21:34:58.343: INFO: Waiting for statefulset status.replicas updated to 0 May 26 21:34:58.379: INFO: POD NODE PHASE GRACE CONDITIONS May 26 21:34:58.379: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:48 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC }] May 26 21:34:58.379: INFO: May 26 21:34:58.379: INFO: StatefulSet ss has not reached scale 3, at 1 May 26 21:34:59.384: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.974016917s May 26 21:35:00.705: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.968950568s May 26 21:35:01.739: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.647470239s May 26 21:35:02.804: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.614314923s May 26 21:35:03.809: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.549080166s May 26 21:35:04.814: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.543579072s May 26 21:35:05.820: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.538784706s May 26 21:35:06.827: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.533137633s May 26 21:35:07.833: INFO: Verifying statefulset ss doesn't scale past 3 for another 525.67715ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-6846 May 26 21:35:08.838: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6846 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 26 21:35:09.062: INFO: stderr: "I0526 21:35:08.962680 965 log.go:172] (0xc000a04840) (0xc000aa2000) Create stream\nI0526 21:35:08.962745 965 log.go:172] (0xc000a04840) (0xc000aa2000) Stream added, broadcasting: 1\nI0526 21:35:08.965873 965 log.go:172] (0xc000a04840) Reply frame received 
for 1\nI0526 21:35:08.965917 965 log.go:172] (0xc000a04840) (0xc00061ba40) Create stream\nI0526 21:35:08.965931 965 log.go:172] (0xc000a04840) (0xc00061ba40) Stream added, broadcasting: 3\nI0526 21:35:08.967158 965 log.go:172] (0xc000a04840) Reply frame received for 3\nI0526 21:35:08.967208 965 log.go:172] (0xc000a04840) (0xc000aa20a0) Create stream\nI0526 21:35:08.967227 965 log.go:172] (0xc000a04840) (0xc000aa20a0) Stream added, broadcasting: 5\nI0526 21:35:08.968076 965 log.go:172] (0xc000a04840) Reply frame received for 5\nI0526 21:35:09.053885 965 log.go:172] (0xc000a04840) Data frame received for 5\nI0526 21:35:09.053947 965 log.go:172] (0xc000aa20a0) (5) Data frame handling\nI0526 21:35:09.053968 965 log.go:172] (0xc000aa20a0) (5) Data frame sent\nI0526 21:35:09.053980 965 log.go:172] (0xc000a04840) Data frame received for 5\nI0526 21:35:09.053991 965 log.go:172] (0xc000aa20a0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0526 21:35:09.054036 965 log.go:172] (0xc000a04840) Data frame received for 3\nI0526 21:35:09.054063 965 log.go:172] (0xc00061ba40) (3) Data frame handling\nI0526 21:35:09.054086 965 log.go:172] (0xc00061ba40) (3) Data frame sent\nI0526 21:35:09.054100 965 log.go:172] (0xc000a04840) Data frame received for 3\nI0526 21:35:09.054111 965 log.go:172] (0xc00061ba40) (3) Data frame handling\nI0526 21:35:09.055356 965 log.go:172] (0xc000a04840) Data frame received for 1\nI0526 21:35:09.055387 965 log.go:172] (0xc000aa2000) (1) Data frame handling\nI0526 21:35:09.055406 965 log.go:172] (0xc000aa2000) (1) Data frame sent\nI0526 21:35:09.055419 965 log.go:172] (0xc000a04840) (0xc000aa2000) Stream removed, broadcasting: 1\nI0526 21:35:09.055436 965 log.go:172] (0xc000a04840) Go away received\nI0526 21:35:09.055919 965 log.go:172] (0xc000a04840) (0xc000aa2000) Stream removed, broadcasting: 1\nI0526 21:35:09.055948 965 log.go:172] (0xc000a04840) (0xc00061ba40) Stream removed, broadcasting: 3\nI0526 21:35:09.055962 965 log.go:172] (0xc000a04840) (0xc000aa20a0) Stream removed, broadcasting: 5\n" May 26 21:35:09.062: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 26 21:35:09.062: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 26 21:35:09.062: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6846 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 26 21:35:09.264: INFO: stderr: "I0526 21:35:09.188620 987 log.go:172] (0xc000105290) (0xc000655a40) Create stream\nI0526 21:35:09.188678 987 log.go:172] (0xc000105290) (0xc000655a40) Stream added, broadcasting: 1\nI0526 21:35:09.190634 987 log.go:172] (0xc000105290) Reply frame received for 1\nI0526 21:35:09.190657 987 log.go:172] (0xc000105290) (0xc000a20000) Create stream\nI0526 21:35:09.190664 987 log.go:172] (0xc000105290) (0xc000a20000) Stream added, broadcasting: 3\nI0526 21:35:09.191325 987 log.go:172] (0xc000105290) Reply frame received for 3\nI0526 21:35:09.191368 987 log.go:172] (0xc000105290) (0xc000655c20) Create stream\nI0526 21:35:09.191382 987 log.go:172] (0xc000105290) (0xc000655c20) Stream added, broadcasting: 5\nI0526 21:35:09.192044 987 log.go:172] (0xc000105290) Reply frame received for 5\nI0526 21:35:09.239160 987 log.go:172] (0xc000105290) Data frame received for 5\nI0526 21:35:09.239184 987 log.go:172] (0xc000655c20) (5) Data frame handling\nI0526 21:35:09.239200 987 
log.go:172] (0xc000655c20) (5) Data frame sent\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0526 21:35:09.257959 987 log.go:172] (0xc000105290) Data frame received for 3\nI0526 21:35:09.257997 987 log.go:172] (0xc000a20000) (3) Data frame handling\nI0526 21:35:09.258021 987 log.go:172] (0xc000105290) Data frame received for 5\nI0526 21:35:09.258042 987 log.go:172] (0xc000655c20) (5) Data frame handling\nI0526 21:35:09.258054 987 log.go:172] (0xc000655c20) (5) Data frame sent\nI0526 21:35:09.258065 987 log.go:172] (0xc000105290) Data frame received for 5\nmv: can't rename '/tmp/index.html': No such file or directory\nI0526 21:35:09.258074 987 log.go:172] (0xc000655c20) (5) Data frame handling\n+ true\nI0526 21:35:09.258088 987 log.go:172] (0xc000a20000) (3) Data frame sent\nI0526 21:35:09.258098 987 log.go:172] (0xc000105290) Data frame received for 3\nI0526 21:35:09.258113 987 log.go:172] (0xc000a20000) (3) Data frame handling\nI0526 21:35:09.258129 987 log.go:172] (0xc000655c20) (5) Data frame sent\nI0526 21:35:09.258138 987 log.go:172] (0xc000105290) Data frame received for 5\nI0526 21:35:09.258143 987 log.go:172] (0xc000655c20) (5) Data frame handling\nI0526 21:35:09.259702 987 log.go:172] (0xc000105290) Data frame received for 1\nI0526 21:35:09.259713 987 log.go:172] (0xc000655a40) (1) Data frame handling\nI0526 21:35:09.259719 987 log.go:172] (0xc000655a40) (1) Data frame sent\nI0526 21:35:09.259735 987 log.go:172] (0xc000105290) (0xc000655a40) Stream removed, broadcasting: 1\nI0526 21:35:09.259808 987 log.go:172] (0xc000105290) Go away received\nI0526 21:35:09.260024 987 log.go:172] (0xc000105290) (0xc000655a40) Stream removed, broadcasting: 1\nI0526 21:35:09.260039 987 log.go:172] (0xc000105290) (0xc000a20000) Stream removed, broadcasting: 3\nI0526 21:35:09.260054 987 log.go:172] (0xc000105290) (0xc000655c20) Stream removed, broadcasting: 5\n" May 26 21:35:09.264: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 26 21:35:09.264: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 26 21:35:09.264: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6846 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 26 21:35:09.474: INFO: stderr: "I0526 21:35:09.385768 1008 log.go:172] (0xc00098aa50) (0xc0007581e0) Create stream\nI0526 21:35:09.385826 1008 log.go:172] (0xc00098aa50) (0xc0007581e0) Stream added, broadcasting: 1\nI0526 21:35:09.388319 1008 log.go:172] (0xc00098aa50) Reply frame received for 1\nI0526 21:35:09.388353 1008 log.go:172] (0xc00098aa50) (0xc000659b80) Create stream\nI0526 21:35:09.388363 1008 log.go:172] (0xc00098aa50) (0xc000659b80) Stream added, broadcasting: 3\nI0526 21:35:09.389683 1008 log.go:172] (0xc00098aa50) Reply frame received for 3\nI0526 21:35:09.389743 1008 log.go:172] (0xc00098aa50) (0xc000ae8000) Create stream\nI0526 21:35:09.389766 1008 log.go:172] (0xc00098aa50) (0xc000ae8000) Stream added, broadcasting: 5\nI0526 21:35:09.390844 1008 log.go:172] (0xc00098aa50) Reply frame received for 5\nI0526 21:35:09.466755 1008 log.go:172] (0xc00098aa50) Data frame received for 3\nI0526 21:35:09.466898 1008 log.go:172] (0xc000659b80) (3) Data frame handling\nI0526 21:35:09.467038 1008 log.go:172] (0xc00098aa50) Data frame received for 5\nI0526 21:35:09.467079 1008 log.go:172] (0xc000ae8000) (5) Data frame handling\nI0526 21:35:09.467090 1008 log.go:172] 
(0xc000ae8000) (5) Data frame sent\nI0526 21:35:09.467099 1008 log.go:172] (0xc00098aa50) Data frame received for 5\nI0526 21:35:09.467105 1008 log.go:172] (0xc000ae8000) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nmv: can't rename '/tmp/index.html': No such file or directory\n+ true\nI0526 21:35:09.467129 1008 log.go:172] (0xc000659b80) (3) Data frame sent\nI0526 21:35:09.467144 1008 log.go:172] (0xc00098aa50) Data frame received for 3\nI0526 21:35:09.467151 1008 log.go:172] (0xc000659b80) (3) Data frame handling\nI0526 21:35:09.468840 1008 log.go:172] (0xc00098aa50) Data frame received for 1\nI0526 21:35:09.468872 1008 log.go:172] (0xc0007581e0) (1) Data frame handling\nI0526 21:35:09.468892 1008 log.go:172] (0xc0007581e0) (1) Data frame sent\nI0526 21:35:09.468909 1008 log.go:172] (0xc00098aa50) (0xc0007581e0) Stream removed, broadcasting: 1\nI0526 21:35:09.468930 1008 log.go:172] (0xc00098aa50) Go away received\nI0526 21:35:09.469623 1008 log.go:172] (0xc00098aa50) (0xc0007581e0) Stream removed, broadcasting: 1\nI0526 21:35:09.469663 1008 log.go:172] (0xc00098aa50) (0xc000659b80) Stream removed, broadcasting: 3\nI0526 21:35:09.469689 1008 log.go:172] (0xc00098aa50) (0xc000ae8000) Stream removed, broadcasting: 5\n" May 26 21:35:09.474: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 26 21:35:09.474: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 26 21:35:09.479: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=false May 26 21:35:19.484: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 26 21:35:19.484: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 26 21:35:19.484: INFO: Waiting for pod ss-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Scale down will not halt with unhealthy stateful pod May 26 21:35:19.488: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6846 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 26 21:35:19.729: INFO: stderr: "I0526 21:35:19.632116 1031 log.go:172] (0xc000a6c000) (0xc00071bae0) Create stream\nI0526 21:35:19.632196 1031 log.go:172] (0xc000a6c000) (0xc00071bae0) Stream added, broadcasting: 1\nI0526 21:35:19.634812 1031 log.go:172] (0xc000a6c000) Reply frame received for 1\nI0526 21:35:19.634864 1031 log.go:172] (0xc000a6c000) (0xc00071bcc0) Create stream\nI0526 21:35:19.634877 1031 log.go:172] (0xc000a6c000) (0xc00071bcc0) Stream added, broadcasting: 3\nI0526 21:35:19.636032 1031 log.go:172] (0xc000a6c000) Reply frame received for 3\nI0526 21:35:19.636080 1031 log.go:172] (0xc000a6c000) (0xc0009ea000) Create stream\nI0526 21:35:19.636101 1031 log.go:172] (0xc000a6c000) (0xc0009ea000) Stream added, broadcasting: 5\nI0526 21:35:19.637396 1031 log.go:172] (0xc000a6c000) Reply frame received for 5\nI0526 21:35:19.724461 1031 log.go:172] (0xc000a6c000) Data frame received for 5\nI0526 21:35:19.724491 1031 log.go:172] (0xc0009ea000) (5) Data frame handling\nI0526 21:35:19.724505 1031 log.go:172] (0xc0009ea000) (5) Data frame sent\nI0526 21:35:19.724515 1031 log.go:172] (0xc000a6c000) Data frame received for 5\nI0526 21:35:19.724521 1031 log.go:172] (0xc0009ea000) (5) Data frame handling\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0526 21:35:19.724542 1031 
log.go:172] (0xc000a6c000) Data frame received for 3\nI0526 21:35:19.724552 1031 log.go:172] (0xc00071bcc0) (3) Data frame handling\nI0526 21:35:19.724566 1031 log.go:172] (0xc00071bcc0) (3) Data frame sent\nI0526 21:35:19.724576 1031 log.go:172] (0xc000a6c000) Data frame received for 3\nI0526 21:35:19.724583 1031 log.go:172] (0xc00071bcc0) (3) Data frame handling\nI0526 21:35:19.725950 1031 log.go:172] (0xc000a6c000) Data frame received for 1\nI0526 21:35:19.725968 1031 log.go:172] (0xc00071bae0) (1) Data frame handling\nI0526 21:35:19.725982 1031 log.go:172] (0xc00071bae0) (1) Data frame sent\nI0526 21:35:19.725993 1031 log.go:172] (0xc000a6c000) (0xc00071bae0) Stream removed, broadcasting: 1\nI0526 21:35:19.726001 1031 log.go:172] (0xc000a6c000) Go away received\nI0526 21:35:19.726275 1031 log.go:172] (0xc000a6c000) (0xc00071bae0) Stream removed, broadcasting: 1\nI0526 21:35:19.726299 1031 log.go:172] (0xc000a6c000) (0xc00071bcc0) Stream removed, broadcasting: 3\nI0526 21:35:19.726307 1031 log.go:172] (0xc000a6c000) (0xc0009ea000) Stream removed, broadcasting: 5\n" May 26 21:35:19.729: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 26 21:35:19.729: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 26 21:35:19.729: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6846 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 26 21:35:19.966: INFO: stderr: "I0526 21:35:19.859435 1053 log.go:172] (0xc000658dc0) (0xc0007ba140) Create stream\nI0526 21:35:19.859488 1053 log.go:172] (0xc000658dc0) (0xc0007ba140) Stream added, broadcasting: 1\nI0526 21:35:19.862132 1053 log.go:172] (0xc000658dc0) Reply frame received for 1\nI0526 21:35:19.862170 1053 log.go:172] (0xc000658dc0) (0xc000980000) Create stream\nI0526 21:35:19.862181 1053 log.go:172] (0xc000658dc0) (0xc000980000) Stream added, broadcasting: 3\nI0526 21:35:19.863220 1053 log.go:172] (0xc000658dc0) Reply frame received for 3\nI0526 21:35:19.863265 1053 log.go:172] (0xc000658dc0) (0xc0009800a0) Create stream\nI0526 21:35:19.863284 1053 log.go:172] (0xc000658dc0) (0xc0009800a0) Stream added, broadcasting: 5\nI0526 21:35:19.865482 1053 log.go:172] (0xc000658dc0) Reply frame received for 5\nI0526 21:35:19.925727 1053 log.go:172] (0xc000658dc0) Data frame received for 5\nI0526 21:35:19.925750 1053 log.go:172] (0xc0009800a0) (5) Data frame handling\nI0526 21:35:19.925765 1053 log.go:172] (0xc0009800a0) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0526 21:35:19.958096 1053 log.go:172] (0xc000658dc0) Data frame received for 3\nI0526 21:35:19.958127 1053 log.go:172] (0xc000980000) (3) Data frame handling\nI0526 21:35:19.958156 1053 log.go:172] (0xc000980000) (3) Data frame sent\nI0526 21:35:19.958175 1053 log.go:172] (0xc000658dc0) Data frame received for 3\nI0526 21:35:19.958192 1053 log.go:172] (0xc000980000) (3) Data frame handling\nI0526 21:35:19.958390 1053 log.go:172] (0xc000658dc0) Data frame received for 5\nI0526 21:35:19.958406 1053 log.go:172] (0xc0009800a0) (5) Data frame handling\nI0526 21:35:19.960032 1053 log.go:172] (0xc000658dc0) Data frame received for 1\nI0526 21:35:19.960048 1053 log.go:172] (0xc0007ba140) (1) Data frame handling\nI0526 21:35:19.960074 1053 log.go:172] (0xc0007ba140) (1) Data frame sent\nI0526 21:35:19.960200 1053 log.go:172] (0xc000658dc0) (0xc0007ba140) Stream removed, 
broadcasting: 1\nI0526 21:35:19.960438 1053 log.go:172] (0xc000658dc0) Go away received\nI0526 21:35:19.960493 1053 log.go:172] (0xc000658dc0) (0xc0007ba140) Stream removed, broadcasting: 1\nI0526 21:35:19.960512 1053 log.go:172] (0xc000658dc0) (0xc000980000) Stream removed, broadcasting: 3\nI0526 21:35:19.960526 1053 log.go:172] (0xc000658dc0) (0xc0009800a0) Stream removed, broadcasting: 5\n" May 26 21:35:19.966: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 26 21:35:19.966: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 26 21:35:19.966: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6846 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 26 21:35:20.219: INFO: stderr: "I0526 21:35:20.101819 1076 log.go:172] (0xc000a006e0) (0xc0006cbb80) Create stream\nI0526 21:35:20.101886 1076 log.go:172] (0xc000a006e0) (0xc0006cbb80) Stream added, broadcasting: 1\nI0526 21:35:20.104533 1076 log.go:172] (0xc000a006e0) Reply frame received for 1\nI0526 21:35:20.104575 1076 log.go:172] (0xc000a006e0) (0xc000560000) Create stream\nI0526 21:35:20.104587 1076 log.go:172] (0xc000a006e0) (0xc000560000) Stream added, broadcasting: 3\nI0526 21:35:20.105971 1076 log.go:172] (0xc000a006e0) Reply frame received for 3\nI0526 21:35:20.106038 1076 log.go:172] (0xc000a006e0) (0xc000022000) Create stream\nI0526 21:35:20.106061 1076 log.go:172] (0xc000a006e0) (0xc000022000) Stream added, broadcasting: 5\nI0526 21:35:20.107028 1076 log.go:172] (0xc000a006e0) Reply frame received for 5\nI0526 21:35:20.182649 1076 log.go:172] (0xc000a006e0) Data frame received for 5\nI0526 21:35:20.182688 1076 log.go:172] (0xc000022000) (5) Data frame handling\nI0526 21:35:20.182703 1076 log.go:172] (0xc000022000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0526 21:35:20.211649 1076 log.go:172] (0xc000a006e0) Data frame received for 3\nI0526 21:35:20.211671 1076 log.go:172] (0xc000560000) (3) Data frame handling\nI0526 21:35:20.211693 1076 log.go:172] (0xc000560000) (3) Data frame sent\nI0526 21:35:20.211942 1076 log.go:172] (0xc000a006e0) Data frame received for 3\nI0526 21:35:20.211968 1076 log.go:172] (0xc000560000) (3) Data frame handling\nI0526 21:35:20.212094 1076 log.go:172] (0xc000a006e0) Data frame received for 5\nI0526 21:35:20.212106 1076 log.go:172] (0xc000022000) (5) Data frame handling\nI0526 21:35:20.214303 1076 log.go:172] (0xc000a006e0) Data frame received for 1\nI0526 21:35:20.214322 1076 log.go:172] (0xc0006cbb80) (1) Data frame handling\nI0526 21:35:20.214329 1076 log.go:172] (0xc0006cbb80) (1) Data frame sent\nI0526 21:35:20.214338 1076 log.go:172] (0xc000a006e0) (0xc0006cbb80) Stream removed, broadcasting: 1\nI0526 21:35:20.214377 1076 log.go:172] (0xc000a006e0) Go away received\nI0526 21:35:20.214643 1076 log.go:172] (0xc000a006e0) (0xc0006cbb80) Stream removed, broadcasting: 1\nI0526 21:35:20.214656 1076 log.go:172] (0xc000a006e0) (0xc000560000) Stream removed, broadcasting: 3\nI0526 21:35:20.214663 1076 log.go:172] (0xc000a006e0) (0xc000022000) Stream removed, broadcasting: 5\n" May 26 21:35:20.219: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 26 21:35:20.219: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 26 21:35:20.219: INFO: Waiting 
for statefulset status.replicas updated to 0 May 26 21:35:20.223: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2 May 26 21:35:30.232: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 26 21:35:30.232: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false May 26 21:35:30.232: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false May 26 21:35:30.265: INFO: POD NODE PHASE GRACE CONDITIONS May 26 21:35:30.265: INFO: ss-0 jerma-worker Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC }] May 26 21:35:30.265: INFO: ss-1 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:30.265: INFO: ss-2 jerma-worker2 Running [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:30.265: INFO: May 26 21:35:30.265: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 21:35:31.412: INFO: POD NODE PHASE GRACE CONDITIONS May 26 21:35:31.412: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC }] May 26 21:35:31.412: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:31.412: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 
21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:31.412: INFO: May 26 21:35:31.412: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 21:35:32.418: INFO: POD NODE PHASE GRACE CONDITIONS May 26 21:35:32.418: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC }] May 26 21:35:32.418: INFO: ss-1 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:32.418: INFO: ss-2 jerma-worker2 Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:32.418: INFO: May 26 21:35:32.418: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 21:35:33.423: INFO: POD NODE PHASE GRACE CONDITIONS May 26 21:35:33.423: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC }] May 26 21:35:33.423: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:33.423: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:33.423: INFO: May 26 
21:35:33.423: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 21:35:34.429: INFO: POD NODE PHASE GRACE CONDITIONS May 26 21:35:34.429: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC }] May 26 21:35:34.429: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:34.429: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:34.429: INFO: May 26 21:35:34.429: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 21:35:35.435: INFO: POD NODE PHASE GRACE CONDITIONS May 26 21:35:35.435: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC }] May 26 21:35:35.435: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:35.435: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:35.435: INFO: May 26 21:35:35.435: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 21:35:36.440: INFO: POD NODE PHASE GRACE CONDITIONS May 26 21:35:36.440: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 
00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC }] May 26 21:35:36.440: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:36.440: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:36.440: INFO: May 26 21:35:36.440: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 21:35:37.445: INFO: POD NODE PHASE GRACE CONDITIONS May 26 21:35:37.446: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC }] May 26 21:35:37.446: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:37.446: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:37.446: INFO: May 26 21:35:37.446: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 21:35:38.456: INFO: POD NODE PHASE GRACE CONDITIONS May 26 21:35:38.456: INFO: ss-0 jerma-worker Running 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 
0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:19 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:35 +0000 UTC }] May 26 21:35:38.456: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:38.456: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:38.456: INFO: May 26 21:35:38.456: INFO: StatefulSet ss has not reached scale 0, at 3 May 26 21:35:39.461: INFO: POD NODE PHASE GRACE CONDITIONS May 26 21:35:39.461: INFO: ss-1 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:39.461: INFO: ss-2 jerma-worker2 Pending 30s [{Initialized True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC } {Ready False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {ContainersReady False 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:35:20 +0000 UTC ContainersNotReady containers with unready status: [webserver]} {PodScheduled True 0001-01-01 00:00:00 +0000 UTC 2020-05-26 21:34:58 +0000 UTC }] May 26 21:35:39.461: INFO: May 26 21:35:39.461: INFO: StatefulSet ss has not reached scale 0, at 2 STEP: Scaling down stateful set ss to 0 replicas and waiting until no pods are running in namespace statefulset-6846 May 26 21:35:40.465: INFO: Scaling statefulset ss to 0 May 26 21:35:40.474: INFO: Waiting for statefulset status.replicas updated to 0 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 26 21:35:40.476: INFO: Deleting all statefulset in ns statefulset-6846 May 26 21:35:40.478: INFO: Scaling statefulset ss to 0 May 26 21:35:40.485: INFO: Waiting for statefulset status.replicas updated to 0 May 26 21:35:40.487: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:35:40.518: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-6846" for this suite.
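The readiness flapping recorded above comes from a simple trick: moving index.html out of the httpd document root makes the webserver container's readiness check fail (the pods drop to Ready=false), and moving it back restores Ready=true. A minimal standalone sketch of that pattern, reusing the StatefulSet ss, namespace statefulset-6846, and pod ss-0 exactly as they appear in this run:

    # Break readiness: hide the page the readiness check looks for
    kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6846 ss-0 -- \
      /bin/sh -x -c 'mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'

    # Restore readiness: put the page back so the check succeeds again
    kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-6846 ss-0 -- \
      /bin/sh -x -c 'mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'

The trailing || true keeps each exec exiting 0 even when /tmp/index.html does not exist yet, which is why the "mv: can't rename '/tmp/index.html': No such file or directory" lines for the freshly created ss-1 and ss-2 above did not fail the scale-up step.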
• [SLOW TEST:65.383 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Burst scaling should run to completion even with unhealthy pods [Slow] [Conformance]","total":278,"completed":94,"skipped":1543,"failed":0} S ------------------------------ [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:35:40.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1141 A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1141;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1141 A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1141;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1141.svc A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-1141.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1141.svc A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-1141.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1141.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-1141.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1141.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-1141.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1141.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-1141.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1141.svc SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-1141.svc;podARec=$$(hostname -i| awk -F. 
'{print $$1"-"$$2"-"$$3"-"$$4".dns-1141.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 10.96.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.96.10_udp@PTR;check="$$(dig +tcp +noall +answer +search 10.96.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.96.10_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service;check="$$(dig +tcp +noall +answer +search dns-test-service A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1141 A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1141;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1141 A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1141;check="$$(dig +notcp +noall +answer +search dns-test-service.dns-1141.svc A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-1141.svc;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-1141.svc A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-1141.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-1141.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-1141.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-1141.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-1141.svc;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-1141.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-1141.svc;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-1141.svc SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-1141.svc;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-1141.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 10.96.110.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.110.96.10_udp@PTR;check="$$(dig +tcp +noall +answer +search 10.96.110.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.110.96.10_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 26 21:35:46.700: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:46.704: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:46.707: INFO: Unable to read wheezy_udp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:46.711: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:46.714: INFO: Unable to read wheezy_udp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:46.717: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:46.720: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:46.723: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:46.744: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:46.748: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:46.751: INFO: Unable to read jessie_udp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:46.754: INFO: Unable to read jessie_tcp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:46.757: INFO: Unable to read jessie_udp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:46.760: INFO: Unable to read 
jessie_tcp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:46.764: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:46.767: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:46.786: INFO: Lookups using dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1141 wheezy_tcp@dns-test-service.dns-1141 wheezy_udp@dns-test-service.dns-1141.svc wheezy_tcp@dns-test-service.dns-1141.svc wheezy_udp@_http._tcp.dns-test-service.dns-1141.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1141.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1141 jessie_tcp@dns-test-service.dns-1141 jessie_udp@dns-test-service.dns-1141.svc jessie_tcp@dns-test-service.dns-1141.svc jessie_udp@_http._tcp.dns-test-service.dns-1141.svc jessie_tcp@_http._tcp.dns-test-service.dns-1141.svc] May 26 21:35:51.792: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:51.796: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:51.799: INFO: Unable to read wheezy_udp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:51.803: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:51.806: INFO: Unable to read wheezy_udp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:51.810: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:51.813: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:51.816: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:51.838: INFO: Unable to read 
jessie_udp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:51.841: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:51.845: INFO: Unable to read jessie_udp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:51.848: INFO: Unable to read jessie_tcp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:51.850: INFO: Unable to read jessie_udp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:51.854: INFO: Unable to read jessie_tcp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:51.856: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:51.859: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:51.880: INFO: Lookups using dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1141 wheezy_tcp@dns-test-service.dns-1141 wheezy_udp@dns-test-service.dns-1141.svc wheezy_tcp@dns-test-service.dns-1141.svc wheezy_udp@_http._tcp.dns-test-service.dns-1141.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1141.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1141 jessie_tcp@dns-test-service.dns-1141 jessie_udp@dns-test-service.dns-1141.svc jessie_tcp@dns-test-service.dns-1141.svc jessie_udp@_http._tcp.dns-test-service.dns-1141.svc jessie_tcp@_http._tcp.dns-test-service.dns-1141.svc] May 26 21:35:56.791: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:56.795: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:56.799: INFO: Unable to read wheezy_udp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:56.802: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1141 from pod 
dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:56.838: INFO: Unable to read wheezy_udp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:56.841: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:56.844: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:56.846: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:56.862: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:56.864: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:56.867: INFO: Unable to read jessie_udp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:56.870: INFO: Unable to read jessie_tcp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:56.872: INFO: Unable to read jessie_udp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:56.875: INFO: Unable to read jessie_tcp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:56.878: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:56.881: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:35:56.900: INFO: Lookups using dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1141 wheezy_tcp@dns-test-service.dns-1141 wheezy_udp@dns-test-service.dns-1141.svc wheezy_tcp@dns-test-service.dns-1141.svc 
wheezy_udp@_http._tcp.dns-test-service.dns-1141.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1141.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1141 jessie_tcp@dns-test-service.dns-1141 jessie_udp@dns-test-service.dns-1141.svc jessie_tcp@dns-test-service.dns-1141.svc jessie_udp@_http._tcp.dns-test-service.dns-1141.svc jessie_tcp@_http._tcp.dns-test-service.dns-1141.svc] May 26 21:36:01.816: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:01.820: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:01.823: INFO: Unable to read wheezy_udp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:01.826: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:01.828: INFO: Unable to read wheezy_udp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:01.831: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:01.834: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:01.837: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:01.856: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:01.858: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:01.861: INFO: Unable to read jessie_udp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:01.863: INFO: Unable to read jessie_tcp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:01.866: INFO: Unable to read jessie_udp@dns-test-service.dns-1141.svc from pod 
dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:01.868: INFO: Unable to read jessie_tcp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:01.871: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:01.873: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:01.895: INFO: Lookups using dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1141 wheezy_tcp@dns-test-service.dns-1141 wheezy_udp@dns-test-service.dns-1141.svc wheezy_tcp@dns-test-service.dns-1141.svc wheezy_udp@_http._tcp.dns-test-service.dns-1141.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1141.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1141 jessie_tcp@dns-test-service.dns-1141 jessie_udp@dns-test-service.dns-1141.svc jessie_tcp@dns-test-service.dns-1141.svc jessie_udp@_http._tcp.dns-test-service.dns-1141.svc jessie_tcp@_http._tcp.dns-test-service.dns-1141.svc] May 26 21:36:06.791: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:06.795: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:06.798: INFO: Unable to read wheezy_udp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:06.801: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:06.804: INFO: Unable to read wheezy_udp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:06.806: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:06.808: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:06.810: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1141.svc from pod 
dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:06.827: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:06.829: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:06.831: INFO: Unable to read jessie_udp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:06.834: INFO: Unable to read jessie_tcp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:06.836: INFO: Unable to read jessie_udp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:06.851: INFO: Unable to read jessie_tcp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:06.875: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:06.878: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:06.947: INFO: Lookups using dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec failed for: [wheezy_udp@dns-test-service wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1141 wheezy_tcp@dns-test-service.dns-1141 wheezy_udp@dns-test-service.dns-1141.svc wheezy_tcp@dns-test-service.dns-1141.svc wheezy_udp@_http._tcp.dns-test-service.dns-1141.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1141.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1141 jessie_tcp@dns-test-service.dns-1141 jessie_udp@dns-test-service.dns-1141.svc jessie_tcp@dns-test-service.dns-1141.svc jessie_udp@_http._tcp.dns-test-service.dns-1141.svc jessie_tcp@_http._tcp.dns-test-service.dns-1141.svc] May 26 21:36:11.798: INFO: Unable to read wheezy_udp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:11.801: INFO: Unable to read wheezy_tcp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:11.804: INFO: Unable to read wheezy_udp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the 
server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:11.807: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:11.811: INFO: Unable to read wheezy_udp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:11.814: INFO: Unable to read wheezy_tcp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:11.816: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:11.819: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:11.844: INFO: Unable to read jessie_udp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:11.846: INFO: Unable to read jessie_tcp@dns-test-service from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:11.849: INFO: Unable to read jessie_udp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:11.851: INFO: Unable to read jessie_tcp@dns-test-service.dns-1141 from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:11.854: INFO: Unable to read jessie_udp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:11.856: INFO: Unable to read jessie_tcp@dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:11.859: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:11.862: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-1141.svc from pod dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec: the server could not find the requested resource (get pods dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec) May 26 21:36:11.905: INFO: Lookups using dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec failed for: [wheezy_udp@dns-test-service 
wheezy_tcp@dns-test-service wheezy_udp@dns-test-service.dns-1141 wheezy_tcp@dns-test-service.dns-1141 wheezy_udp@dns-test-service.dns-1141.svc wheezy_tcp@dns-test-service.dns-1141.svc wheezy_udp@_http._tcp.dns-test-service.dns-1141.svc wheezy_tcp@_http._tcp.dns-test-service.dns-1141.svc jessie_udp@dns-test-service jessie_tcp@dns-test-service jessie_udp@dns-test-service.dns-1141 jessie_tcp@dns-test-service.dns-1141 jessie_udp@dns-test-service.dns-1141.svc jessie_tcp@dns-test-service.dns-1141.svc jessie_udp@_http._tcp.dns-test-service.dns-1141.svc jessie_tcp@_http._tcp.dns-test-service.dns-1141.svc]
May 26 21:36:16.867: INFO: DNS probes using dns-1141/dns-test-57ee75c2-f7e3-4bb3-bdf7-962ee5dad9ec succeeded
STEP: deleting the pod
STEP: deleting the test service
STEP: deleting the test headless service
[AfterEach] [sig-network] DNS
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 21:36:17.609: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "dns-1141" for this suite.
• [SLOW TEST:37.130 seconds]
[sig-network] DNS
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] DNS should resolve DNS of partial qualified names for services [LinuxOnly] [Conformance]","total":278,"completed":95,"skipped":1544,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
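(Note: the long runs of "Unable to read ..." lines above are the test's normal polling loop: the probe pod re-runs its lookups every five seconds, as the 21:36:01 / 21:36:06 / 21:36:11 batches show, until the result files become readable and the run is declared a success.)

A minimal sketch for re-checking the same partial-qualified names by hand, assuming the dns-test-service Service from this test still exists in namespace dns-1141; the throwaway pod names and the dnsutils image tag below are illustrative, not values from this run:

  $ kubectl run dns-check --rm -i --restart=Never \
      --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.1 \
      --namespace=dns-1141 -- nslookup dns-test-service.dns-1141.svc
  # SRV records for the named port use the _port._protocol prefix:
  $ kubectl run dns-check-srv --rm -i --restart=Never \
      --image=gcr.io/kubernetes-e2e-test-images/dnsutils:1.1 \
      --namespace=dns-1141 -- nslookup -type=srv _http._tcp.dns-test-service.dns-1141.svc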
------------------------------
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 26 21:36:17.679: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename webhook
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87
STEP: Setting up server cert
STEP: Create role binding to let webhook read extension-apiserver-authentication
STEP: Deploying the webhook pod
STEP: Wait for the deployment to be ready
May 26 21:36:18.327: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set
May 26 21:36:20.338: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125778, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125778, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125778, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125778, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
May 26 21:36:22.342: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125778, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125778, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125778, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726125778, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)}
STEP: Deploying the webhook service
STEP: Verifying the service has paired with the endpoint
May 26 21:36:25.403: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1
[It] should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
May 26 21:36:25.407: INFO: >>> kubeConfig: /root/.kube/config
STEP: Registering the mutating webhook for custom resource e2e-test-webhook-6115-crds.webhook.example.com via the AdmissionRegistration API
STEP: Creating a custom resource that should be mutated by the webhook
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 21:36:26.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "webhook-532" for this suite.
STEP: Destroying namespace "webhook-532-markers" for this suite.
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102
• [SLOW TEST:9.067 seconds]
[sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should mutate custom resource with pruning [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with pruning [Conformance]","total":278,"completed":96,"skipped":1579,"failed":0}
SSSSS
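The registration step above corresponds to a MutatingWebhookConfiguration roughly like the sketch below. The service name (e2e-test-webhook), namespace (webhook-532), and CRD group/plural (webhook.example.com / e2e-test-webhook-6115-crds) are taken from the log; the configuration name, webhook name, path, and caBundle are illustrative placeholders, not values from this run:

  $ cat <<'EOF' | kubectl apply -f -
  apiVersion: admissionregistration.k8s.io/v1
  kind: MutatingWebhookConfiguration
  metadata:
    name: example-crd-mutator                # hypothetical name
  webhooks:
  - name: mutate-crd.webhook.example.com     # hypothetical webhook name
    clientConfig:
      service:
        name: e2e-test-webhook               # webhook service from the log
        namespace: webhook-532
        path: /mutating-custom-resource      # illustrative path
      caBundle: Cg==                         # placeholder; use the CA that signed the serving cert
    rules:
    - apiGroups: ["webhook.example.com"]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["e2e-test-webhook-6115-crds"]
    sideEffects: None
    admissionReviewVersions: ["v1", "v1beta1"]
  EOF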
------------------------------
[sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 26 21:36:26.747: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename resourcequota
STEP: Waiting for a default service account to be provisioned in namespace
[It] should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a ResourceQuota with terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a ResourceQuota with not terminating scope
STEP: Ensuring ResourceQuota status is calculated
STEP: Creating a long running pod
STEP: Ensuring resource quota with not terminating scope captures the pod usage
STEP: Ensuring resource quota with terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
STEP: Creating a terminating pod
STEP: Ensuring resource quota with terminating scope captures the pod usage
STEP: Ensuring resource quota with not terminating scope ignored the pod usage
STEP: Deleting the pod
STEP: Ensuring resource quota status released the pod usage
[AfterEach] [sig-api-machinery] ResourceQuota
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 21:36:43.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "resourcequota-4991" for this suite.
• [SLOW TEST:16.354 seconds]
[sig-api-machinery] ResourceQuota
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  should verify ResourceQuota with terminating scopes. [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] ResourceQuota should verify ResourceQuota with terminating scopes. [Conformance]","total":278,"completed":97,"skipped":1584,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
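The scoped-quota behavior this test verifies can be reproduced with kubectl create quota, which accepts a --scopes flag; the namespace and quota names below are hypothetical:

  $ kubectl create namespace quota-demo
  $ kubectl create quota quota-terminating --hard=pods=1 --scopes=Terminating --namespace=quota-demo
  $ kubectl create quota quota-not-terminating --hard=pods=1 --scopes=NotTerminating --namespace=quota-demo
  # A pod with spec.activeDeadlineSeconds set is "terminating" and counts only
  # against the first quota; a long-running pod (no deadline) counts only
  # against the second, which is exactly what the STEPs above check.
  $ kubectl describe quota --namespace=quota-demo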
------------------------------
[sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 26 21:36:43.102: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename kubectl
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272
[It] should create a job from an image, then delete the job [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: executing a command with run --rm and attach with stdin
May 26 21:36:43.147: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=kubectl-1543 run e2e-test-rm-busybox-job --image=docker.io/library/busybox:1.29 --rm=true --generator=job/v1 --restart=OnFailure --attach=true --stdin -- sh -c cat && echo 'stdin closed''
May 26 21:36:45.972: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\nIf you don't see a command prompt, try pressing enter.\nI0526 21:36:45.902390 1097 log.go:172] (0xc000b4e0b0) (0xc000b940a0) Create stream\nI0526 21:36:45.902435 1097 log.go:172] (0xc000b4e0b0) (0xc000b940a0) Stream added, broadcasting: 1\nI0526 21:36:45.904795 1097 log.go:172] (0xc000b4e0b0) Reply frame received for 1\nI0526 21:36:45.904828 1097 log.go:172] (0xc000b4e0b0) (0xc000679900) Create stream\nI0526 21:36:45.904840 1097 log.go:172] (0xc000b4e0b0) (0xc000679900) Stream added, broadcasting: 3\nI0526 21:36:45.905912 1097 log.go:172] (0xc000b4e0b0) Reply frame received for 3\nI0526 21:36:45.905965 1097 log.go:172] (0xc000b4e0b0) (0xc000b941e0) Create stream\nI0526 21:36:45.905980 1097 log.go:172] (0xc000b4e0b0) (0xc000b941e0) Stream added, broadcasting: 5\nI0526 21:36:45.907190 1097 log.go:172] (0xc000b4e0b0) Reply frame received for 5\nI0526 21:36:45.907222 1097 log.go:172] (0xc000b4e0b0) (0xc000b94320) Create stream\nI0526 21:36:45.907231 1097 log.go:172] (0xc000b4e0b0) (0xc000b94320) Stream added, broadcasting: 7\nI0526 21:36:45.908147 1097 log.go:172] (0xc000b4e0b0) Reply frame received for 7\nI0526 21:36:45.908323 1097 log.go:172] (0xc000679900) (3) Writing data frame\nI0526 21:36:45.908463 1097 log.go:172] (0xc000679900) (3) Writing data frame\nI0526 21:36:45.909637 1097 log.go:172] (0xc000b4e0b0) Data frame received for 5\nI0526 21:36:45.909676 1097 log.go:172] (0xc000b941e0) (5) Data frame handling\nI0526 21:36:45.909707 1097 log.go:172] (0xc000b941e0) (5) Data frame sent\nI0526 21:36:45.910344 1097 log.go:172] (0xc000b4e0b0) Data frame received for 5\nI0526 21:36:45.910366 1097 log.go:172] (0xc000b941e0) (5) Data frame handling\nI0526 21:36:45.910383 1097 log.go:172] (0xc000b941e0) (5) Data frame sent\nI0526 21:36:45.943533 1097 log.go:172] (0xc000b4e0b0) Data frame received for 7\nI0526 21:36:45.943550 1097 log.go:172] (0xc000b94320) (7) Data frame handling\nI0526 21:36:45.943575 1097 log.go:172] (0xc000b4e0b0) Data frame received for 5\nI0526 21:36:45.943600 1097 log.go:172] (0xc000b941e0) (5) Data frame handling\nI0526 21:36:45.944031 1097 log.go:172] (0xc000b4e0b0) Data frame received for 1\nI0526 21:36:45.944050 1097 log.go:172] (0xc000b940a0) (1) Data frame handling\nI0526 21:36:45.944069 1097 log.go:172] (0xc000b940a0) (1) Data frame sent\nI0526 21:36:45.944183 1097 log.go:172] (0xc000b4e0b0) (0xc000b940a0) Stream removed, broadcasting: 1\nI0526 21:36:45.944456 1097 log.go:172] (0xc000b4e0b0) (0xc000679900) Stream removed, broadcasting: 3\nI0526 21:36:45.944507 1097 log.go:172] (0xc000b4e0b0) (0xc000b940a0) Stream removed, broadcasting: 1\nI0526 21:36:45.944527 1097 log.go:172] (0xc000b4e0b0) (0xc000679900) Stream removed, broadcasting: 3\nI0526 21:36:45.944539 1097 log.go:172] (0xc000b4e0b0) (0xc000b941e0) Stream removed, broadcasting: 5\nI0526 21:36:45.944635 1097 log.go:172] (0xc000b4e0b0) Go away received\nI0526 21:36:45.944829 1097 log.go:172] (0xc000b4e0b0) (0xc000b94320) Stream removed, broadcasting: 7\n"
May 26 21:36:45.972: INFO: stdout: "abcd1234stdin closed\njob.batch \"e2e-test-rm-busybox-job\" deleted\n"
STEP: verifying the job e2e-test-rm-busybox-job was deleted
[AfterEach] [sig-cli] Kubectl client
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 21:36:47.979: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "kubectl-1543" for this suite.
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":98,"skipped":1630,"failed":0}
SSSS
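The command under test can be run directly; this is the same invocation the framework used, on kubectl v1.17 where --generator=job/v1 still exists (it is deprecated and was later removed). --rm deletes the job once the attached session ends, which is what the final verification step checks:

  $ echo abcd1234 | kubectl run e2e-test-rm-busybox-job \
      --image=docker.io/library/busybox:1.29 \
      --rm=true --generator=job/v1 --restart=OnFailure \
      --attach=true --stdin -- sh -c 'cat && echo stdin closed'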
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run --rm job should create a job from an image, then delete the job [Conformance]","total":278,"completed":98,"skipped":1630,"failed":0} SSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:36:47.987: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 26 21:36:48.069: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:36:55.629: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-4548" for this suite. • [SLOW TEST:7.687 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartNever pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartNever pod [Conformance]","total":278,"completed":99,"skipped":1634,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:36:55.675: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve a basic endpoint from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service endpoint-test2 in namespace services-140 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-140 to expose endpoints map[] May 26 21:36:55.817: INFO: Get endpoints failed (49.247446ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found May 26 21:36:56.821: INFO: successfully validated that service endpoint-test2 in namespace services-140 exposes endpoints map[] (1.053589852s elapsed) STEP: Creating pod pod1 in namespace services-140 STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-140 to expose 
------------------------------
[sig-network] Services should serve a basic endpoint from pods [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 26 21:36:55.675: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename services
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139
[It] should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: creating service endpoint-test2 in namespace services-140
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-140 to expose endpoints map[]
May 26 21:36:55.817: INFO: Get endpoints failed (49.247446ms elapsed, ignoring for 5s): endpoints "endpoint-test2" not found
May 26 21:36:56.821: INFO: successfully validated that service endpoint-test2 in namespace services-140 exposes endpoints map[] (1.053589852s elapsed)
STEP: Creating pod pod1 in namespace services-140
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-140 to expose endpoints map[pod1:[80]]
May 26 21:36:59.982: INFO: successfully validated that service endpoint-test2 in namespace services-140 exposes endpoints map[pod1:[80]] (3.153773468s elapsed)
STEP: Creating pod pod2 in namespace services-140
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-140 to expose endpoints map[pod1:[80] pod2:[80]]
May 26 21:37:03.216: INFO: successfully validated that service endpoint-test2 in namespace services-140 exposes endpoints map[pod1:[80] pod2:[80]] (3.230584012s elapsed)
STEP: Deleting pod pod1 in namespace services-140
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-140 to expose endpoints map[pod2:[80]]
May 26 21:37:04.251: INFO: successfully validated that service endpoint-test2 in namespace services-140 exposes endpoints map[pod2:[80]] (1.031664461s elapsed)
STEP: Deleting pod pod2 in namespace services-140
STEP: waiting up to 3m0s for service endpoint-test2 in namespace services-140 to expose endpoints map[]
May 26 21:37:05.270: INFO: successfully validated that service endpoint-test2 in namespace services-140 exposes endpoints map[] (1.014781972s elapsed)
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 21:37:05.431: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "services-140" for this suite.
[AfterEach] [sig-network] Services
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143
• [SLOW TEST:9.764 seconds]
[sig-network] Services
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23
  should serve a basic endpoint from pods [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-network] Services should serve a basic endpoint from pods [Conformance]","total":278,"completed":100,"skipped":1679,"failed":0}
SSSSS
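The object the test is polling can be inspected the same way by hand. A test namespace like services-140 only exists while the suite runs, but the pattern is general: the Endpoints resource for a Service lists the ready pods matching the Service's selector, and entries appear and disappear as matching pods are created and deleted:

  $ kubectl get endpoints endpoint-test2 --namespace=services-140 --watch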
------------------------------
[sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 26 21:37:05.440: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir 0777 on tmpfs
May 26 21:37:05.555: INFO: Waiting up to 5m0s for pod "pod-22dd1c7e-e5cc-45d0-8927-a1bfe1e1311b" in namespace "emptydir-3568" to be "success or failure"
May 26 21:37:05.586: INFO: Pod "pod-22dd1c7e-e5cc-45d0-8927-a1bfe1e1311b": Phase="Pending", Reason="", readiness=false. Elapsed: 31.070346ms
May 26 21:37:07.596: INFO: Pod "pod-22dd1c7e-e5cc-45d0-8927-a1bfe1e1311b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.04134973s
May 26 21:37:09.602: INFO: Pod "pod-22dd1c7e-e5cc-45d0-8927-a1bfe1e1311b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.046646241s
STEP: Saw pod success
May 26 21:37:09.602: INFO: Pod "pod-22dd1c7e-e5cc-45d0-8927-a1bfe1e1311b" satisfied condition "success or failure"
May 26 21:37:09.605: INFO: Trying to get logs from node jerma-worker2 pod pod-22dd1c7e-e5cc-45d0-8927-a1bfe1e1311b container test-container:
STEP: delete the pod
May 26 21:37:09.693: INFO: Waiting for pod pod-22dd1c7e-e5cc-45d0-8927-a1bfe1e1311b to disappear
May 26 21:37:09.702: INFO: Pod pod-22dd1c7e-e5cc-45d0-8927-a1bfe1e1311b no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 21:37:09.702: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-3568" for this suite.
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":101,"skipped":1684,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 26 21:37:09.708: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename container-probe
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51
[It] should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod test-webserver-9a3b1912-0ae8-444f-8bb0-ac391098eadc in namespace container-probe-5012
May 26 21:37:13.790: INFO: Started pod test-webserver-9a3b1912-0ae8-444f-8bb0-ac391098eadc in namespace container-probe-5012
STEP: checking the pod's current state and verifying that restartCount is present
May 26 21:37:13.793: INFO: Initial restart count of pod test-webserver-9a3b1912-0ae8-444f-8bb0-ac391098eadc is 0
STEP: deleting the pod
[AfterEach] [k8s.io] Probing container
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 21:41:14.608: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "container-probe-5012" for this suite.
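The probe pod above serves HTTP and must keep restartCount at 0 for the whole observation window (roughly 21:37:13 to 21:41:14 here). A sketch of an equivalent pod, using an illustrative nginx image rather than the test-webserver image the framework deploys:

  $ cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: liveness-demo                            # hypothetical name
  spec:
    containers:
    - name: webserver
      image: docker.io/library/nginx:1.17-alpine   # illustrative image
      livenessProbe:
        httpGet:
          path: /                                  # the e2e pod probes its own health path
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
        failureThreshold: 3
  EOF
  # Confirm the restart count stays at zero while the probe keeps passing:
  $ kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'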
• [SLOW TEST:244.965 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should *not* be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":102,"skipped":1713,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:41:14.673: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD with validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:41:14.713: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with known and required properties May 26 21:41:17.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2554 create -f -' May 26 21:41:21.080: INFO: stderr: "" May 26 21:41:21.080: INFO: stdout: "e2e-test-crd-publish-openapi-7307-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 26 21:41:21.080: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2554 delete e2e-test-crd-publish-openapi-7307-crds test-foo' May 26 21:41:21.191: INFO: stderr: "" May 26 21:41:21.191: INFO: stdout: "e2e-test-crd-publish-openapi-7307-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" May 26 21:41:21.191: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2554 apply -f -' May 26 21:41:21.510: INFO: stderr: "" May 26 21:41:21.510: INFO: stdout: "e2e-test-crd-publish-openapi-7307-crd.crd-publish-openapi-test-foo.example.com/test-foo created\n" May 26 21:41:21.510: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2554 delete e2e-test-crd-publish-openapi-7307-crds test-foo' May 26 21:41:21.635: INFO: stderr: "" May 26 21:41:21.635: INFO: stdout: "e2e-test-crd-publish-openapi-7307-crd.crd-publish-openapi-test-foo.example.com \"test-foo\" deleted\n" STEP: client-side validation (kubectl create and apply) rejects request with unknown properties when disallowed by the schema May 26 21:41:21.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2554 create -f -' May 26 21:41:21.899: INFO: rc: 1 May 26 21:41:21.900: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2554 apply -f -' May 26 21:41:22.171: INFO: rc: 1 STEP: client-side validation (kubectl create and apply) rejects request without required properties May 26 21:41:22.171: INFO: Running 
'/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2554 create -f -' May 26 21:41:22.403: INFO: rc: 1 May 26 21:41:22.403: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-2554 apply -f -' May 26 21:41:22.681: INFO: rc: 1 STEP: kubectl explain works to explain CR properties May 26 21:41:22.681: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7307-crds' May 26 21:41:22.924: INFO: stderr: "" May 26 21:41:22.925: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7307-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nDESCRIPTION:\n Foo CRD for Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Foo\n\n status\t\n Status of Foo\n\n" STEP: kubectl explain works to explain CR properties recursively May 26 21:41:22.925: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7307-crds.metadata' May 26 21:41:23.167: INFO: stderr: "" May 26 21:41:23.167: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7307-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: metadata \n\nDESCRIPTION:\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n ObjectMeta is metadata that all persisted resources must have, which\n includes all objects users must create.\n\nFIELDS:\n annotations\t\n Annotations is an unstructured key value map stored with a resource that\n may be set by external tools to store and retrieve arbitrary metadata. They\n are not queryable and should be preserved when modifying objects. More\n info: http://kubernetes.io/docs/user-guide/annotations\n\n clusterName\t\n The name of the cluster which the object belongs to. This is used to\n distinguish resources with same name and namespace in different clusters.\n This field is not set anywhere right now and apiserver is going to ignore\n it if set in create or update request.\n\n creationTimestamp\t\n CreationTimestamp is a timestamp representing the server time when this\n object was created. It is not guaranteed to be set in happens-before order\n across separate operations. Clients may not set this value. It is\n represented in RFC3339 form and is in UTC. Populated by the system.\n Read-only. Null for lists. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n deletionGracePeriodSeconds\t\n Number of seconds allowed for this object to gracefully terminate before it\n will be removed from the system. Only set when deletionTimestamp is also\n set. May only be shortened. 
Read-only.\n\n deletionTimestamp\t\n DeletionTimestamp is RFC 3339 date and time at which this resource will be\n deleted. This field is set by the server when a graceful deletion is\n requested by the user, and is not directly settable by a client. The\n resource is expected to be deleted (no longer visible from resource lists,\n and not reachable by name) after the time in this field, once the\n finalizers list is empty. As long as the finalizers list contains items,\n deletion is blocked. Once the deletionTimestamp is set, this value may not\n be unset or be set further into the future, although it may be shortened or\n the resource may be deleted prior to this time. For example, a user may\n request that a pod is deleted in 30 seconds. The Kubelet will react by\n sending a graceful termination signal to the containers in the pod. After\n that 30 seconds, the Kubelet will send a hard termination signal (SIGKILL)\n to the container and after cleanup, remove the pod from the API. In the\n presence of network partitions, this object may still exist after this\n timestamp, until an administrator or automated process can determine the\n resource is fully terminated. If not set, graceful deletion of the object\n has not been requested. Populated by the system when a graceful deletion is\n requested. Read-only. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n finalizers\t<[]string>\n Must be empty before the object is deleted from the registry. Each entry is\n an identifier for the responsible component that will remove the entry from\n the list. If the deletionTimestamp of the object is non-nil, entries in\n this list can only be removed. Finalizers may be processed and removed in\n any order. Order is NOT enforced because it introduces significant risk of\n stuck finalizers. finalizers is a shared field, any actor with permission\n can reorder it. If the finalizer list is processed in order, then this can\n lead to a situation in which the component responsible for the first\n finalizer in the list is waiting for a signal (field value, external\n system, or other) produced by a component responsible for a finalizer later\n in the list, resulting in a deadlock. Without enforced ordering finalizers\n are free to order amongst themselves and are not vulnerable to ordering\n changes in the list.\n\n generateName\t\n GenerateName is an optional prefix, used by the server, to generate a\n unique name ONLY IF the Name field has not been provided. If this field is\n used, the name returned to the client will be different than the name\n passed. This value will also be combined with a unique suffix. The provided\n value has the same validation rules as the Name field, and may be truncated\n by the length of the suffix required to make the value unique on the\n server. If this field is specified and the generated name exists, the\n server will NOT return a 409 - instead, it will either return 201 Created\n or 500 with Reason ServerTimeout indicating a unique name could not be\n found in the time allotted, and the client should retry (optionally after\n the time indicated in the Retry-After header). Applied only if Name is not\n specified. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#idempotency\n\n generation\t\n A sequence number representing a specific generation of the desired state.\n Populated by the system. 
Read-only.\n\n labels\t\n Map of string keys and values that can be used to organize and categorize\n (scope and select) objects. May match selectors of replication controllers\n and services. More info: http://kubernetes.io/docs/user-guide/labels\n\n managedFields\t<[]Object>\n ManagedFields maps workflow-id and version to the set of fields that are\n managed by that workflow. This is mostly for internal housekeeping, and\n users typically shouldn't need to set or understand this field. A workflow\n can be the user's name, a controller's name, or the name of a specific\n apply path like \"ci-cd\". The set of fields is always in the version that\n the workflow used when modifying the object.\n\n name\t\n Name must be unique within a namespace. Is required when creating\n resources, although some resources may allow a client to request the\n generation of an appropriate name automatically. Name is primarily intended\n for creation idempotence and configuration definition. Cannot be updated.\n More info: http://kubernetes.io/docs/user-guide/identifiers#names\n\n namespace\t\n Namespace defines the space within each name must be unique. An empty\n namespace is equivalent to the \"default\" namespace, but \"default\" is the\n canonical representation. Not all objects are required to be scoped to a\n namespace - the value of this field for those objects will be empty. Must\n be a DNS_LABEL. Cannot be updated. More info:\n http://kubernetes.io/docs/user-guide/namespaces\n\n ownerReferences\t<[]Object>\n List of objects depended by this object. If ALL objects in the list have\n been deleted, this object will be garbage collected. If this object is\n managed by a controller, then an entry in this list will point to this\n controller, with the controller field set to true. There cannot be more\n than one managing controller.\n\n resourceVersion\t\n An opaque value that represents the internal version of this object that\n can be used by clients to determine when objects have changed. May be used\n for optimistic concurrency, change detection, and the watch operation on a\n resource or set of resources. Clients must treat these values as opaque and\n passed unmodified back to the server. They may only be valid for a\n particular resource or set of resources. Populated by the system.\n Read-only. Value must be treated as opaque by clients and . More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#concurrency-control-and-consistency\n\n selfLink\t\n SelfLink is a URL representing this object. Populated by the system.\n Read-only. DEPRECATED Kubernetes will stop propagating this field in 1.20\n release and the field is planned to be removed in 1.21 release.\n\n uid\t\n UID is the unique in time and space value for this object. It is typically\n generated by the server on successful creation of a resource and is not\n allowed to change on PUT operations. Populated by the system. 
Read-only.\n More info: http://kubernetes.io/docs/user-guide/identifiers#uids\n\n"
May 26 21:41:23.168: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7307-crds.spec'
May 26 21:41:23.451: INFO: stderr: ""
May 26 21:41:23.451: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7307-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: spec \n\nDESCRIPTION:\n Specification of Foo\n\nFIELDS:\n bars\t<[]Object>\n List of Bars and their specs.\n\n"
May 26 21:41:23.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7307-crds.spec.bars'
May 26 21:41:23.699: INFO: stderr: ""
May 26 21:41:23.699: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-7307-crd\nVERSION: crd-publish-openapi-test-foo.example.com/v1\n\nRESOURCE: bars <[]Object>\n\nDESCRIPTION:\n List of Bars and their specs.\n\nFIELDS:\n age\t\n Age of Bar.\n\n bazs\t<[]string>\n List of Bazs.\n\n name\t -required-\n Name of Bar.\n\n"
STEP: kubectl explain works to return error when explain is called on property that doesn't exist
May 26 21:41:23.699: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-7307-crds.spec.bars2'
May 26 21:41:23.945: INFO: rc: 1
[AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 21:41:26.818: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "crd-publish-openapi-2554" for this suite.
• [SLOW TEST:12.153 seconds]
[sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23
  works for CRD with validation schema [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD with validation schema [Conformance]","total":278,"completed":103,"skipped":1718,"failed":0}
SSSSSSSSSSSSSSSSSS
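The e2e CRD above only exists while the test runs, but the validation being exercised is ordinary published-OpenAPI schema checking. Against that CRD, a valid object must supply the required bars[].name field, and objects with unknown or missing properties are rejected (the rc: 1 lines above); the names here are the ones kubectl explain printed in the log, and the bar name is illustrative:

  $ kubectl explain e2e-test-crd-publish-openapi-7307-crds.spec.bars
  $ cat <<'EOF' | kubectl create --namespace=crd-publish-openapi-2554 -f -
  apiVersion: crd-publish-openapi-test-foo.example.com/v1
  kind: E2e-test-crd-publish-openapi-7307-crd
  metadata:
    name: test-foo
  spec:
    bars:
    - name: example-bar                  # "name" is the schema's required field
  EOF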
------------------------------
[sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 26 21:41:26.826: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename subpath
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] Atomic writer volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37
STEP: Setting up data
[It] should support subpaths with downward pod [LinuxOnly] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating pod pod-subpath-test-downwardapi-strd
STEP: Creating a pod to test atomic-volume-subpath
May 26 21:41:26.926: INFO: Waiting up to 5m0s for pod "pod-subpath-test-downwardapi-strd" in namespace "subpath-4396" to be "success or failure"
May 26 21:41:26.970: INFO: Pod "pod-subpath-test-downwardapi-strd": Phase="Pending", Reason="", readiness=false. Elapsed: 44.87509ms
May 26 21:41:28.975: INFO: Pod "pod-subpath-test-downwardapi-strd": Phase="Pending", Reason="", readiness=false. Elapsed: 2.048940513s
May 26 21:41:30.979: INFO: Pod "pod-subpath-test-downwardapi-strd": Phase="Running", Reason="", readiness=true. Elapsed: 4.053430185s
May 26 21:41:32.984: INFO: Pod "pod-subpath-test-downwardapi-strd": Phase="Running", Reason="", readiness=true. Elapsed: 6.058076512s
May 26 21:41:34.988: INFO: Pod "pod-subpath-test-downwardapi-strd": Phase="Running", Reason="", readiness=true. Elapsed: 8.062549429s
May 26 21:41:36.992: INFO: Pod "pod-subpath-test-downwardapi-strd": Phase="Running", Reason="", readiness=true. Elapsed: 10.066726576s
May 26 21:41:38.996: INFO: Pod "pod-subpath-test-downwardapi-strd": Phase="Running", Reason="", readiness=true. Elapsed: 12.070801212s
May 26 21:41:41.001: INFO: Pod "pod-subpath-test-downwardapi-strd": Phase="Running", Reason="", readiness=true. Elapsed: 14.075326241s
May 26 21:41:43.006: INFO: Pod "pod-subpath-test-downwardapi-strd": Phase="Running", Reason="", readiness=true. Elapsed: 16.080023117s
May 26 21:41:45.010: INFO: Pod "pod-subpath-test-downwardapi-strd": Phase="Running", Reason="", readiness=true. Elapsed: 18.084341241s
May 26 21:41:47.014: INFO: Pod "pod-subpath-test-downwardapi-strd": Phase="Running", Reason="", readiness=true. Elapsed: 20.088494106s
May 26 21:41:49.018: INFO: Pod "pod-subpath-test-downwardapi-strd": Phase="Running", Reason="", readiness=true. Elapsed: 22.092122409s
May 26 21:41:51.022: INFO: Pod "pod-subpath-test-downwardapi-strd": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.096517465s
STEP: Saw pod success
May 26 21:41:51.022: INFO: Pod "pod-subpath-test-downwardapi-strd" satisfied condition "success or failure"
May 26 21:41:51.026: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-downwardapi-strd container test-container-subpath-downwardapi-strd:
STEP: delete the pod
May 26 21:41:51.083: INFO: Waiting for pod pod-subpath-test-downwardapi-strd to disappear
May 26 21:41:51.087: INFO: Pod pod-subpath-test-downwardapi-strd no longer exists
STEP: Deleting pod pod-subpath-test-downwardapi-strd
May 26 21:41:51.087: INFO: Deleting pod "pod-subpath-test-downwardapi-strd" in namespace "subpath-4396"
[AfterEach] [sig-storage] Subpath
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 21:41:51.089: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "subpath-4396" for this suite.
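A sketch of a downward-API volume consumed through a subPath mount, which is the mechanism this test exercises. All names are illustrative, and the real e2e pod additionally re-reads the file in a loop to verify atomic updates:

  $ cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: subpath-demo                   # hypothetical name
    labels:
      zone: us-east-1a                   # example label surfaced via the downward API
  spec:
    restartPolicy: Never
    containers:
    - name: main
      image: docker.io/library/busybox:1.29
      command: ['sh', '-c', 'cat /etc/podinfo-labels']
      volumeMounts:
      - name: podinfo
        mountPath: /etc/podinfo-labels
        subPath: labels                  # mount a single file out of the volume
    volumes:
    - name: podinfo
      downwardAPI:
        items:
        - path: labels
          fieldRef:
            fieldPath: metadata.labels
  EOF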
• [SLOW TEST:24.269 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with downward pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with downward pod [LinuxOnly] [Conformance]","total":278,"completed":104,"skipped":1736,"failed":0} SSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:41:51.096: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should be possible to delete [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:41:51.285: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-7381" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should be possible to delete [NodeConformance] [Conformance]","total":278,"completed":105,"skipped":1743,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:41:51.330: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable via the environment [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap configmap-180/configmap-test-080111d2-f51f-4d89-98fc-a79486147145 STEP: Creating a pod to test consume configMaps May 26 21:41:51.416: INFO: Waiting up to 5m0s for pod "pod-configmaps-9951838e-cd8c-4e22-8f23-857c71fbecaa" in namespace "configmap-180" to be "success or failure" May 26 21:41:51.438: INFO: Pod "pod-configmaps-9951838e-cd8c-4e22-8f23-857c71fbecaa": Phase="Pending", Reason="", readiness=false. 
Elapsed: 21.579432ms May 26 21:41:53.498: INFO: Pod "pod-configmaps-9951838e-cd8c-4e22-8f23-857c71fbecaa": Phase="Pending", Reason="", readiness=false. Elapsed: 2.081631705s May 26 21:41:55.502: INFO: Pod "pod-configmaps-9951838e-cd8c-4e22-8f23-857c71fbecaa": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.085962432s STEP: Saw pod success May 26 21:41:55.502: INFO: Pod "pod-configmaps-9951838e-cd8c-4e22-8f23-857c71fbecaa" satisfied condition "success or failure" May 26 21:41:55.505: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-9951838e-cd8c-4e22-8f23-857c71fbecaa container env-test: STEP: delete the pod May 26 21:41:55.739: INFO: Waiting for pod pod-configmaps-9951838e-cd8c-4e22-8f23-857c71fbecaa to disappear May 26 21:41:55.750: INFO: Pod pod-configmaps-9951838e-cd8c-4e22-8f23-857c71fbecaa no longer exists [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:41:55.750: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-180" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should be consumable via the environment [NodeConformance] [Conformance]","total":278,"completed":106,"skipped":1754,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:41:55.757: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 26 21:41:55.837: INFO: Waiting up to 5m0s for pod "pod-d1ad76f2-72ed-4a18-9109-0285dd535700" in namespace "emptydir-938" to be "success or failure" May 26 21:41:55.857: INFO: Pod "pod-d1ad76f2-72ed-4a18-9109-0285dd535700": Phase="Pending", Reason="", readiness=false. Elapsed: 20.207225ms May 26 21:41:57.862: INFO: Pod "pod-d1ad76f2-72ed-4a18-9109-0285dd535700": Phase="Pending", Reason="", readiness=false. Elapsed: 2.024742805s May 26 21:41:59.866: INFO: Pod "pod-d1ad76f2-72ed-4a18-9109-0285dd535700": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.029390048s STEP: Saw pod success May 26 21:41:59.866: INFO: Pod "pod-d1ad76f2-72ed-4a18-9109-0285dd535700" satisfied condition "success or failure" May 26 21:41:59.870: INFO: Trying to get logs from node jerma-worker2 pod pod-d1ad76f2-72ed-4a18-9109-0285dd535700 container test-container: STEP: delete the pod May 26 21:41:59.909: INFO: Waiting for pod pod-d1ad76f2-72ed-4a18-9109-0285dd535700 to disappear May 26 21:41:59.924: INFO: Pod pod-d1ad76f2-72ed-4a18-9109-0285dd535700 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:41:59.924: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-938" for this suite. 
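The emptyDir permission tests all follow the same shape: mount a volume, set a mode, and read the effective permissions back from inside the container. A sketch for the (root,0777,default) case above; the pod name is hypothetical, and medium: "Memory" would switch the volume to tmpfs as in the earlier (non-root,0777,tmpfs) variant:

  $ cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-demo                  # hypothetical name
  spec:
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29
      command: ['sh', '-c', 'stat -c %a /test-volume']
      volumeMounts:
      - name: vol
        mountPath: /test-volume
    volumes:
    - name: vol
      emptyDir: {}                       # default medium (node-local disk)
  EOF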
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":107,"skipped":1777,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:41:59.932: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1357 STEP: creating an pod May 26 21:42:00.023: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run logs-generator --generator=run-pod/v1 --image=gcr.io/kubernetes-e2e-test-images/agnhost:2.8 --namespace=kubectl-8918 -- logs-generator --log-lines-total 100 --run-duration 20s' May 26 21:42:00.129: INFO: stderr: "" May 26 21:42:00.129: INFO: stdout: "pod/logs-generator created\n" [It] should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Waiting for log generator to start. May 26 21:42:00.129: INFO: Waiting up to 5m0s for 1 pods to be running and ready, or succeeded: [logs-generator] May 26 21:42:00.129: INFO: Waiting up to 5m0s for pod "logs-generator" in namespace "kubectl-8918" to be "running and ready, or succeeded" May 26 21:42:00.133: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 3.920696ms May 26 21:42:02.137: INFO: Pod "logs-generator": Phase="Pending", Reason="", readiness=false. Elapsed: 2.00785422s May 26 21:42:04.142: INFO: Pod "logs-generator": Phase="Running", Reason="", readiness=true. Elapsed: 4.012705313s May 26 21:42:04.142: INFO: Pod "logs-generator" satisfied condition "running and ready, or succeeded" May 26 21:42:04.142: INFO: Wanted all 1 pods to be running and ready, or succeeded. Result: true. 
Pods: [logs-generator] STEP: checking for a matching strings May 26 21:42:04.142: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8918' May 26 21:42:04.260: INFO: stderr: "" May 26 21:42:04.260: INFO: stdout: "I0526 21:42:02.411290 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/kbn5 443\nI0526 21:42:02.611620 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/pnn 345\nI0526 21:42:02.811553 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/9n7d 449\nI0526 21:42:03.011490 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/dr9 333\nI0526 21:42:03.211471 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/mj7 456\nI0526 21:42:03.411520 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/9wl 522\nI0526 21:42:03.611519 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/pff 481\nI0526 21:42:03.811504 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/v7s 506\nI0526 21:42:04.011691 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/k6t8 324\nI0526 21:42:04.211571 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/5wbn 328\n" STEP: limiting log lines May 26 21:42:04.260: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8918 --tail=1' May 26 21:42:04.370: INFO: stderr: "" May 26 21:42:04.370: INFO: stdout: "I0526 21:42:04.211571 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/5wbn 328\n" May 26 21:42:04.370: INFO: got output "I0526 21:42:04.211571 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/5wbn 328\n" STEP: limiting log bytes May 26 21:42:04.370: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8918 --limit-bytes=1' May 26 21:42:04.487: INFO: stderr: "" May 26 21:42:04.487: INFO: stdout: "I" May 26 21:42:04.487: INFO: got output "I" STEP: exposing timestamps May 26 21:42:04.487: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8918 --tail=1 --timestamps' May 26 21:42:04.593: INFO: stderr: "" May 26 21:42:04.594: INFO: stdout: "2020-05-26T21:42:04.411772459Z I0526 21:42:04.411584 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/qgv 454\n" May 26 21:42:04.594: INFO: got output "2020-05-26T21:42:04.411772459Z I0526 21:42:04.411584 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/qgv 454\n" STEP: restricting to a time range May 26 21:42:07.094: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8918 --since=1s' May 26 21:42:07.222: INFO: stderr: "" May 26 21:42:07.222: INFO: stdout: "I0526 21:42:06.411527 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/pnx 295\nI0526 21:42:06.611528 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/wxmn 371\nI0526 21:42:06.811590 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/kb4 382\nI0526 21:42:07.011515 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/ccl 560\nI0526 21:42:07.211555 1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/r47g 352\n" May 26 21:42:07.222: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs logs-generator logs-generator --namespace=kubectl-8918 --since=24h' May 26 21:42:07.327: INFO: stderr: "" May 26 21:42:07.327: 
INFO: stdout: "I0526 21:42:02.411290 1 logs_generator.go:76] 0 PUT /api/v1/namespaces/ns/pods/kbn5 443\nI0526 21:42:02.611620 1 logs_generator.go:76] 1 PUT /api/v1/namespaces/default/pods/pnn 345\nI0526 21:42:02.811553 1 logs_generator.go:76] 2 PUT /api/v1/namespaces/default/pods/9n7d 449\nI0526 21:42:03.011490 1 logs_generator.go:76] 3 PUT /api/v1/namespaces/kube-system/pods/dr9 333\nI0526 21:42:03.211471 1 logs_generator.go:76] 4 GET /api/v1/namespaces/ns/pods/mj7 456\nI0526 21:42:03.411520 1 logs_generator.go:76] 5 POST /api/v1/namespaces/ns/pods/9wl 522\nI0526 21:42:03.611519 1 logs_generator.go:76] 6 POST /api/v1/namespaces/default/pods/pff 481\nI0526 21:42:03.811504 1 logs_generator.go:76] 7 GET /api/v1/namespaces/ns/pods/v7s 506\nI0526 21:42:04.011691 1 logs_generator.go:76] 8 GET /api/v1/namespaces/default/pods/k6t8 324\nI0526 21:42:04.211571 1 logs_generator.go:76] 9 PUT /api/v1/namespaces/default/pods/5wbn 328\nI0526 21:42:04.411584 1 logs_generator.go:76] 10 POST /api/v1/namespaces/kube-system/pods/qgv 454\nI0526 21:42:04.611498 1 logs_generator.go:76] 11 PUT /api/v1/namespaces/default/pods/mg7 401\nI0526 21:42:04.811557 1 logs_generator.go:76] 12 POST /api/v1/namespaces/kube-system/pods/qpv 526\nI0526 21:42:05.011519 1 logs_generator.go:76] 13 POST /api/v1/namespaces/kube-system/pods/jtlc 320\nI0526 21:42:05.211473 1 logs_generator.go:76] 14 GET /api/v1/namespaces/ns/pods/gbf 547\nI0526 21:42:05.411448 1 logs_generator.go:76] 15 POST /api/v1/namespaces/ns/pods/2bqh 339\nI0526 21:42:05.611463 1 logs_generator.go:76] 16 GET /api/v1/namespaces/kube-system/pods/8t8 221\nI0526 21:42:05.811512 1 logs_generator.go:76] 17 PUT /api/v1/namespaces/ns/pods/s8fp 580\nI0526 21:42:06.011496 1 logs_generator.go:76] 18 PUT /api/v1/namespaces/kube-system/pods/ppx 273\nI0526 21:42:06.211484 1 logs_generator.go:76] 19 POST /api/v1/namespaces/default/pods/5bnz 379\nI0526 21:42:06.411527 1 logs_generator.go:76] 20 POST /api/v1/namespaces/ns/pods/pnx 295\nI0526 21:42:06.611528 1 logs_generator.go:76] 21 POST /api/v1/namespaces/default/pods/wxmn 371\nI0526 21:42:06.811590 1 logs_generator.go:76] 22 PUT /api/v1/namespaces/default/pods/kb4 382\nI0526 21:42:07.011515 1 logs_generator.go:76] 23 PUT /api/v1/namespaces/default/pods/ccl 560\nI0526 21:42:07.211555 1 logs_generator.go:76] 24 GET /api/v1/namespaces/default/pods/r47g 352\n" [AfterEach] Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1363 May 26 21:42:07.327: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pod logs-generator --namespace=kubectl-8918' May 26 21:42:09.492: INFO: stderr: "" May 26 21:42:09.492: INFO: stdout: "pod \"logs-generator\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:42:09.492: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8918" for this suite. 
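Each filtering STEP above maps onto one standard kubectl logs flag, so the whole sequence can be replayed by hand against the same pod (logs-generator in namespace kubectl-8918 for this run):

kubectl logs logs-generator -n kubectl-8918                        # full log so far
kubectl logs logs-generator -n kubectl-8918 --tail=1               # only the most recent line
kubectl logs logs-generator -n kubectl-8918 --limit-bytes=1        # at most one byte of output
kubectl logs logs-generator -n kubectl-8918 --tail=1 --timestamps  # prefix lines with RFC3339 timestamps
kubectl logs logs-generator -n kubectl-8918 --since=1s             # only entries from the last second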
• [SLOW TEST:9.595 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl logs /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1353 should be able to retrieve and filter logs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl logs should be able to retrieve and filter logs [Conformance]","total":278,"completed":108,"skipped":1789,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:42:09.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 26 21:42:13.640: INFO: Expected: &{} to match Container's Termination Message: -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:42:13.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-821" for this suite. 
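The termination-message case above asserts that with terminationMessagePolicy: FallbackToLogsOnError, a container that exits cleanly reports an empty message (logs are consulted only on failure). A minimal sketch under a hypothetical pod name (termination-demo):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-demo
spec:
  restartPolicy: Never
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "exit 0"]   # succeed without writing /dev/termination-log
    terminationMessagePolicy: FallbackToLogsOnError
EOF
# after the container terminates, the message should be empty
kubectl get pod termination-demo -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'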
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] as empty when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":109,"skipped":1795,"failed":0} ------------------------------ [sig-auth] ServiceAccounts should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:42:13.755: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svcaccounts STEP: Waiting for a default service account to be provisioned in namespace [It] should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: getting the auto-created API token STEP: reading a file in the container May 26 21:42:18.390: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2896 pod-service-account-1187b03c-d90c-473e-83cb-dd144ca61819 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/token' STEP: reading a file in the container May 26 21:42:18.618: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2896 pod-service-account-1187b03c-d90c-473e-83cb-dd144ca61819 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/ca.crt' STEP: reading a file in the container May 26 21:42:18.819: INFO: Running '/usr/local/bin/kubectl exec --namespace=svcaccounts-2896 pod-service-account-1187b03c-d90c-473e-83cb-dd144ca61819 -c=test -- cat /var/run/secrets/kubernetes.io/serviceaccount/namespace' [AfterEach] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:42:19.056: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svcaccounts-2896" for this suite. 
• [SLOW TEST:5.349 seconds] [sig-auth] ServiceAccounts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/auth/framework.go:23 should mount an API token into pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-auth] ServiceAccounts should mount an API token into pods [Conformance]","total":278,"completed":110,"skipped":1795,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:42:19.105: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/node/pods.go:46 [It] should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up selector STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes May 26 21:42:23.324: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0' STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 26 21:42:28.444: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed [AfterEach] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:42:28.447: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-5792" for this suite. 
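The graceful-delete flow above is the normal termination sequence: the API server stamps deletionTimestamp, the kubelet signals the container, and the object disappears once shutdown is confirmed or the grace period lapses. A sketch with a hypothetical pod name (grace-demo):

kubectl delete pod grace-demo --grace-period=30 --wait=false   # request deletion with a 30s window
# while terminating, the object is still readable and carries deletion metadata
kubectl get pod grace-demo -o jsonpath='{.metadata.deletionGracePeriodSeconds}'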
• [SLOW TEST:9.350 seconds] [k8s.io] [sig-node] Pods Extended /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 [k8s.io] Delete Grace Period /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Pods Extended [k8s.io] Delete Grace Period should be submitted and removed [Conformance]","total":278,"completed":111,"skipped":1823,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:42:28.456: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] pod should support shared volumes between containers [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating Pod STEP: Waiting for the pod running STEP: Geting the pod STEP: Reading file content from the nginx-container May 26 21:42:32.975: INFO: ExecWithOptions {Command:[/bin/sh -c cat /usr/share/volumeshare/shareddata.txt] Namespace:emptydir-6112 PodName:pod-sharedvolume-95ae7eb4-13ff-4f00-829c-9e96a3154294 ContainerName:busybox-main-container Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:42:32.975: INFO: >>> kubeConfig: /root/.kube/config I0526 21:42:33.015178 6 log.go:172] (0xc005f35760) (0xc001a69720) Create stream I0526 21:42:33.015226 6 log.go:172] (0xc005f35760) (0xc001a69720) Stream added, broadcasting: 1 I0526 21:42:33.017557 6 log.go:172] (0xc005f35760) Reply frame received for 1 I0526 21:42:33.017600 6 log.go:172] (0xc005f35760) (0xc00274d680) Create stream I0526 21:42:33.017609 6 log.go:172] (0xc005f35760) (0xc00274d680) Stream added, broadcasting: 3 I0526 21:42:33.018609 6 log.go:172] (0xc005f35760) Reply frame received for 3 I0526 21:42:33.018640 6 log.go:172] (0xc005f35760) (0xc00274d720) Create stream I0526 21:42:33.018653 6 log.go:172] (0xc005f35760) (0xc00274d720) Stream added, broadcasting: 5 I0526 21:42:33.019821 6 log.go:172] (0xc005f35760) Reply frame received for 5 I0526 21:42:33.089569 6 log.go:172] (0xc005f35760) Data frame received for 3 I0526 21:42:33.089614 6 log.go:172] (0xc00274d680) (3) Data frame handling I0526 21:42:33.089633 6 log.go:172] (0xc00274d680) (3) Data frame sent I0526 21:42:33.089728 6 log.go:172] (0xc005f35760) Data frame received for 5 I0526 21:42:33.089760 6 log.go:172] (0xc00274d720) (5) Data frame handling I0526 21:42:33.089791 6 log.go:172] (0xc005f35760) Data frame received for 3 I0526 21:42:33.089806 6 log.go:172] (0xc00274d680) (3) Data frame handling I0526 21:42:33.091037 6 log.go:172] (0xc005f35760) Data frame received for 1 I0526 21:42:33.091085 6 log.go:172] (0xc001a69720) (1) Data frame handling I0526 21:42:33.091114 6 log.go:172] (0xc001a69720) (1) Data frame sent I0526 21:42:33.091282 6 log.go:172] (0xc005f35760) (0xc001a69720) Stream removed, broadcasting: 1 I0526 
21:42:33.091345 6 log.go:172] (0xc005f35760) Go away received I0526 21:42:33.091393 6 log.go:172] (0xc005f35760) (0xc001a69720) Stream removed, broadcasting: 1 I0526 21:42:33.091418 6 log.go:172] (0xc005f35760) (0xc00274d680) Stream removed, broadcasting: 3 I0526 21:42:33.091433 6 log.go:172] (0xc005f35760) (0xc00274d720) Stream removed, broadcasting: 5 May 26 21:42:33.091: INFO: Exec stderr: "" [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:42:33.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-6112" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes pod should support shared volumes between containers [Conformance]","total":278,"completed":112,"skipped":1849,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:42:33.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 26 21:42:33.451: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4124' May 26 21:42:33.804: INFO: stderr: "" May 26 21:42:33.804: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 26 21:42:33.804: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4124' May 26 21:42:33.937: INFO: stderr: "" May 26 21:42:33.937: INFO: stdout: "update-demo-nautilus-kbdjg update-demo-nautilus-r8fj2 " May 26 21:42:33.937: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbdjg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4124' May 26 21:42:34.024: INFO: stderr: "" May 26 21:42:34.024: INFO: stdout: "" May 26 21:42:34.024: INFO: update-demo-nautilus-kbdjg is created but not running May 26 21:42:39.024: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-4124' May 26 21:42:39.137: INFO: stderr: "" May 26 21:42:39.137: INFO: stdout: "update-demo-nautilus-kbdjg update-demo-nautilus-r8fj2 " May 26 21:42:39.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbdjg -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4124' May 26 21:42:39.229: INFO: stderr: "" May 26 21:42:39.229: INFO: stdout: "true" May 26 21:42:39.229: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-kbdjg -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4124' May 26 21:42:39.314: INFO: stderr: "" May 26 21:42:39.314: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 26 21:42:39.314: INFO: validating pod update-demo-nautilus-kbdjg May 26 21:42:39.321: INFO: got data: { "image": "nautilus.jpg" } May 26 21:42:39.321: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 26 21:42:39.321: INFO: update-demo-nautilus-kbdjg is verified up and running May 26 21:42:39.321: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r8fj2 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-4124' May 26 21:42:39.410: INFO: stderr: "" May 26 21:42:39.410: INFO: stdout: "true" May 26 21:42:39.411: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-r8fj2 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-4124' May 26 21:42:39.513: INFO: stderr: "" May 26 21:42:39.513: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 26 21:42:39.513: INFO: validating pod update-demo-nautilus-r8fj2 May 26 21:42:39.530: INFO: got data: { "image": "nautilus.jpg" } May 26 21:42:39.531: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 26 21:42:39.531: INFO: update-demo-nautilus-r8fj2 is verified up and running STEP: using delete to clean up resources May 26 21:42:39.531: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4124' May 26 21:42:39.636: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. 
The resource may continue to run on the cluster indefinitely.\n" May 26 21:42:39.636: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 26 21:42:39.636: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4124' May 26 21:42:39.743: INFO: stderr: "No resources found in kubectl-4124 namespace.\n" May 26 21:42:39.743: INFO: stdout: "" May 26 21:42:39.743: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4124 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 26 21:42:39.842: INFO: stderr: "" May 26 21:42:39.842: INFO: stdout: "update-demo-nautilus-kbdjg\nupdate-demo-nautilus-r8fj2\n" May 26 21:42:40.342: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-4124' May 26 21:42:40.476: INFO: stderr: "No resources found in kubectl-4124 namespace.\n" May 26 21:42:40.476: INFO: stdout: "" May 26 21:42:40.476: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-4124 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 26 21:42:40.605: INFO: stderr: "" May 26 21:42:40.605: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:42:40.605: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4124" for this suite. • [SLOW TEST:7.473 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should create and stop a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should create and stop a replication controller [Conformance]","total":278,"completed":113,"skipped":1866,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:42:40.632: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node with explicit kubelet port using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:42:40.830: INFO: (0) /api/v1/nodes/jerma-worker2:10250/proxy/logs/:
containers/ pods/ (200; 8.000476ms)
May 26 21:42:40.832: INFO: (1) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.635404ms)
May 26 21:42:40.835: INFO: (2) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.687157ms)
May 26 21:42:40.837: INFO: (3) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.141249ms)
May 26 21:42:40.840: INFO: (4) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.246677ms)
May 26 21:42:40.842: INFO: (5) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.129006ms)
May 26 21:42:40.844: INFO: (6) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.065873ms)
May 26 21:42:40.846: INFO: (7) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.216368ms)
May 26 21:42:40.848: INFO: (8) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.336115ms)
May 26 21:42:40.851: INFO: (9) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.458147ms)
May 26 21:42:40.853: INFO: (10) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.345959ms)
May 26 21:42:40.856: INFO: (11) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.525374ms)
May 26 21:42:41.099: INFO: (12) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 243.182209ms)
May 26 21:42:41.103: INFO: (13) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.363935ms)
May 26 21:42:41.107: INFO: (14) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.591974ms)
May 26 21:42:41.112: INFO: (15) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 4.406555ms)
May 26 21:42:41.115: INFO: (16) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 3.740976ms)
May 26 21:42:41.118: INFO: (17) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.742164ms)
May 26 21:42:41.120: INFO: (18) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/ (200; 2.333584ms)
May 26 21:42:41.123: INFO: (19) /api/v1/nodes/jerma-worker2:10250/proxy/logs/: containers/ pods/
(200; 2.5237ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:42:41.123: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-3416" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node with explicit kubelet port using proxy subresource [Conformance]","total":278,"completed":114,"skipped":1895,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:42:41.128: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename tables STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/table_conversion.go:46 [It] should return a 406 for a backend which does not implement metadata [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [sig-api-machinery] Servers with support for Table transformation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:42:41.360: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "tables-525" for this suite. •{"msg":"PASSED [sig-api-machinery] Servers with support for Table transformation should return a 406 for a backend which does not implement metadata [Conformance]","total":278,"completed":115,"skipped":1909,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:42:41.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name projected-secret-test-9155d300-dad1-4d88-bb04-b82f9114e041 STEP: Creating a pod to test consume secrets May 26 21:42:41.484: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-3c5a47c2-4d8c-4eed-92c3-2e4c7435b041" in namespace "projected-7523" to be "success or failure" May 26 21:42:41.500: INFO: Pod "pod-projected-secrets-3c5a47c2-4d8c-4eed-92c3-2e4c7435b041": Phase="Pending", Reason="", readiness=false. Elapsed: 15.978389ms May 26 21:42:43.708: INFO: Pod "pod-projected-secrets-3c5a47c2-4d8c-4eed-92c3-2e4c7435b041": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.224598961s May 26 21:42:45.762: INFO: Pod "pod-projected-secrets-3c5a47c2-4d8c-4eed-92c3-2e4c7435b041": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.278506384s STEP: Saw pod success May 26 21:42:45.762: INFO: Pod "pod-projected-secrets-3c5a47c2-4d8c-4eed-92c3-2e4c7435b041" satisfied condition "success or failure" May 26 21:42:45.787: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-3c5a47c2-4d8c-4eed-92c3-2e4c7435b041 container secret-volume-test: STEP: delete the pod May 26 21:42:45.949: INFO: Waiting for pod pod-projected-secrets-3c5a47c2-4d8c-4eed-92c3-2e4c7435b041 to disappear May 26 21:42:45.994: INFO: Pod pod-projected-secrets-3c5a47c2-4d8c-4eed-92c3-2e4c7435b041 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:42:45.994: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7523" for this suite. •{"msg":"PASSED [sig-storage] Projected secret should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":116,"skipped":1920,"failed":0} SS ------------------------------ [k8s.io] Pods should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:42:46.010: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should get a host IP [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating pod May 26 21:42:50.191: INFO: Pod pod-hostip-59382c80-1b3b-4f47-9b1a-8cfee28889ac has hostIP: 172.17.0.8 [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:42:50.191: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6482" for this suite. 
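The hostIP assertion above needs nothing beyond pod status: once the pod is bound to a node and started, status.hostIP carries that node's address (172.17.0.8 for jerma-worker2 in this run). With a placeholder pod name:

kubectl get pod <pod> -o jsonpath='{.status.hostIP}'                  # empty until the pod is scheduled
kubectl get pod <pod> -o jsonpath='{.status.hostIP} {.status.podIP}'  # node address vs. pod address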
•{"msg":"PASSED [k8s.io] Pods should get a host IP [NodeConformance] [Conformance]","total":278,"completed":117,"skipped":1922,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:42:50.200: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-c687f05b-06ea-43af-bfe9-98f09c6e002b STEP: Creating a pod to test consume secrets May 26 21:42:50.362: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-ab119ade-8dd3-4ed9-9b3e-b38b96b343d8" in namespace "projected-5226" to be "success or failure" May 26 21:42:50.381: INFO: Pod "pod-projected-secrets-ab119ade-8dd3-4ed9-9b3e-b38b96b343d8": Phase="Pending", Reason="", readiness=false. Elapsed: 19.45178ms May 26 21:42:52.386: INFO: Pod "pod-projected-secrets-ab119ade-8dd3-4ed9-9b3e-b38b96b343d8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.023724695s May 26 21:42:54.390: INFO: Pod "pod-projected-secrets-ab119ade-8dd3-4ed9-9b3e-b38b96b343d8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.028316399s STEP: Saw pod success May 26 21:42:54.390: INFO: Pod "pod-projected-secrets-ab119ade-8dd3-4ed9-9b3e-b38b96b343d8" satisfied condition "success or failure" May 26 21:42:54.393: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-ab119ade-8dd3-4ed9-9b3e-b38b96b343d8 container projected-secret-volume-test: STEP: delete the pod May 26 21:42:54.439: INFO: Waiting for pod pod-projected-secrets-ab119ade-8dd3-4ed9-9b3e-b38b96b343d8 to disappear May 26 21:42:54.481: INFO: Pod pod-projected-secrets-ab119ade-8dd3-4ed9-9b3e-b38b96b343d8 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:42:54.481: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-5226" for this suite. 
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":118,"skipped":1985,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:42:54.489: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 21:42:55.159: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 21:42:57.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126175, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126175, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126175, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126175, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 21:42:59.413: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126175, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126175, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126175, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126175, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 21:43:02.211: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate pod and apply defaults after mutation [Conformance] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating pod webhook via the AdmissionRegistration API STEP: create a pod that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:43:02.331: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-4289" for this suite. STEP: Destroying namespace "webhook-4289-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.085 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate pod and apply defaults after mutation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate pod and apply defaults after mutation [Conformance]","total":278,"completed":119,"skipped":1991,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:43:02.574: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 26 21:43:07.226: INFO: Successfully updated pod "annotationupdatea76b57b4-5fb1-4164-bad5-a952fb887bdd" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:43:09.244: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4325" for this suite. 
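The annotation-update case above leans on the kubelet refreshing downward-API volume files after pod metadata changes. A minimal sketch (hypothetical name annotation-demo); the mounted file catches up shortly after the annotate call:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: annotation-demo
  annotations:
    demo: initial
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]   # keep the pod alive so we can exec into it
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: annotations
        fieldRef:
          fieldPath: metadata.annotations
EOF
kubectl annotate pod annotation-demo demo=updated --overwrite
kubectl exec annotation-demo -- cat /etc/podinfo/annotations   # eventually shows demo="updated"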
• [SLOW TEST:6.678 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update annotations on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update annotations on modification [NodeConformance] [Conformance]","total":278,"completed":120,"skipped":2009,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:43:09.252: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-3625 STEP: creating a selector STEP: Creating the service pods in kubernetes May 26 21:43:09.309: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 26 21:43:33.530: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.1.105:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3625 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:43:33.530: INFO: >>> kubeConfig: /root/.kube/config I0526 21:43:33.559306 6 log.go:172] (0xc0021f1ce0) (0xc001d4c820) Create stream I0526 21:43:33.559332 6 log.go:172] (0xc0021f1ce0) (0xc001d4c820) Stream added, broadcasting: 1 I0526 21:43:33.561078 6 log.go:172] (0xc0021f1ce0) Reply frame received for 1 I0526 21:43:33.561427 6 log.go:172] (0xc0021f1ce0) (0xc00274c0a0) Create stream I0526 21:43:33.561469 6 log.go:172] (0xc0021f1ce0) (0xc00274c0a0) Stream added, broadcasting: 3 I0526 21:43:33.563011 6 log.go:172] (0xc0021f1ce0) Reply frame received for 3 I0526 21:43:33.563061 6 log.go:172] (0xc0021f1ce0) (0xc001d4c8c0) Create stream I0526 21:43:33.563077 6 log.go:172] (0xc0021f1ce0) (0xc001d4c8c0) Stream added, broadcasting: 5 I0526 21:43:33.564213 6 log.go:172] (0xc0021f1ce0) Reply frame received for 5 I0526 21:43:33.672044 6 log.go:172] (0xc0021f1ce0) Data frame received for 3 I0526 21:43:33.672091 6 log.go:172] (0xc00274c0a0) (3) Data frame handling I0526 21:43:33.672138 6 log.go:172] (0xc00274c0a0) (3) Data frame sent I0526 21:43:33.672231 6 log.go:172] (0xc0021f1ce0) Data frame received for 3 I0526 21:43:33.672271 6 log.go:172] (0xc00274c0a0) (3) Data frame handling I0526 21:43:33.672312 6 log.go:172] (0xc0021f1ce0) Data frame received for 5 I0526 21:43:33.672334 6 log.go:172] (0xc001d4c8c0) (5) Data frame handling I0526 21:43:33.673825 6 log.go:172] (0xc0021f1ce0) Data frame received for 1 I0526 21:43:33.673860 6 log.go:172] (0xc001d4c820) (1) Data frame handling I0526 21:43:33.673880 6 log.go:172] (0xc001d4c820) (1) Data 
frame sent I0526 21:43:33.673902 6 log.go:172] (0xc0021f1ce0) (0xc001d4c820) Stream removed, broadcasting: 1 I0526 21:43:33.673918 6 log.go:172] (0xc0021f1ce0) Go away received I0526 21:43:33.674050 6 log.go:172] (0xc0021f1ce0) (0xc001d4c820) Stream removed, broadcasting: 1 I0526 21:43:33.674073 6 log.go:172] (0xc0021f1ce0) (0xc00274c0a0) Stream removed, broadcasting: 3 I0526 21:43:33.674087 6 log.go:172] (0xc0021f1ce0) (0xc001d4c8c0) Stream removed, broadcasting: 5 May 26 21:43:33.674: INFO: Found all expected endpoints: [netserver-0] May 26 21:43:33.677: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s --max-time 15 --connect-timeout 1 http://10.244.2.153:8080/hostName | grep -v '^\s*$'] Namespace:pod-network-test-3625 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:43:33.677: INFO: >>> kubeConfig: /root/.kube/config I0526 21:43:33.710612 6 log.go:172] (0xc0023ac420) (0xc00274c820) Create stream I0526 21:43:33.710644 6 log.go:172] (0xc0023ac420) (0xc00274c820) Stream added, broadcasting: 1 I0526 21:43:33.712191 6 log.go:172] (0xc0023ac420) Reply frame received for 1 I0526 21:43:33.712220 6 log.go:172] (0xc0023ac420) (0xc001e2fb80) Create stream I0526 21:43:33.712231 6 log.go:172] (0xc0023ac420) (0xc001e2fb80) Stream added, broadcasting: 3 I0526 21:43:33.712910 6 log.go:172] (0xc0023ac420) Reply frame received for 3 I0526 21:43:33.712950 6 log.go:172] (0xc0023ac420) (0xc0016ef680) Create stream I0526 21:43:33.712960 6 log.go:172] (0xc0023ac420) (0xc0016ef680) Stream added, broadcasting: 5 I0526 21:43:33.713676 6 log.go:172] (0xc0023ac420) Reply frame received for 5 I0526 21:43:33.783159 6 log.go:172] (0xc0023ac420) Data frame received for 5 I0526 21:43:33.783187 6 log.go:172] (0xc0016ef680) (5) Data frame handling I0526 21:43:33.783207 6 log.go:172] (0xc0023ac420) Data frame received for 3 I0526 21:43:33.783224 6 log.go:172] (0xc001e2fb80) (3) Data frame handling I0526 21:43:33.783234 6 log.go:172] (0xc001e2fb80) (3) Data frame sent I0526 21:43:33.783331 6 log.go:172] (0xc0023ac420) Data frame received for 3 I0526 21:43:33.783369 6 log.go:172] (0xc001e2fb80) (3) Data frame handling I0526 21:43:33.784419 6 log.go:172] (0xc0023ac420) Data frame received for 1 I0526 21:43:33.784435 6 log.go:172] (0xc00274c820) (1) Data frame handling I0526 21:43:33.784443 6 log.go:172] (0xc00274c820) (1) Data frame sent I0526 21:43:33.784455 6 log.go:172] (0xc0023ac420) (0xc00274c820) Stream removed, broadcasting: 1 I0526 21:43:33.784484 6 log.go:172] (0xc0023ac420) Go away received I0526 21:43:33.784547 6 log.go:172] (0xc0023ac420) (0xc00274c820) Stream removed, broadcasting: 1 I0526 21:43:33.784562 6 log.go:172] (0xc0023ac420) (0xc001e2fb80) Stream removed, broadcasting: 3 I0526 21:43:33.784568 6 log.go:172] (0xc0023ac420) (0xc0016ef680) Stream removed, broadcasting: 5 May 26 21:43:33.784: INFO: Found all expected endpoints: [netserver-1] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:43:33.784: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-3625" for this suite. 
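The connectivity probe above is a plain HTTP GET against each netserver pod's /hostName endpoint, issued from the host-network helper pod; the curl invocation is verbatim in the log and can be replayed with the pod IPs of a given run:

kubectl exec host-test-container-pod -n pod-network-test-3625 -- \
  curl -g -q -s --max-time 15 --connect-timeout 1 http://<pod-ip>:8080/hostName
# each netserver answers with its own pod name, proving node-to-pod reachability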
• [SLOW TEST:24.539 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for node-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":121,"skipped":2020,"failed":0} SSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:43:33.792: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [BeforeEach] when scheduling a busybox command that always fails in a pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:81 [It] should have an terminated reason [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:43:37.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-5609" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command that always fails in a pod should have an terminated reason [NodeConformance] [Conformance]","total":278,"completed":122,"skipped":2026,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:43:37.919: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename containers STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test override arguments May 26 21:43:38.093: INFO: Waiting up to 5m0s for pod "client-containers-5fe4d8cf-c7b9-42d5-a250-fb4a9b410128" in namespace "containers-6160" to be "success or failure" May 26 21:43:38.111: INFO: Pod "client-containers-5fe4d8cf-c7b9-42d5-a250-fb4a9b410128": Phase="Pending", Reason="", readiness=false. 
Elapsed: 18.22134ms May 26 21:43:40.176: INFO: Pod "client-containers-5fe4d8cf-c7b9-42d5-a250-fb4a9b410128": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082987963s May 26 21:43:42.183: INFO: Pod "client-containers-5fe4d8cf-c7b9-42d5-a250-fb4a9b410128": Phase="Running", Reason="", readiness=true. Elapsed: 4.089317219s May 26 21:43:44.186: INFO: Pod "client-containers-5fe4d8cf-c7b9-42d5-a250-fb4a9b410128": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.093040593s STEP: Saw pod success May 26 21:43:44.186: INFO: Pod "client-containers-5fe4d8cf-c7b9-42d5-a250-fb4a9b410128" satisfied condition "success or failure" May 26 21:43:44.189: INFO: Trying to get logs from node jerma-worker pod client-containers-5fe4d8cf-c7b9-42d5-a250-fb4a9b410128 container test-container: STEP: delete the pod May 26 21:43:44.214: INFO: Waiting for pod client-containers-5fe4d8cf-c7b9-42d5-a250-fb4a9b410128 to disappear May 26 21:43:44.235: INFO: Pod client-containers-5fe4d8cf-c7b9-42d5-a250-fb4a9b410128 no longer exists [AfterEach] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:43:44.235: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "containers-6160" for this suite. • [SLOW TEST:6.338 seconds] [k8s.io] Docker Containers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Docker Containers should be able to override the image's default arguments (docker cmd) [NodeConformance] [Conformance]","total":278,"completed":123,"skipped":2071,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:43:44.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and ensure its status is promptly calculated. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:43:51.309: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-8902" for this suite. • [SLOW TEST:7.061 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and ensure its status is promptly calculated. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and ensure its status is promptly calculated. [Conformance]","total":278,"completed":124,"skipped":2093,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:43:51.319: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-1180 [It] should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating statefulset ss in namespace statefulset-1180 May 26 21:43:51.418: INFO: Found 0 stateful pods, waiting for 1 May 26 21:44:01.423: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: getting scale subresource STEP: updating a scale subresource STEP: verifying the statefulset Spec.Replicas was modified [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 26 21:44:01.447: INFO: Deleting all statefulset in ns statefulset-1180 May 26 21:44:01.451: INFO: Scaling statefulset ss to 0 May 26 21:44:21.499: INFO: Waiting for statefulset status.replicas updated to 0 May 26 21:44:21.502: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:44:21.516: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-1180" for this suite. 
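For reference, the "getting/updating a scale subresource" steps above go through the StatefulSet's /scale endpoint, which reads and writes an autoscaling/v1 Scale object rather than the StatefulSet itself. A minimal sketch of that payload (the replica count here is illustrative, not the value the test used):

apiVersion: autoscaling/v1
kind: Scale
metadata:
  name: ss                     # the statefulset created by this test
  namespace: statefulset-1180
spec:
  replicas: 2                  # spec.replicas is the only field writable via /scale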
• [SLOW TEST:30.204 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have a working scale subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should have a working scale subresource [Conformance]","total":278,"completed":125,"skipped":2099,"failed":0} SSSS ------------------------------ [sig-network] Service endpoints latency should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:44:21.523: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename svc-latency STEP: Waiting for a default service account to be provisioned in namespace [It] should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:44:21.569: INFO: >>> kubeConfig: /root/.kube/config STEP: creating replication controller svc-latency-rc in namespace svc-latency-2467 I0526 21:44:21.595603 6 runners.go:189] Created replication controller with name: svc-latency-rc, namespace: svc-latency-2467, replica count: 1 I0526 21:44:22.646144 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 21:44:23.646378 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 21:44:24.646684 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 0 running, 1 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 21:44:25.646933 6 runners.go:189] svc-latency-rc Pods: 1 out of 1 created, 1 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 26 21:44:25.783: INFO: Created: latency-svc-xjd5v May 26 21:44:25.834: INFO: Got endpoints: latency-svc-xjd5v [87.066649ms] May 26 21:44:25.872: INFO: Created: latency-svc-pk5tb May 26 21:44:25.915: INFO: Got endpoints: latency-svc-pk5tb [81.158828ms] May 26 21:44:26.842: INFO: Created: latency-svc-jhzkm May 26 21:44:26.891: INFO: Got endpoints: latency-svc-jhzkm [1.056149399s] May 26 21:44:26.927: INFO: Created: latency-svc-82ld4 May 26 21:44:26.998: INFO: Got endpoints: latency-svc-82ld4 [1.164564818s] May 26 21:44:27.014: INFO: Created: latency-svc-pmqfd May 26 21:44:27.041: INFO: Created: latency-svc-hqbgl May 26 21:44:27.041: INFO: Got endpoints: latency-svc-pmqfd [1.207390159s] May 26 21:44:27.056: INFO: Got endpoints: latency-svc-hqbgl [1.222332848s] May 26 21:44:27.142: INFO: Created: latency-svc-9b7cw May 26 21:44:27.171: INFO: Got endpoints: latency-svc-9b7cw [1.337305036s] May 26 21:44:27.198: INFO: Created: latency-svc-dk77v May 26 21:44:27.214: INFO: Got endpoints: latency-svc-dk77v [1.379708032s] May 26 21:44:27.270: INFO: Created: latency-svc-9l7xs May 26 21:44:27.291: INFO: Got endpoints: latency-svc-9l7xs 
[1.457090493s] May 26 21:44:27.323: INFO: Created: latency-svc-glp65 May 26 21:44:27.337: INFO: Got endpoints: latency-svc-glp65 [1.502959902s] May 26 21:44:27.359: INFO: Created: latency-svc-cdb7d May 26 21:44:27.410: INFO: Got endpoints: latency-svc-cdb7d [1.57628994s] May 26 21:44:27.431: INFO: Created: latency-svc-twbmn May 26 21:44:27.445: INFO: Got endpoints: latency-svc-twbmn [1.611227606s] May 26 21:44:27.485: INFO: Created: latency-svc-s4gqm May 26 21:44:27.505: INFO: Got endpoints: latency-svc-s4gqm [1.671337905s] May 26 21:44:27.560: INFO: Created: latency-svc-6g9rb May 26 21:44:27.566: INFO: Got endpoints: latency-svc-6g9rb [1.732242615s] May 26 21:44:27.592: INFO: Created: latency-svc-fb6w9 May 26 21:44:27.609: INFO: Got endpoints: latency-svc-fb6w9 [1.774184787s] May 26 21:44:27.635: INFO: Created: latency-svc-cbr2m May 26 21:44:27.651: INFO: Got endpoints: latency-svc-cbr2m [1.816875453s] May 26 21:44:27.710: INFO: Created: latency-svc-n74b7 May 26 21:44:27.717: INFO: Got endpoints: latency-svc-n74b7 [1.801993729s] May 26 21:44:27.750: INFO: Created: latency-svc-2295s May 26 21:44:27.763: INFO: Got endpoints: latency-svc-2295s [872.223342ms] May 26 21:44:27.785: INFO: Created: latency-svc-7zqz4 May 26 21:44:27.848: INFO: Got endpoints: latency-svc-7zqz4 [849.369368ms] May 26 21:44:27.862: INFO: Created: latency-svc-n2zfr May 26 21:44:27.872: INFO: Got endpoints: latency-svc-n2zfr [830.130184ms] May 26 21:44:27.923: INFO: Created: latency-svc-j4hng May 26 21:44:27.938: INFO: Got endpoints: latency-svc-j4hng [881.953659ms] May 26 21:44:27.986: INFO: Created: latency-svc-tfg54 May 26 21:44:28.006: INFO: Got endpoints: latency-svc-tfg54 [834.985146ms] May 26 21:44:28.061: INFO: Created: latency-svc-gsfj6 May 26 21:44:28.117: INFO: Got endpoints: latency-svc-gsfj6 [903.707508ms] May 26 21:44:28.162: INFO: Created: latency-svc-zkkpr May 26 21:44:28.179: INFO: Got endpoints: latency-svc-zkkpr [888.108321ms] May 26 21:44:28.210: INFO: Created: latency-svc-8lmfv May 26 21:44:28.267: INFO: Got endpoints: latency-svc-8lmfv [929.971959ms] May 26 21:44:28.296: INFO: Created: latency-svc-wnbnf May 26 21:44:28.313: INFO: Got endpoints: latency-svc-wnbnf [902.1411ms] May 26 21:44:28.342: INFO: Created: latency-svc-vr5zb May 26 21:44:28.367: INFO: Got endpoints: latency-svc-vr5zb [921.37211ms] May 26 21:44:28.421: INFO: Created: latency-svc-xvspn May 26 21:44:28.433: INFO: Got endpoints: latency-svc-xvspn [927.105932ms] May 26 21:44:28.456: INFO: Created: latency-svc-jtwxc May 26 21:44:28.469: INFO: Got endpoints: latency-svc-jtwxc [902.827621ms] May 26 21:44:28.499: INFO: Created: latency-svc-545n2 May 26 21:44:28.548: INFO: Got endpoints: latency-svc-545n2 [939.490151ms] May 26 21:44:28.589: INFO: Created: latency-svc-b26wd May 26 21:44:28.603: INFO: Got endpoints: latency-svc-b26wd [951.683778ms] May 26 21:44:28.698: INFO: Created: latency-svc-lk52s May 26 21:44:28.704: INFO: Got endpoints: latency-svc-lk52s [986.906652ms] May 26 21:44:28.726: INFO: Created: latency-svc-48v7f May 26 21:44:28.735: INFO: Got endpoints: latency-svc-48v7f [971.423027ms] May 26 21:44:28.762: INFO: Created: latency-svc-lp9z6 May 26 21:44:28.777: INFO: Got endpoints: latency-svc-lp9z6 [929.496847ms] May 26 21:44:28.856: INFO: Created: latency-svc-dsbpw May 26 21:44:28.869: INFO: Got endpoints: latency-svc-dsbpw [997.508421ms] May 26 21:44:28.918: INFO: Created: latency-svc-ldtfq May 26 21:44:28.922: INFO: Got endpoints: latency-svc-ldtfq [983.435912ms] May 26 21:44:29.004: INFO: Created: latency-svc-2qlpd May 26 
21:44:29.008: INFO: Got endpoints: latency-svc-2qlpd [1.001384288s] May 26 21:44:29.044: INFO: Created: latency-svc-xmk8m May 26 21:44:29.066: INFO: Got endpoints: latency-svc-xmk8m [949.003741ms] May 26 21:44:29.087: INFO: Created: latency-svc-vwf6m May 26 21:44:29.135: INFO: Got endpoints: latency-svc-vwf6m [955.910288ms] May 26 21:44:29.182: INFO: Created: latency-svc-2fxlx May 26 21:44:29.212: INFO: Got endpoints: latency-svc-2fxlx [944.854273ms] May 26 21:44:29.308: INFO: Created: latency-svc-wz2c4 May 26 21:44:29.326: INFO: Got endpoints: latency-svc-wz2c4 [1.013038339s] May 26 21:44:29.362: INFO: Created: latency-svc-mbjzd May 26 21:44:29.422: INFO: Got endpoints: latency-svc-mbjzd [1.055497202s] May 26 21:44:29.451: INFO: Created: latency-svc-8fpr6 May 26 21:44:29.506: INFO: Got endpoints: latency-svc-8fpr6 [1.073206853s] May 26 21:44:29.579: INFO: Created: latency-svc-tvth9 May 26 21:44:29.591: INFO: Got endpoints: latency-svc-tvth9 [1.121484947s] May 26 21:44:29.614: INFO: Created: latency-svc-jvgtm May 26 21:44:29.634: INFO: Got endpoints: latency-svc-jvgtm [1.085188048s] May 26 21:44:29.674: INFO: Created: latency-svc-6rw7n May 26 21:44:29.721: INFO: Got endpoints: latency-svc-6rw7n [1.11885986s] May 26 21:44:29.757: INFO: Created: latency-svc-7h9sw May 26 21:44:29.766: INFO: Got endpoints: latency-svc-7h9sw [1.061713522s] May 26 21:44:29.787: INFO: Created: latency-svc-h5hlc May 26 21:44:29.796: INFO: Got endpoints: latency-svc-h5hlc [1.061756563s] May 26 21:44:29.878: INFO: Created: latency-svc-7hwm5 May 26 21:44:29.881: INFO: Got endpoints: latency-svc-7hwm5 [1.103434473s] May 26 21:44:29.932: INFO: Created: latency-svc-s28vp May 26 21:44:29.947: INFO: Got endpoints: latency-svc-s28vp [1.077944647s] May 26 21:44:30.022: INFO: Created: latency-svc-558gw May 26 21:44:30.065: INFO: Got endpoints: latency-svc-558gw [1.143372071s] May 26 21:44:30.081: INFO: Created: latency-svc-dw987 May 26 21:44:30.099: INFO: Got endpoints: latency-svc-dw987 [1.091683333s] May 26 21:44:30.159: INFO: Created: latency-svc-6qrbc May 26 21:44:30.178: INFO: Got endpoints: latency-svc-6qrbc [1.111167091s] May 26 21:44:30.214: INFO: Created: latency-svc-74hpl May 26 21:44:30.227: INFO: Got endpoints: latency-svc-74hpl [1.091205926s] May 26 21:44:30.255: INFO: Created: latency-svc-vp8bh May 26 21:44:30.296: INFO: Got endpoints: latency-svc-vp8bh [1.084403382s] May 26 21:44:30.303: INFO: Created: latency-svc-4zvq5 May 26 21:44:30.357: INFO: Got endpoints: latency-svc-4zvq5 [1.031308013s] May 26 21:44:30.447: INFO: Created: latency-svc-845f4 May 26 21:44:30.454: INFO: Got endpoints: latency-svc-845f4 [1.032061717s] May 26 21:44:30.491: INFO: Created: latency-svc-99h8n May 26 21:44:30.509: INFO: Got endpoints: latency-svc-99h8n [1.003363294s] May 26 21:44:30.543: INFO: Created: latency-svc-986xk May 26 21:44:30.596: INFO: Got endpoints: latency-svc-986xk [1.005049314s] May 26 21:44:30.639: INFO: Created: latency-svc-cm2ns May 26 21:44:30.654: INFO: Got endpoints: latency-svc-cm2ns [1.02054522s] May 26 21:44:30.693: INFO: Created: latency-svc-m4lnd May 26 21:44:30.752: INFO: Got endpoints: latency-svc-m4lnd [1.030451384s] May 26 21:44:30.778: INFO: Created: latency-svc-n7tm7 May 26 21:44:30.787: INFO: Got endpoints: latency-svc-n7tm7 [1.020781106s] May 26 21:44:30.819: INFO: Created: latency-svc-n6cdz May 26 21:44:30.835: INFO: Got endpoints: latency-svc-n6cdz [1.038345181s] May 26 21:44:30.896: INFO: Created: latency-svc-xjq5k May 26 21:44:30.921: INFO: Got endpoints: latency-svc-xjq5k [1.040505399s] May 
26 21:44:31.040: INFO: Created: latency-svc-xhbc4 May 26 21:44:31.043: INFO: Got endpoints: latency-svc-xhbc4 [1.095754831s] May 26 21:44:31.108: INFO: Created: latency-svc-vhqcv May 26 21:44:31.126: INFO: Got endpoints: latency-svc-vhqcv [1.060250092s] May 26 21:44:31.215: INFO: Created: latency-svc-c4dps May 26 21:44:31.257: INFO: Got endpoints: latency-svc-c4dps [1.157725624s] May 26 21:44:31.299: INFO: Created: latency-svc-bvbrh May 26 21:44:31.339: INFO: Got endpoints: latency-svc-bvbrh [1.160801533s] May 26 21:44:31.360: INFO: Created: latency-svc-g94kb May 26 21:44:31.378: INFO: Got endpoints: latency-svc-g94kb [1.151141534s] May 26 21:44:31.407: INFO: Created: latency-svc-487vp May 26 21:44:31.494: INFO: Got endpoints: latency-svc-487vp [1.197686658s] May 26 21:44:31.534: INFO: Created: latency-svc-qcg52 May 26 21:44:31.551: INFO: Got endpoints: latency-svc-qcg52 [1.194184141s] May 26 21:44:31.581: INFO: Created: latency-svc-r47nb May 26 21:44:31.594: INFO: Got endpoints: latency-svc-r47nb [1.139276378s] May 26 21:44:31.656: INFO: Created: latency-svc-hzsd5 May 26 21:44:31.666: INFO: Got endpoints: latency-svc-hzsd5 [1.156782323s] May 26 21:44:31.688: INFO: Created: latency-svc-qc95v May 26 21:44:31.727: INFO: Got endpoints: latency-svc-qc95v [1.130696099s] May 26 21:44:31.824: INFO: Created: latency-svc-q89pz May 26 21:44:31.827: INFO: Got endpoints: latency-svc-q89pz [1.172604634s] May 26 21:44:31.887: INFO: Created: latency-svc-dkxgt May 26 21:44:31.908: INFO: Got endpoints: latency-svc-dkxgt [1.155537775s] May 26 21:44:31.974: INFO: Created: latency-svc-tw2mn May 26 21:44:32.000: INFO: Got endpoints: latency-svc-tw2mn [1.213313783s] May 26 21:44:32.001: INFO: Created: latency-svc-dxklz May 26 21:44:32.016: INFO: Got endpoints: latency-svc-dxklz [1.181089473s] May 26 21:44:32.049: INFO: Created: latency-svc-552cd May 26 21:44:32.064: INFO: Got endpoints: latency-svc-552cd [1.142282274s] May 26 21:44:32.112: INFO: Created: latency-svc-5src5 May 26 21:44:32.130: INFO: Got endpoints: latency-svc-5src5 [1.087437045s] May 26 21:44:32.169: INFO: Created: latency-svc-dxkft May 26 21:44:32.179: INFO: Got endpoints: latency-svc-dxkft [1.052845165s] May 26 21:44:32.198: INFO: Created: latency-svc-hzxn8 May 26 21:44:32.260: INFO: Got endpoints: latency-svc-hzxn8 [1.003115916s] May 26 21:44:32.288: INFO: Created: latency-svc-297qh May 26 21:44:32.306: INFO: Got endpoints: latency-svc-297qh [966.891826ms] May 26 21:44:32.331: INFO: Created: latency-svc-tzmb4 May 26 21:44:32.348: INFO: Got endpoints: latency-svc-tzmb4 [970.04794ms] May 26 21:44:32.429: INFO: Created: latency-svc-76tsk May 26 21:44:32.438: INFO: Got endpoints: latency-svc-76tsk [944.331232ms] May 26 21:44:32.468: INFO: Created: latency-svc-5sczw May 26 21:44:32.493: INFO: Got endpoints: latency-svc-5sczw [941.544227ms] May 26 21:44:32.522: INFO: Created: latency-svc-gt4xd May 26 21:44:32.584: INFO: Got endpoints: latency-svc-gt4xd [990.487468ms] May 26 21:44:32.594: INFO: Created: latency-svc-f9vzd May 26 21:44:32.648: INFO: Got endpoints: latency-svc-f9vzd [982.25832ms] May 26 21:44:32.723: INFO: Created: latency-svc-ws9sp May 26 21:44:32.728: INFO: Got endpoints: latency-svc-ws9sp [1.001485043s] May 26 21:44:32.762: INFO: Created: latency-svc-58hgs May 26 21:44:32.776: INFO: Got endpoints: latency-svc-58hgs [949.464797ms] May 26 21:44:32.798: INFO: Created: latency-svc-j25t5 May 26 21:44:32.813: INFO: Got endpoints: latency-svc-j25t5 [905.629332ms] May 26 21:44:32.871: INFO: Created: latency-svc-j2pdq May 26 21:44:32.875: 
INFO: Got endpoints: latency-svc-j2pdq [874.766234ms] May 26 21:44:32.931: INFO: Created: latency-svc-srqxg May 26 21:44:32.951: INFO: Got endpoints: latency-svc-srqxg [935.394444ms] May 26 21:44:33.028: INFO: Created: latency-svc-pwnrs May 26 21:44:33.036: INFO: Got endpoints: latency-svc-pwnrs [972.203883ms] May 26 21:44:33.062: INFO: Created: latency-svc-b4wg8 May 26 21:44:33.078: INFO: Got endpoints: latency-svc-b4wg8 [947.521925ms] May 26 21:44:33.116: INFO: Created: latency-svc-nznfv May 26 21:44:33.177: INFO: Got endpoints: latency-svc-nznfv [998.362674ms] May 26 21:44:33.205: INFO: Created: latency-svc-4khgf May 26 21:44:33.223: INFO: Got endpoints: latency-svc-4khgf [962.718269ms] May 26 21:44:33.248: INFO: Created: latency-svc-cn75f May 26 21:44:33.265: INFO: Got endpoints: latency-svc-cn75f [959.705336ms] May 26 21:44:33.326: INFO: Created: latency-svc-nxlc6 May 26 21:44:33.350: INFO: Got endpoints: latency-svc-nxlc6 [1.001827895s] May 26 21:44:33.380: INFO: Created: latency-svc-zrtlf May 26 21:44:33.404: INFO: Got endpoints: latency-svc-zrtlf [965.797616ms] May 26 21:44:33.464: INFO: Created: latency-svc-n2qz9 May 26 21:44:33.470: INFO: Got endpoints: latency-svc-n2qz9 [977.254352ms] May 26 21:44:33.505: INFO: Created: latency-svc-8c8kf May 26 21:44:33.525: INFO: Got endpoints: latency-svc-8c8kf [940.661736ms] May 26 21:44:33.547: INFO: Created: latency-svc-h6x9l May 26 21:44:33.561: INFO: Got endpoints: latency-svc-h6x9l [912.990337ms] May 26 21:44:33.620: INFO: Created: latency-svc-k49ng May 26 21:44:33.627: INFO: Got endpoints: latency-svc-k49ng [898.871361ms] May 26 21:44:33.680: INFO: Created: latency-svc-nzn52 May 26 21:44:33.695: INFO: Got endpoints: latency-svc-nzn52 [918.551796ms] May 26 21:44:33.776: INFO: Created: latency-svc-hnrdl May 26 21:44:33.791: INFO: Got endpoints: latency-svc-hnrdl [978.098793ms] May 26 21:44:33.860: INFO: Created: latency-svc-zqx58 May 26 21:44:33.875: INFO: Got endpoints: latency-svc-zqx58 [1.000630714s] May 26 21:44:33.935: INFO: Created: latency-svc-zbgm8 May 26 21:44:33.942: INFO: Got endpoints: latency-svc-zbgm8 [990.479544ms] May 26 21:44:33.998: INFO: Created: latency-svc-kkb9w May 26 21:44:34.015: INFO: Got endpoints: latency-svc-kkb9w [979.282789ms] May 26 21:44:34.076: INFO: Created: latency-svc-sg2jw May 26 21:44:34.099: INFO: Got endpoints: latency-svc-sg2jw [1.021397515s] May 26 21:44:34.100: INFO: Created: latency-svc-dsgd2 May 26 21:44:34.117: INFO: Got endpoints: latency-svc-dsgd2 [940.289541ms] May 26 21:44:34.147: INFO: Created: latency-svc-wfnz4 May 26 21:44:34.166: INFO: Got endpoints: latency-svc-wfnz4 [942.791687ms] May 26 21:44:34.231: INFO: Created: latency-svc-xnqgz May 26 21:44:34.257: INFO: Created: latency-svc-bl9jq May 26 21:44:34.257: INFO: Got endpoints: latency-svc-xnqgz [992.017374ms] May 26 21:44:34.275: INFO: Got endpoints: latency-svc-bl9jq [925.025032ms] May 26 21:44:34.393: INFO: Created: latency-svc-7dn49 May 26 21:44:34.435: INFO: Got endpoints: latency-svc-7dn49 [1.030971401s] May 26 21:44:34.436: INFO: Created: latency-svc-kqbn7 May 26 21:44:34.449: INFO: Got endpoints: latency-svc-kqbn7 [979.010624ms] May 26 21:44:34.478: INFO: Created: latency-svc-8282x May 26 21:44:34.494: INFO: Got endpoints: latency-svc-8282x [969.001982ms] May 26 21:44:34.537: INFO: Created: latency-svc-q9xrg May 26 21:44:34.567: INFO: Got endpoints: latency-svc-q9xrg [1.005912606s] May 26 21:44:34.604: INFO: Created: latency-svc-v8lt7 May 26 21:44:34.613: INFO: Got endpoints: latency-svc-v8lt7 [985.793241ms] May 26 
21:44:34.656: INFO: Created: latency-svc-bwkcj May 26 21:44:34.661: INFO: Got endpoints: latency-svc-bwkcj [966.211023ms] May 26 21:44:34.687: INFO: Created: latency-svc-98cbz May 26 21:44:34.717: INFO: Got endpoints: latency-svc-98cbz [926.004153ms] May 26 21:44:34.749: INFO: Created: latency-svc-fkk6h May 26 21:44:34.788: INFO: Got endpoints: latency-svc-fkk6h [912.18364ms] May 26 21:44:34.801: INFO: Created: latency-svc-7r9vw May 26 21:44:34.812: INFO: Got endpoints: latency-svc-7r9vw [869.876466ms] May 26 21:44:34.838: INFO: Created: latency-svc-rp78c May 26 21:44:34.863: INFO: Got endpoints: latency-svc-rp78c [847.428355ms] May 26 21:44:34.931: INFO: Created: latency-svc-tsg5n May 26 21:44:34.982: INFO: Got endpoints: latency-svc-tsg5n [882.189864ms] May 26 21:44:34.982: INFO: Created: latency-svc-bdpmz May 26 21:44:35.000: INFO: Got endpoints: latency-svc-bdpmz [882.348236ms] May 26 21:44:35.023: INFO: Created: latency-svc-ll8cw May 26 21:44:35.075: INFO: Got endpoints: latency-svc-ll8cw [909.051072ms] May 26 21:44:35.096: INFO: Created: latency-svc-rfvsj May 26 21:44:35.115: INFO: Got endpoints: latency-svc-rfvsj [857.089973ms] May 26 21:44:35.137: INFO: Created: latency-svc-7zrt7 May 26 21:44:35.169: INFO: Got endpoints: latency-svc-7zrt7 [894.533967ms] May 26 21:44:35.274: INFO: Created: latency-svc-mbt84 May 26 21:44:35.293: INFO: Got endpoints: latency-svc-mbt84 [857.519438ms] May 26 21:44:35.330: INFO: Created: latency-svc-rwnh9 May 26 21:44:35.356: INFO: Got endpoints: latency-svc-rwnh9 [906.609086ms] May 26 21:44:35.453: INFO: Created: latency-svc-2kkbm May 26 21:44:35.456: INFO: Got endpoints: latency-svc-2kkbm [961.659129ms] May 26 21:44:35.551: INFO: Created: latency-svc-hzf9p May 26 21:44:35.584: INFO: Got endpoints: latency-svc-hzf9p [1.016795152s] May 26 21:44:35.611: INFO: Created: latency-svc-hxg57 May 26 21:44:35.621: INFO: Got endpoints: latency-svc-hxg57 [1.007982299s] May 26 21:44:35.659: INFO: Created: latency-svc-78bz8 May 26 21:44:35.681: INFO: Got endpoints: latency-svc-78bz8 [1.020035226s] May 26 21:44:35.773: INFO: Created: latency-svc-rzc6f May 26 21:44:35.802: INFO: Got endpoints: latency-svc-rzc6f [1.084122064s] May 26 21:44:35.827: INFO: Created: latency-svc-tdcmt May 26 21:44:35.844: INFO: Got endpoints: latency-svc-tdcmt [1.056345328s] May 26 21:44:35.902: INFO: Created: latency-svc-zzzrr May 26 21:44:35.917: INFO: Got endpoints: latency-svc-zzzrr [1.105546164s] May 26 21:44:35.947: INFO: Created: latency-svc-pplh4 May 26 21:44:35.965: INFO: Got endpoints: latency-svc-pplh4 [1.101977761s] May 26 21:44:35.989: INFO: Created: latency-svc-7rjh2 May 26 21:44:36.039: INFO: Got endpoints: latency-svc-7rjh2 [1.057410673s] May 26 21:44:36.060: INFO: Created: latency-svc-4dcb5 May 26 21:44:36.091: INFO: Got endpoints: latency-svc-4dcb5 [1.091082076s] May 26 21:44:36.121: INFO: Created: latency-svc-ksrzq May 26 21:44:36.165: INFO: Got endpoints: latency-svc-ksrzq [1.089908526s] May 26 21:44:36.205: INFO: Created: latency-svc-nfv8l May 26 21:44:36.218: INFO: Got endpoints: latency-svc-nfv8l [1.103708564s] May 26 21:44:36.241: INFO: Created: latency-svc-wphnw May 26 21:44:36.303: INFO: Got endpoints: latency-svc-wphnw [1.133165207s] May 26 21:44:36.330: INFO: Created: latency-svc-ffz7z May 26 21:44:36.339: INFO: Got endpoints: latency-svc-ffz7z [1.04604712s] May 26 21:44:36.361: INFO: Created: latency-svc-7dwhg May 26 21:44:36.370: INFO: Got endpoints: latency-svc-7dwhg [1.013733583s] May 26 21:44:36.476: INFO: Created: latency-svc-hbrfw May 26 21:44:36.479: INFO: 
Got endpoints: latency-svc-hbrfw [1.022952553s] May 26 21:44:36.529: INFO: Created: latency-svc-qwm6h May 26 21:44:36.546: INFO: Got endpoints: latency-svc-qwm6h [961.268011ms] May 26 21:44:36.571: INFO: Created: latency-svc-ctq78 May 26 21:44:36.632: INFO: Got endpoints: latency-svc-ctq78 [1.011365829s] May 26 21:44:36.655: INFO: Created: latency-svc-hmjwk May 26 21:44:36.675: INFO: Got endpoints: latency-svc-hmjwk [993.316718ms] May 26 21:44:36.696: INFO: Created: latency-svc-5q4bq May 26 21:44:36.710: INFO: Got endpoints: latency-svc-5q4bq [908.500157ms] May 26 21:44:36.770: INFO: Created: latency-svc-mrmts May 26 21:44:36.773: INFO: Got endpoints: latency-svc-mrmts [929.106754ms] May 26 21:44:36.926: INFO: Created: latency-svc-nkc8n May 26 21:44:36.928: INFO: Got endpoints: latency-svc-nkc8n [1.010262329s] May 26 21:44:36.984: INFO: Created: latency-svc-7ddnt May 26 21:44:36.999: INFO: Got endpoints: latency-svc-7ddnt [1.033633564s] May 26 21:44:37.075: INFO: Created: latency-svc-r5tf8 May 26 21:44:37.105: INFO: Got endpoints: latency-svc-r5tf8 [1.066061481s] May 26 21:44:37.106: INFO: Created: latency-svc-spzd6 May 26 21:44:37.129: INFO: Got endpoints: latency-svc-spzd6 [1.038520703s] May 26 21:44:37.159: INFO: Created: latency-svc-bhkst May 26 21:44:37.168: INFO: Got endpoints: latency-svc-bhkst [1.00238806s] May 26 21:44:37.261: INFO: Created: latency-svc-hpkdp May 26 21:44:37.263: INFO: Got endpoints: latency-svc-hpkdp [1.045177035s] May 26 21:44:37.321: INFO: Created: latency-svc-8bc45 May 26 21:44:37.330: INFO: Got endpoints: latency-svc-8bc45 [1.027617684s] May 26 21:44:37.357: INFO: Created: latency-svc-w2l95 May 26 21:44:37.411: INFO: Got endpoints: latency-svc-w2l95 [1.071677138s] May 26 21:44:37.422: INFO: Created: latency-svc-dmtz8 May 26 21:44:37.439: INFO: Got endpoints: latency-svc-dmtz8 [1.069145561s] May 26 21:44:37.470: INFO: Created: latency-svc-txfkh May 26 21:44:37.488: INFO: Got endpoints: latency-svc-txfkh [1.008881845s] May 26 21:44:37.548: INFO: Created: latency-svc-9jxjb May 26 21:44:37.554: INFO: Got endpoints: latency-svc-9jxjb [1.008490011s] May 26 21:44:37.585: INFO: Created: latency-svc-plhqp May 26 21:44:37.597: INFO: Got endpoints: latency-svc-plhqp [964.777125ms] May 26 21:44:37.627: INFO: Created: latency-svc-9vjb9 May 26 21:44:37.639: INFO: Got endpoints: latency-svc-9vjb9 [963.989766ms] May 26 21:44:37.704: INFO: Created: latency-svc-vgnn7 May 26 21:44:37.713: INFO: Got endpoints: latency-svc-vgnn7 [1.002913127s] May 26 21:44:37.741: INFO: Created: latency-svc-6288m May 26 21:44:37.766: INFO: Got endpoints: latency-svc-6288m [992.720861ms] May 26 21:44:37.801: INFO: Created: latency-svc-s4zj6 May 26 21:44:37.848: INFO: Got endpoints: latency-svc-s4zj6 [920.115215ms] May 26 21:44:37.854: INFO: Created: latency-svc-p2jq7 May 26 21:44:37.868: INFO: Got endpoints: latency-svc-p2jq7 [869.537616ms] May 26 21:44:37.891: INFO: Created: latency-svc-jwlw4 May 26 21:44:37.899: INFO: Got endpoints: latency-svc-jwlw4 [793.877312ms] May 26 21:44:37.920: INFO: Created: latency-svc-z47x8 May 26 21:44:37.935: INFO: Got endpoints: latency-svc-z47x8 [805.942668ms] May 26 21:44:37.991: INFO: Created: latency-svc-48txf May 26 21:44:37.995: INFO: Got endpoints: latency-svc-48txf [827.445698ms] May 26 21:44:38.034: INFO: Created: latency-svc-d8vzr May 26 21:44:38.064: INFO: Got endpoints: latency-svc-d8vzr [800.769132ms] May 26 21:44:38.118: INFO: Created: latency-svc-h9n2f May 26 21:44:38.134: INFO: Got endpoints: latency-svc-h9n2f [804.183576ms] May 26 21:44:38.160: 
INFO: Created: latency-svc-764xn May 26 21:44:38.177: INFO: Got endpoints: latency-svc-764xn [766.182743ms] May 26 21:44:38.262: INFO: Created: latency-svc-25dcc May 26 21:44:38.263: INFO: Got endpoints: latency-svc-25dcc [823.949331ms] May 26 21:44:38.329: INFO: Created: latency-svc-qq4gk May 26 21:44:38.358: INFO: Got endpoints: latency-svc-qq4gk [870.176834ms] May 26 21:44:38.429: INFO: Created: latency-svc-swphs May 26 21:44:38.436: INFO: Got endpoints: latency-svc-swphs [881.957394ms] May 26 21:44:38.479: INFO: Created: latency-svc-jcjct May 26 21:44:38.527: INFO: Got endpoints: latency-svc-jcjct [929.394799ms] May 26 21:44:38.596: INFO: Created: latency-svc-8gw6n May 26 21:44:38.617: INFO: Got endpoints: latency-svc-8gw6n [978.104189ms] May 26 21:44:38.740: INFO: Created: latency-svc-krrxt May 26 21:44:38.761: INFO: Got endpoints: latency-svc-krrxt [1.04795712s] May 26 21:44:38.791: INFO: Created: latency-svc-6zzns May 26 21:44:38.803: INFO: Got endpoints: latency-svc-6zzns [1.037243717s] May 26 21:44:38.833: INFO: Created: latency-svc-llj5z May 26 21:44:38.883: INFO: Got endpoints: latency-svc-llj5z [1.035566906s] May 26 21:44:38.905: INFO: Created: latency-svc-fccn9 May 26 21:44:38.948: INFO: Got endpoints: latency-svc-fccn9 [1.07999354s] May 26 21:44:39.034: INFO: Created: latency-svc-98p2v May 26 21:44:39.044: INFO: Got endpoints: latency-svc-98p2v [1.145175833s] May 26 21:44:39.072: INFO: Created: latency-svc-bdgt6 May 26 21:44:39.105: INFO: Got endpoints: latency-svc-bdgt6 [1.169627395s] May 26 21:44:39.195: INFO: Created: latency-svc-gfdvg May 26 21:44:39.201: INFO: Got endpoints: latency-svc-gfdvg [1.205724397s] May 26 21:44:39.228: INFO: Created: latency-svc-b2jm8 May 26 21:44:39.244: INFO: Got endpoints: latency-svc-b2jm8 [1.179280938s] May 26 21:44:39.276: INFO: Created: latency-svc-rt84w May 26 21:44:39.370: INFO: Got endpoints: latency-svc-rt84w [1.235042619s] May 26 21:44:39.372: INFO: Created: latency-svc-xp6td May 26 21:44:39.377: INFO: Got endpoints: latency-svc-xp6td [1.199530272s] May 26 21:44:39.397: INFO: Created: latency-svc-zvwwp May 26 21:44:39.414: INFO: Got endpoints: latency-svc-zvwwp [1.150514919s] May 26 21:44:39.432: INFO: Created: latency-svc-n9fx9 May 26 21:44:39.449: INFO: Got endpoints: latency-svc-n9fx9 [1.091528251s] May 26 21:44:39.511: INFO: Created: latency-svc-l7zjj May 26 21:44:39.534: INFO: Got endpoints: latency-svc-l7zjj [1.097907473s] May 26 21:44:39.565: INFO: Created: latency-svc-nhsdn May 26 21:44:39.582: INFO: Got endpoints: latency-svc-nhsdn [1.055119039s] May 26 21:44:39.662: INFO: Created: latency-svc-jrkwt May 26 21:44:39.665: INFO: Got endpoints: latency-svc-jrkwt [1.048112564s] May 26 21:44:39.750: INFO: Created: latency-svc-fnmvv May 26 21:44:39.841: INFO: Got endpoints: latency-svc-fnmvv [1.080154321s] May 26 21:44:39.844: INFO: Created: latency-svc-2tbjc May 26 21:44:39.865: INFO: Got endpoints: latency-svc-2tbjc [1.06195224s] May 26 21:44:39.919: INFO: Created: latency-svc-g4lzt May 26 21:44:40.063: INFO: Got endpoints: latency-svc-g4lzt [1.179676099s] May 26 21:44:40.067: INFO: Created: latency-svc-8dfrr May 26 21:44:40.118: INFO: Got endpoints: latency-svc-8dfrr [1.169454018s] May 26 21:44:40.159: INFO: Created: latency-svc-88qbb May 26 21:44:40.207: INFO: Got endpoints: latency-svc-88qbb [1.162554363s] May 26 21:44:40.218: INFO: Created: latency-svc-n424r May 26 21:44:40.239: INFO: Got endpoints: latency-svc-n424r [1.133787289s] May 26 21:44:40.239: INFO: Latencies: [81.158828ms 766.182743ms 793.877312ms 800.769132ms 
804.183576ms 805.942668ms 823.949331ms 827.445698ms 830.130184ms 834.985146ms 847.428355ms 849.369368ms 857.089973ms 857.519438ms 869.537616ms 869.876466ms 870.176834ms 872.223342ms 874.766234ms 881.953659ms 881.957394ms 882.189864ms 882.348236ms 888.108321ms 894.533967ms 898.871361ms 902.1411ms 902.827621ms 903.707508ms 905.629332ms 906.609086ms 908.500157ms 909.051072ms 912.18364ms 912.990337ms 918.551796ms 920.115215ms 921.37211ms 925.025032ms 926.004153ms 927.105932ms 929.106754ms 929.394799ms 929.496847ms 929.971959ms 935.394444ms 939.490151ms 940.289541ms 940.661736ms 941.544227ms 942.791687ms 944.331232ms 944.854273ms 947.521925ms 949.003741ms 949.464797ms 951.683778ms 955.910288ms 959.705336ms 961.268011ms 961.659129ms 962.718269ms 963.989766ms 964.777125ms 965.797616ms 966.211023ms 966.891826ms 969.001982ms 970.04794ms 971.423027ms 972.203883ms 977.254352ms 978.098793ms 978.104189ms 979.010624ms 979.282789ms 982.25832ms 983.435912ms 985.793241ms 986.906652ms 990.479544ms 990.487468ms 992.017374ms 992.720861ms 993.316718ms 997.508421ms 998.362674ms 1.000630714s 1.001384288s 1.001485043s 1.001827895s 1.00238806s 1.002913127s 1.003115916s 1.003363294s 1.005049314s 1.005912606s 1.007982299s 1.008490011s 1.008881845s 1.010262329s 1.011365829s 1.013038339s 1.013733583s 1.016795152s 1.020035226s 1.02054522s 1.020781106s 1.021397515s 1.022952553s 1.027617684s 1.030451384s 1.030971401s 1.031308013s 1.032061717s 1.033633564s 1.035566906s 1.037243717s 1.038345181s 1.038520703s 1.040505399s 1.045177035s 1.04604712s 1.04795712s 1.048112564s 1.052845165s 1.055119039s 1.055497202s 1.056149399s 1.056345328s 1.057410673s 1.060250092s 1.061713522s 1.061756563s 1.06195224s 1.066061481s 1.069145561s 1.071677138s 1.073206853s 1.077944647s 1.07999354s 1.080154321s 1.084122064s 1.084403382s 1.085188048s 1.087437045s 1.089908526s 1.091082076s 1.091205926s 1.091528251s 1.091683333s 1.095754831s 1.097907473s 1.101977761s 1.103434473s 1.103708564s 1.105546164s 1.111167091s 1.11885986s 1.121484947s 1.130696099s 1.133165207s 1.133787289s 1.139276378s 1.142282274s 1.143372071s 1.145175833s 1.150514919s 1.151141534s 1.155537775s 1.156782323s 1.157725624s 1.160801533s 1.162554363s 1.164564818s 1.169454018s 1.169627395s 1.172604634s 1.179280938s 1.179676099s 1.181089473s 1.194184141s 1.197686658s 1.199530272s 1.205724397s 1.207390159s 1.213313783s 1.222332848s 1.235042619s 1.337305036s 1.379708032s 1.457090493s 1.502959902s 1.57628994s 1.611227606s 1.671337905s 1.732242615s 1.774184787s 1.801993729s 1.816875453s] May 26 21:44:40.239: INFO: 50 %ile: 1.010262329s May 26 21:44:40.239: INFO: 90 %ile: 1.181089473s May 26 21:44:40.239: INFO: 99 %ile: 1.801993729s May 26 21:44:40.239: INFO: Total sample count: 200 [AfterEach] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:44:40.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "svc-latency-2467" for this suite. 
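For context on the churn above: each "Created: latency-svc-*" entry is a new Service whose selector matches the single svc-latency-rc pod, and the bracketed duration is how long the corresponding Endpoints object took to appear; the percentiles are computed over those 200 samples. A sketch of one such Service, with the name and selector label assumed for illustration rather than taken from the test source:

apiVersion: v1
kind: Service
metadata:
  name: latency-svc-example    # hypothetical; the real names are generated
  namespace: svc-latency-2467
spec:
  selector:
    name: svc-latency-rc       # assumed label; it must match the RC's pod template
  ports:
  - port: 80
    protocol: TCP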
• [SLOW TEST:18.753 seconds] [sig-network] Service endpoints latency /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should not be very high [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Service endpoints latency should not be very high [Conformance]","total":278,"completed":126,"skipped":2103,"failed":0} SSSSSSSSS ------------------------------ [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:44:40.276: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 26 21:44:44.888: INFO: Successfully updated pod "pod-update-activedeadlineseconds-b321a53a-3064-4323-84f1-906470d1eb1f" May 26 21:44:44.888: INFO: Waiting up to 5m0s for pod "pod-update-activedeadlineseconds-b321a53a-3064-4323-84f1-906470d1eb1f" in namespace "pods-1023" to be "terminated due to deadline exceeded" May 26 21:44:44.897: INFO: Pod "pod-update-activedeadlineseconds-b321a53a-3064-4323-84f1-906470d1eb1f": Phase="Running", Reason="", readiness=true. Elapsed: 9.602109ms May 26 21:44:47.101: INFO: Pod "pod-update-activedeadlineseconds-b321a53a-3064-4323-84f1-906470d1eb1f": Phase="Failed", Reason="DeadlineExceeded", readiness=false. Elapsed: 2.213351338s May 26 21:44:47.101: INFO: Pod "pod-update-activedeadlineseconds-b321a53a-3064-4323-84f1-906470d1eb1f" satisfied condition "terminated due to deadline exceeded" [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:44:47.101: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-1023" for this suite. 
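The pod this test patches looks roughly like the sketch below. activeDeadlineSeconds is the field being updated, and the kubelet reacts by failing the pod with reason DeadlineExceeded, exactly as the Phase="Failed", Reason="DeadlineExceeded" line above shows. The name, image, and deadline value are illustrative:

apiVersion: v1
kind: Pod
metadata:
  name: pod-update-activedeadlineseconds-example   # illustrative name
spec:
  activeDeadlineSeconds: 5      # may be added or decreased on update, never increased or removed
  containers:
  - name: main
    image: docker.io/library/busybox:1.29           # illustrative image
    command: ["sleep", "3600"]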
• [SLOW TEST:6.982 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should allow activeDeadlineSeconds to be updated [NodeConformance] [Conformance]","total":278,"completed":127,"skipped":2112,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:44:47.258: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: setting up watch STEP: submitting the pod to kubernetes May 26 21:44:47.464: INFO: observed the pod list STEP: verifying the pod is in kubernetes STEP: verifying pod creation was observed STEP: deleting the pod gracefully STEP: verifying the kubelet observed the termination notice May 26 21:44:56.728: INFO: no pod exists with the name we were looking for, assuming the termination request was observed and completed STEP: verifying pod deletion was observed [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:44:56.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-6170" for this suite. 
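A sketch of the kind of pod the watch-based test above submits and then deletes gracefully; the name, image, and grace period are assumptions for illustration:

apiVersion: v1
kind: Pod
metadata:
  name: pod-submit-remove-example    # hypothetical; the test generates its name
spec:
  terminationGracePeriodSeconds: 30  # the window the kubelet allows before force-killing
  containers:
  - name: main
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # image family used elsewhere in this run
    args: ["pause"]                  # agnhost subcommand that just sleeps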
• [SLOW TEST:9.520 seconds] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be submitted and removed [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Pods should be submitted and removed [NodeConformance] [Conformance]","total":278,"completed":128,"skipped":2122,"failed":0} [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:44:56.778: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a simple DaemonSet "daemon-set" STEP: Check that daemon pods launch on every node of the cluster. May 26 21:44:57.219: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:44:57.254: INFO: Number of nodes with available pods: 0 May 26 21:44:57.254: INFO: Node jerma-worker is running more than one daemon pod May 26 21:44:58.339: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:44:58.422: INFO: Number of nodes with available pods: 0 May 26 21:44:58.422: INFO: Node jerma-worker is running more than one daemon pod May 26 21:44:59.645: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:44:59.836: INFO: Number of nodes with available pods: 0 May 26 21:44:59.836: INFO: Node jerma-worker is running more than one daemon pod May 26 21:45:00.316: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:00.500: INFO: Number of nodes with available pods: 0 May 26 21:45:00.500: INFO: Node jerma-worker is running more than one daemon pod May 26 21:45:01.317: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:01.352: INFO: Number of nodes with available pods: 1 May 26 21:45:01.352: INFO: Node jerma-worker2 is running more than one daemon pod May 26 21:45:02.258: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:02.269: INFO: Number of nodes with available pods: 2 May 26 21:45:02.269: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Set a daemon pod's phase to 'Failed', check that 
the daemon pod is revived. May 26 21:45:02.384: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:02.419: INFO: Number of nodes with available pods: 1 May 26 21:45:02.419: INFO: Node jerma-worker is running more than one daemon pod May 26 21:45:03.449: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:03.452: INFO: Number of nodes with available pods: 1 May 26 21:45:03.452: INFO: Node jerma-worker is running more than one daemon pod May 26 21:45:04.485: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:04.562: INFO: Number of nodes with available pods: 1 May 26 21:45:04.562: INFO: Node jerma-worker is running more than one daemon pod May 26 21:45:05.445: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:05.477: INFO: Number of nodes with available pods: 1 May 26 21:45:05.477: INFO: Node jerma-worker is running more than one daemon pod May 26 21:45:06.441: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:06.451: INFO: Number of nodes with available pods: 2 May 26 21:45:06.451: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Wait for the failed daemon pod to be completely deleted. [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-6077, will wait for the garbage collector to delete the pods May 26 21:45:06.618: INFO: Deleting DaemonSet.extensions daemon-set took: 62.125495ms May 26 21:45:06.918: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.224475ms May 26 21:45:19.321: INFO: Number of nodes with available pods: 0 May 26 21:45:19.321: INFO: Number of running nodes: 0, number of available pods: 0 May 26 21:45:19.324: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-6077/daemonsets","resourceVersion":"19384593"},"items":null} May 26 21:45:19.327: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-6077/pods","resourceVersion":"19384593"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:45:19.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-6077" for this suite. 
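The repeated "can't tolerate node jerma-control-plane" lines above are expected: the DaemonSet's pod template carries no toleration for the master taint, so that node is legitimately skipped when counting available pods. A minimal sketch of a DaemonSet shaped like the test's (labels and image are illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
  namespace: daemonsets-6077
spec:
  selector:
    matchLabels:
      app: daemon-set              # illustrative label
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      containers:
      - name: app
        image: docker.io/library/httpd:2.4.38-alpine   # illustrative
      # no toleration for node-role.kubernetes.io/master:NoSchedule,
      # hence the control-plane node is skipped in the checks above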
• [SLOW TEST:22.564 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should retry creating failed daemon pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should retry creating failed daemon pods [Conformance]","total":278,"completed":129,"skipped":2122,"failed":0} [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:45:19.342: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating all guestbook components May 26 21:45:19.388: INFO:

apiVersion: v1
kind: Service
metadata:
  name: agnhost-slave
  labels:
    app: agnhost
    role: slave
    tier: backend
spec:
  ports:
  - port: 6379
  selector:
    app: agnhost
    role: slave
    tier: backend

May 26 21:45:19.388: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4841' May 26 21:45:19.793: INFO: stderr: "" May 26 21:45:19.793: INFO: stdout: "service/agnhost-slave created\n" May 26 21:45:19.793: INFO:

apiVersion: v1
kind: Service
metadata:
  name: agnhost-master
  labels:
    app: agnhost
    role: master
    tier: backend
spec:
  ports:
  - port: 6379
    targetPort: 6379
  selector:
    app: agnhost
    role: master
    tier: backend

May 26 21:45:19.793: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4841' May 26 21:45:20.141: INFO: stderr: "" May 26 21:45:20.141: INFO: stdout: "service/agnhost-master created\n" May 26 21:45:20.141: INFO:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # if your cluster supports it, uncomment the following to automatically create
  # an external load-balanced IP for the frontend service.
  # type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend

May 26 21:45:20.141: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4841' May 26 21:45:20.501: INFO: stderr: "" May 26 21:45:20.501: INFO: stdout: "service/frontend created\n" May 26 21:45:20.502: INFO:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: guestbook-frontend
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--backend-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

May 26 21:45:20.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4841' May 26 21:45:20.765: INFO: stderr: "" May 26 21:45:20.765: INFO: stdout: "deployment.apps/frontend created\n" May 26 21:45:20.765: INFO:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-master
spec:
  replicas: 1
  selector:
    matchLabels:
      app: agnhost
      role: master
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: master
        tier: backend
    spec:
      containers:
      - name: master
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 26 21:45:20.766: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4841' May 26 21:45:21.101: INFO: stderr: "" May 26 21:45:21.101: INFO: stdout: "deployment.apps/agnhost-master created\n" May 26 21:45:21.101: INFO:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: agnhost-slave
spec:
  replicas: 2
  selector:
    matchLabels:
      app: agnhost
      role: slave
      tier: backend
  template:
    metadata:
      labels:
        app: agnhost
        role: slave
        tier: backend
    spec:
      containers:
      - name: slave
        image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8
        args: [ "guestbook", "--slaveof", "agnhost-master", "--http-port", "6379" ]
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

May 26 21:45:21.101: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-4841' May 26 21:45:21.378: INFO: stderr: "" May 26 21:45:21.378: INFO: stdout: "deployment.apps/agnhost-slave created\n" STEP: validating guestbook app May 26 21:45:21.378: INFO: Waiting for all frontend pods to be Running. May 26 21:45:31.429: INFO: Waiting for frontend to serve content. May 26 21:45:31.440: INFO: Trying to add a new entry to the guestbook. May 26 21:45:31.451: INFO: Verifying that added entry can be retrieved. STEP: using delete to clean up resources May 26 21:45:31.471: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4841' May 26 21:45:31.634: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 21:45:31.634: INFO: stdout: "service \"agnhost-slave\" force deleted\n" STEP: using delete to clean up resources May 26 21:45:31.635: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4841' May 26 21:45:31.790: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated.
The resource may continue to run on the cluster indefinitely.\n" May 26 21:45:31.790: INFO: stdout: "service \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 26 21:45:31.790: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4841' May 26 21:45:31.920: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 21:45:31.920: INFO: stdout: "service \"frontend\" force deleted\n" STEP: using delete to clean up resources May 26 21:45:31.920: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4841' May 26 21:45:32.033: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 21:45:32.033: INFO: stdout: "deployment.apps \"frontend\" force deleted\n" STEP: using delete to clean up resources May 26 21:45:32.033: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4841' May 26 21:45:32.161: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 21:45:32.161: INFO: stdout: "deployment.apps \"agnhost-master\" force deleted\n" STEP: using delete to clean up resources May 26 21:45:32.161: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-4841' May 26 21:45:32.292: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 21:45:32.292: INFO: stdout: "deployment.apps \"agnhost-slave\" force deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:45:32.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-4841" for this suite. 
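As the comment embedded in the frontend Service manifest above notes, a cluster with a cloud provider could expose the guestbook externally by uncommenting the type line; the Service would then read:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  type: LoadBalancer   # the line left commented out in the manifest the test applies
  ports:
  - port: 80
  selector:
    app: guestbook
    tier: frontend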
• [SLOW TEST:12.992 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Guestbook application /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:380 should create and stop a working application [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Guestbook application should create and stop a working application [Conformance]","total":278,"completed":130,"skipped":2122,"failed":0} SS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:45:32.334: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-map-f200eec5-571b-4da6-b7f1-62e0d9a75749 STEP: Creating a pod to test consume configMaps May 26 21:45:32.456: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-ecbd9543-f09a-4382-85c7-a3aa041c0fca" in namespace "projected-9463" to be "success or failure" May 26 21:45:32.622: INFO: Pod "pod-projected-configmaps-ecbd9543-f09a-4382-85c7-a3aa041c0fca": Phase="Pending", Reason="", readiness=false. Elapsed: 166.024444ms May 26 21:45:34.729: INFO: Pod "pod-projected-configmaps-ecbd9543-f09a-4382-85c7-a3aa041c0fca": Phase="Pending", Reason="", readiness=false. Elapsed: 2.272641846s May 26 21:45:36.732: INFO: Pod "pod-projected-configmaps-ecbd9543-f09a-4382-85c7-a3aa041c0fca": Phase="Pending", Reason="", readiness=false. Elapsed: 4.276011585s May 26 21:45:38.741: INFO: Pod "pod-projected-configmaps-ecbd9543-f09a-4382-85c7-a3aa041c0fca": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.284532991s STEP: Saw pod success May 26 21:45:38.741: INFO: Pod "pod-projected-configmaps-ecbd9543-f09a-4382-85c7-a3aa041c0fca" satisfied condition "success or failure" May 26 21:45:38.743: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-configmaps-ecbd9543-f09a-4382-85c7-a3aa041c0fca container projected-configmap-volume-test: STEP: delete the pod May 26 21:45:38.762: INFO: Waiting for pod pod-projected-configmaps-ecbd9543-f09a-4382-85c7-a3aa041c0fca to disappear May 26 21:45:38.765: INFO: Pod pod-projected-configmaps-ecbd9543-f09a-4382-85c7-a3aa041c0fca no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:45:38.765: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-9463" for this suite. 
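A sketch of the pod and projected volume this test builds: "with mappings" in the test name refers to the items list, which remaps a ConfigMap key onto a chosen file path, and "as non-root" to the pod-level securityContext. The pod name, key names, UID, and image are assumptions; the ConfigMap name is the one from the log above:

apiVersion: v1
kind: Pod
metadata:
  name: pod-projected-configmaps-example   # hypothetical; the test generates its name
spec:
  securityContext:
    runAsUser: 1000                        # illustrative non-root UID
  containers:
  - name: projected-configmap-volume-test
    image: gcr.io/kubernetes-e2e-test-images/agnhost:2.8   # illustrative; any file-reading image works
    volumeMounts:
    - name: projected-configmap-volume
      mountPath: /etc/projected-configmap-volume
      readOnly: true
  volumes:
  - name: projected-configmap-volume
    projected:
      sources:
      - configMap:
          name: projected-configmap-test-volume-map-f200eec5-571b-4da6-b7f1-62e0d9a75749
          items:
          - key: data-1                    # illustrative key
            path: mapped/data-1            # the mapping: the key's content lands at this relative path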
• [SLOW TEST:6.454 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with mappings as non-root [NodeConformance] [Conformance]","total":278,"completed":131,"skipped":2124,"failed":0} SSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:45:38.789: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 26 21:45:38.887: INFO: Waiting up to 5m0s for pod "downwardapi-volume-4171469e-5042-4c12-a277-b35794c85316" in namespace "downward-api-7836" to be "success or failure" May 26 21:45:38.891: INFO: Pod "downwardapi-volume-4171469e-5042-4c12-a277-b35794c85316": Phase="Pending", Reason="", readiness=false. Elapsed: 3.476516ms May 26 21:45:40.894: INFO: Pod "downwardapi-volume-4171469e-5042-4c12-a277-b35794c85316": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007022155s May 26 21:45:42.899: INFO: Pod "downwardapi-volume-4171469e-5042-4c12-a277-b35794c85316": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011268496s STEP: Saw pod success May 26 21:45:42.899: INFO: Pod "downwardapi-volume-4171469e-5042-4c12-a277-b35794c85316" satisfied condition "success or failure" May 26 21:45:42.902: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-4171469e-5042-4c12-a277-b35794c85316 container client-container: STEP: delete the pod May 26 21:45:42.923: INFO: Waiting for pod downwardapi-volume-4171469e-5042-4c12-a277-b35794c85316 to disappear May 26 21:45:42.940: INFO: Pod downwardapi-volume-4171469e-5042-4c12-a277-b35794c85316 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:45:42.940: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7836" for this suite. 
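The downward API volume plugin tested above surfaces the container's own resource requests as files; a sketch under illustrative names, using a resourceFieldRef for requests.cpu:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: downward-cpu-demo
    spec:
      restartPolicy: Never
      containers:
      - name: client-container
        image: busybox
        command: ["sh", "-c", "cat /etc/podinfo/cpu_request"]
        resources:
          requests:
            cpu: 250m
        volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
      volumes:
      - name: podinfo
        downwardAPI:
          items:
          - path: cpu_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.cpu
              divisor: 1m          # report the value in millicores
    EOF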
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":132,"skipped":2130,"failed":0} SSSS ------------------------------ [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:45:42.949: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:45:43.065: INFO: Creating simple daemon set daemon-set STEP: Check that daemon pods launch on every node of the cluster. May 26 21:45:43.084: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:43.090: INFO: Number of nodes with available pods: 0 May 26 21:45:43.090: INFO: Node jerma-worker is running more than one daemon pod May 26 21:45:44.103: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:44.106: INFO: Number of nodes with available pods: 0 May 26 21:45:44.106: INFO: Node jerma-worker is running more than one daemon pod May 26 21:45:45.095: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:45.099: INFO: Number of nodes with available pods: 0 May 26 21:45:45.099: INFO: Node jerma-worker is running more than one daemon pod May 26 21:45:46.094: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:46.097: INFO: Number of nodes with available pods: 0 May 26 21:45:46.097: INFO: Node jerma-worker is running more than one daemon pod May 26 21:45:47.096: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:47.100: INFO: Number of nodes with available pods: 0 May 26 21:45:47.100: INFO: Node jerma-worker is running more than one daemon pod May 26 21:45:48.113: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:48.115: INFO: Number of nodes with available pods: 2 May 26 21:45:48.115: INFO: Number of running nodes: 2, number of available pods: 2 STEP: Update daemon pods image. STEP: Check that daemon pods images are updated. May 26 21:45:48.209: INFO: Wrong image for pod: daemon-set-6dt9j. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 26 21:45:48.209: INFO: Wrong image for pod: daemon-set-rslzh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 26 21:45:48.233: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:49.244: INFO: Wrong image for pod: daemon-set-6dt9j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 26 21:45:49.244: INFO: Wrong image for pod: daemon-set-rslzh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 26 21:45:49.258: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:50.239: INFO: Wrong image for pod: daemon-set-6dt9j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 26 21:45:50.239: INFO: Wrong image for pod: daemon-set-rslzh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 26 21:45:50.244: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:51.237: INFO: Wrong image for pod: daemon-set-6dt9j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 26 21:45:51.237: INFO: Wrong image for pod: daemon-set-rslzh. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 26 21:45:51.237: INFO: Pod daemon-set-rslzh is not available May 26 21:45:51.241: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:52.238: INFO: Wrong image for pod: daemon-set-6dt9j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 26 21:45:52.238: INFO: Pod daemon-set-gvjlh is not available May 26 21:45:52.242: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:53.238: INFO: Wrong image for pod: daemon-set-6dt9j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 26 21:45:53.238: INFO: Pod daemon-set-gvjlh is not available May 26 21:45:53.242: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:54.238: INFO: Wrong image for pod: daemon-set-6dt9j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 26 21:45:54.238: INFO: Pod daemon-set-gvjlh is not available May 26 21:45:54.242: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:55.238: INFO: Wrong image for pod: daemon-set-6dt9j. 
Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 26 21:45:55.242: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:56.258: INFO: Wrong image for pod: daemon-set-6dt9j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 26 21:45:56.258: INFO: Pod daemon-set-6dt9j is not available May 26 21:45:56.262: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:57.238: INFO: Wrong image for pod: daemon-set-6dt9j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 26 21:45:57.238: INFO: Pod daemon-set-6dt9j is not available May 26 21:45:57.242: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:58.238: INFO: Wrong image for pod: daemon-set-6dt9j. Expected: gcr.io/kubernetes-e2e-test-images/agnhost:2.8, got: docker.io/library/httpd:2.4.38-alpine. May 26 21:45:58.238: INFO: Pod daemon-set-6dt9j is not available May 26 21:45:58.243: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:45:59.256: INFO: Pod daemon-set-pl75x is not available May 26 21:45:59.277: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:46:00.238: INFO: Pod daemon-set-pl75x is not available May 26 21:46:00.243: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node STEP: Check that daemon pods are still running on every node of the cluster. 
May 26 21:46:00.247: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:46:00.250: INFO: Number of nodes with available pods: 1 May 26 21:46:00.251: INFO: Node jerma-worker is running more than one daemon pod May 26 21:46:01.868: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:46:01.959: INFO: Number of nodes with available pods: 1 May 26 21:46:01.959: INFO: Node jerma-worker is running more than one daemon pod May 26 21:46:02.359: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:46:02.363: INFO: Number of nodes with available pods: 1 May 26 21:46:02.363: INFO: Node jerma-worker is running more than one daemon pod May 26 21:46:03.256: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:46:03.260: INFO: Number of nodes with available pods: 1 May 26 21:46:03.260: INFO: Node jerma-worker is running more than one daemon pod May 26 21:46:04.255: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 21:46:04.259: INFO: Number of nodes with available pods: 2 May 26 21:46:04.259: INFO: Number of running nodes: 2, number of available pods: 2 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7975, will wait for the garbage collector to delete the pods May 26 21:46:04.336: INFO: Deleting DaemonSet.extensions daemon-set took: 8.586472ms May 26 21:46:04.636: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.267689ms May 26 21:46:19.340: INFO: Number of nodes with available pods: 0 May 26 21:46:19.340: INFO: Number of running nodes: 0, number of available pods: 0 May 26 21:46:19.343: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7975/daemonsets","resourceVersion":"19385103"},"items":null} May 26 21:46:19.345: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7975/pods","resourceVersion":"19385103"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:46:19.356: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7975" for this suite. 
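The rollout observed above hinges on the DaemonSet's updateStrategy; a minimal sketch of the same shape (names and maxUnavailable illustrative), followed by the image change that triggers the node-by-node replacement:

    kubectl apply -f - <<'EOF'
    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: daemon-set
    spec:
      selector:
        matchLabels:
          app: daemon-set
      updateStrategy:
        type: RollingUpdate
        rollingUpdate:
          maxUnavailable: 1        # replace at most one node's pod at a time
      template:
        metadata:
          labels:
            app: daemon-set
        spec:
          containers:
          - name: app
            image: docker.io/library/httpd:2.4.38-alpine
    EOF
    # Changing the pod template's image starts the rolling replacement:
    kubectl set image daemonset/daemon-set app=gcr.io/kubernetes-e2e-test-images/agnhost:2.8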
• [SLOW TEST:36.416 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should update pod when spec was updated and update strategy is RollingUpdate [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should update pod when spec was updated and update strategy is RollingUpdate [Conformance]","total":278,"completed":133,"skipped":2134,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:46:19.365: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-5956 STEP: changing the ExternalName service to type=ClusterIP STEP: creating replication controller externalname-service in namespace services-5956 I0526 21:46:19.511406 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-5956, replica count: 2 I0526 21:46:22.561917 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 21:46:25.562209 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 26 21:46:25.562: INFO: Creating new exec pod May 26 21:46:30.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5956 execpod6n9zq -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 26 21:46:30.810: INFO: stderr: "I0526 21:46:30.708480 2172 log.go:172] (0xc0009aa000) (0xc000740f00) Create stream\nI0526 21:46:30.708589 2172 log.go:172] (0xc0009aa000) (0xc000740f00) Stream added, broadcasting: 1\nI0526 21:46:30.710863 2172 log.go:172] (0xc0009aa000) Reply frame received for 1\nI0526 21:46:30.710897 2172 log.go:172] (0xc0009aa000) (0xc00051e280) Create stream\nI0526 21:46:30.710911 2172 log.go:172] (0xc0009aa000) (0xc00051e280) Stream added, broadcasting: 3\nI0526 21:46:30.711862 2172 log.go:172] (0xc0009aa000) Reply frame received for 3\nI0526 21:46:30.711892 2172 log.go:172] (0xc0009aa000) (0xc000740fa0) Create stream\nI0526 21:46:30.711901 2172 log.go:172] (0xc0009aa000) (0xc000740fa0) Stream added, broadcasting: 5\nI0526 21:46:30.712836 2172 log.go:172] (0xc0009aa000) Reply frame received for 5\nI0526 21:46:30.799959 2172 log.go:172] (0xc0009aa000) Data frame received for 5\nI0526 21:46:30.799989 2172 log.go:172] (0xc000740fa0) (5) Data frame handling\nI0526 21:46:30.800010 2172 log.go:172] (0xc000740fa0) (5) Data frame 
sent\n+ nc -zv -t -w 2 externalname-service 80\nI0526 21:46:30.802590 2172 log.go:172] (0xc0009aa000) Data frame received for 5\nI0526 21:46:30.802637 2172 log.go:172] (0xc000740fa0) (5) Data frame handling\nI0526 21:46:30.802677 2172 log.go:172] (0xc000740fa0) (5) Data frame sent\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0526 21:46:30.803225 2172 log.go:172] (0xc0009aa000) Data frame received for 3\nI0526 21:46:30.803249 2172 log.go:172] (0xc00051e280) (3) Data frame handling\nI0526 21:46:30.803270 2172 log.go:172] (0xc0009aa000) Data frame received for 5\nI0526 21:46:30.803284 2172 log.go:172] (0xc000740fa0) (5) Data frame handling\nI0526 21:46:30.805319 2172 log.go:172] (0xc0009aa000) Data frame received for 1\nI0526 21:46:30.805353 2172 log.go:172] (0xc000740f00) (1) Data frame handling\nI0526 21:46:30.805380 2172 log.go:172] (0xc000740f00) (1) Data frame sent\nI0526 21:46:30.805409 2172 log.go:172] (0xc0009aa000) (0xc000740f00) Stream removed, broadcasting: 1\nI0526 21:46:30.805429 2172 log.go:172] (0xc0009aa000) Go away received\nI0526 21:46:30.805901 2172 log.go:172] (0xc0009aa000) (0xc000740f00) Stream removed, broadcasting: 1\nI0526 21:46:30.805936 2172 log.go:172] (0xc0009aa000) (0xc00051e280) Stream removed, broadcasting: 3\nI0526 21:46:30.805950 2172 log.go:172] (0xc0009aa000) (0xc000740fa0) Stream removed, broadcasting: 5\n" May 26 21:46:30.810: INFO: stdout: "" May 26 21:46:30.810: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-5956 execpod6n9zq -- /bin/sh -x -c nc -zv -t -w 2 10.105.191.215 80' May 26 21:46:31.041: INFO: stderr: "I0526 21:46:30.941791 2191 log.go:172] (0xc0009ac160) (0xc000767680) Create stream\nI0526 21:46:30.941841 2191 log.go:172] (0xc0009ac160) (0xc000767680) Stream added, broadcasting: 1\nI0526 21:46:30.944344 2191 log.go:172] (0xc0009ac160) Reply frame received for 1\nI0526 21:46:30.944385 2191 log.go:172] (0xc0009ac160) (0xc000920000) Create stream\nI0526 21:46:30.944395 2191 log.go:172] (0xc0009ac160) (0xc000920000) Stream added, broadcasting: 3\nI0526 21:46:30.945593 2191 log.go:172] (0xc0009ac160) Reply frame received for 3\nI0526 21:46:30.945640 2191 log.go:172] (0xc0009ac160) (0xc0008fe0a0) Create stream\nI0526 21:46:30.945658 2191 log.go:172] (0xc0009ac160) (0xc0008fe0a0) Stream added, broadcasting: 5\nI0526 21:46:30.946664 2191 log.go:172] (0xc0009ac160) Reply frame received for 5\nI0526 21:46:31.033969 2191 log.go:172] (0xc0009ac160) Data frame received for 3\nI0526 21:46:31.034007 2191 log.go:172] (0xc000920000) (3) Data frame handling\nI0526 21:46:31.036105 2191 log.go:172] (0xc0009ac160) Data frame received for 5\nI0526 21:46:31.036138 2191 log.go:172] (0xc0008fe0a0) (5) Data frame handling\nI0526 21:46:31.036152 2191 log.go:172] (0xc0008fe0a0) (5) Data frame sent\nI0526 21:46:31.036162 2191 log.go:172] (0xc0009ac160) Data frame received for 5\nI0526 21:46:31.036180 2191 log.go:172] (0xc0008fe0a0) (5) Data frame handling\n+ nc -zv -t -w 2 10.105.191.215 80\nConnection to 10.105.191.215 80 port [tcp/http] succeeded!\nI0526 21:46:31.036215 2191 log.go:172] (0xc0009ac160) Data frame received for 1\nI0526 21:46:31.036228 2191 log.go:172] (0xc000767680) (1) Data frame handling\nI0526 21:46:31.036238 2191 log.go:172] (0xc000767680) (1) Data frame sent\nI0526 21:46:31.036251 2191 log.go:172] (0xc0009ac160) (0xc000767680) Stream removed, broadcasting: 1\nI0526 21:46:31.036266 2191 log.go:172] (0xc0009ac160) Go away received\nI0526 21:46:31.036686 2191 log.go:172] 
(0xc0009ac160) (0xc000767680) Stream removed, broadcasting: 1\nI0526 21:46:31.036713 2191 log.go:172] (0xc0009ac160) (0xc000920000) Stream removed, broadcasting: 3\nI0526 21:46:31.036722 2191 log.go:172] (0xc0009ac160) (0xc0008fe0a0) Stream removed, broadcasting: 5\n" May 26 21:46:31.041: INFO: stdout: "" May 26 21:46:31.041: INFO: Cleaning up the ExternalName to ClusterIP test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:46:31.145: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-5956" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.791 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to ClusterIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to ClusterIP [Conformance]","total":278,"completed":134,"skipped":2163,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:46:31.157: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should support retrieving logs from the container over websockets [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:46:31.223: INFO: >>> kubeConfig: /root/.kube/config STEP: creating the pod STEP: submitting the pod to kubernetes [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:46:35.298: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-667" for this suite. 
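Retrieving logs over websockets upgrades the same pod log subresource that ordinary log requests use; for comparison, a plain (non-websocket) read of that endpoint, with the pod name left as an illustrative placeholder, looks like:

    # The conformance test performs the same GET with a websocket upgrade
    # instead of a plain HTTP response body.
    kubectl get --raw "/api/v1/namespaces/pods-667/pods/<pod-name>/log"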
•{"msg":"PASSED [k8s.io] Pods should support retrieving logs from the container over websockets [NodeConformance] [Conformance]","total":278,"completed":135,"skipped":2221,"failed":0} SS ------------------------------ [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:46:35.307: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir volume type on node default medium May 26 21:46:35.390: INFO: Waiting up to 5m0s for pod "pod-61f8e810-741e-4a62-8c39-fe53be9b3185" in namespace "emptydir-1726" to be "success or failure" May 26 21:46:35.402: INFO: Pod "pod-61f8e810-741e-4a62-8c39-fe53be9b3185": Phase="Pending", Reason="", readiness=false. Elapsed: 11.757765ms May 26 21:46:37.498: INFO: Pod "pod-61f8e810-741e-4a62-8c39-fe53be9b3185": Phase="Pending", Reason="", readiness=false. Elapsed: 2.107876162s May 26 21:46:39.502: INFO: Pod "pod-61f8e810-741e-4a62-8c39-fe53be9b3185": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.111561437s STEP: Saw pod success May 26 21:46:39.502: INFO: Pod "pod-61f8e810-741e-4a62-8c39-fe53be9b3185" satisfied condition "success or failure" May 26 21:46:39.505: INFO: Trying to get logs from node jerma-worker pod pod-61f8e810-741e-4a62-8c39-fe53be9b3185 container test-container: STEP: delete the pod May 26 21:46:39.536: INFO: Waiting for pod pod-61f8e810-741e-4a62-8c39-fe53be9b3185 to disappear May 26 21:46:39.540: INFO: Pod pod-61f8e810-741e-4a62-8c39-fe53be9b3185 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:46:39.540: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1726" for this suite. 
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on default medium should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":136,"skipped":2223,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:46:39.546: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-6b2417ff-35be-4bfc-bb0c-a1f1fa0f85fe STEP: Creating configMap with name cm-test-opt-upd-b027a7fc-0b65-465d-a62b-7429515748d2 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-6b2417ff-35be-4bfc-bb0c-a1f1fa0f85fe STEP: Updating configmap cm-test-opt-upd-b027a7fc-0b65-465d-a62b-7429515748d2 STEP: Creating configMap with name cm-test-opt-create-f773e1e7-8353-4637-aee7-e01c6e7c6c90 STEP: waiting to observe update in volume [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:46:49.871: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7604" for this suite. 
• [SLOW TEST:10.331 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":137,"skipped":2234,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:46:49.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group and version but different kinds (two CRDs) show up in OpenAPI documentation May 26 21:46:49.958: INFO: >>> kubeConfig: /root/.kube/config May 26 21:46:52.435: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:47:02.878: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-3911" for this suite. 
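Publishing works per kind: assuming two CRDs with kinds Foo and Bar registered under the same group/version (say stable.example.com/v1), each schema is served independently in the aggregated OpenAPI document, so both can be resolved:

    # Both kinds come from the same group/version but keep separate schemas:
    kubectl explain foo.spec
    kubectl explain bar.spec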
• [SLOW TEST:13.006 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group and version but different kinds [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group and version but different kinds [Conformance]","total":278,"completed":138,"skipped":2249,"failed":0} SSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:47:02.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:47:02.986: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:47:09.151: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-3503" for this suite. 
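Listing goes through the cluster-scoped customresourcedefinitions resource; the hand-run equivalent of what this test exercises through the client is simply:

    kubectl get customresourcedefinitions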
• [SLOW TEST:6.276 seconds] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Simple CustomResourceDefinition /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/custom_resource_definition.go:47 listing custom resource definition objects works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition listing custom resource definition objects works [Conformance]","total":278,"completed":139,"skipped":2253,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:47:09.160: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on tmpfs May 26 21:47:09.225: INFO: Waiting up to 5m0s for pod "pod-83dbe9ca-1b24-4606-886c-fdda2c272abc" in namespace "emptydir-5712" to be "success or failure" May 26 21:47:09.275: INFO: Pod "pod-83dbe9ca-1b24-4606-886c-fdda2c272abc": Phase="Pending", Reason="", readiness=false. Elapsed: 50.208333ms May 26 21:47:11.279: INFO: Pod "pod-83dbe9ca-1b24-4606-886c-fdda2c272abc": Phase="Pending", Reason="", readiness=false. Elapsed: 2.05403427s May 26 21:47:13.284: INFO: Pod "pod-83dbe9ca-1b24-4606-886c-fdda2c272abc": Phase="Pending", Reason="", readiness=false. Elapsed: 4.05902396s May 26 21:47:15.288: INFO: Pod "pod-83dbe9ca-1b24-4606-886c-fdda2c272abc": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.063290879s STEP: Saw pod success May 26 21:47:15.288: INFO: Pod "pod-83dbe9ca-1b24-4606-886c-fdda2c272abc" satisfied condition "success or failure" May 26 21:47:15.291: INFO: Trying to get logs from node jerma-worker pod pod-83dbe9ca-1b24-4606-886c-fdda2c272abc container test-container: STEP: delete the pod May 26 21:47:15.325: INFO: Waiting for pod pod-83dbe9ca-1b24-4606-886c-fdda2c272abc to disappear May 26 21:47:15.343: INFO: Pod pod-83dbe9ca-1b24-4606-886c-fdda2c272abc no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:47:15.343: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-5712" for this suite. 
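The (non-root, 0666, tmpfs) tuple maps onto a pod like the following sketch (names illustrative): a memory-backed emptyDir, written as a non-root user, with the file chmod'd to 0666:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: emptydir-tmpfs-demo
    spec:
      restartPolicy: Never
      securityContext:
        runAsUser: 1000            # non-root
      containers:
      - name: test-container
        image: busybox
        command: ["sh", "-c", "echo x > /test/f && chmod 0666 /test/f && ls -l /test/f"]
        volumeMounts:
        - name: vol
          mountPath: /test
      volumes:
      - name: vol
        emptyDir:
          medium: Memory           # tmpfs-backed
    EOF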
• [SLOW TEST:6.191 seconds] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/empty_dir.go:40 should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":140,"skipped":2262,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:47:15.352: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a service in the namespace STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there is no service in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:47:21.632: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-3102" for this suite. STEP: Destroying namespace "nsdeletetest-677" for this suite. May 26 21:47:21.644: INFO: Namespace nsdeletetest-677 was already deleted STEP: Destroying namespace "nsdeletetest-8964" for this suite. 
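A by-hand version of the lifecycle this test checks, with illustrative names:

    kubectl create namespace demo-ns
    kubectl create service clusterip demo-svc --tcp=80:80 -n demo-ns
    kubectl delete namespace demo-ns
    # Once deletion finishes, the namespace's services are gone with it:
    kubectl get services -n demo-ns    # reports no resources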
• [SLOW TEST:6.296 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all services are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all services are removed when a namespace is deleted [Conformance]","total":278,"completed":141,"skipped":2274,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:47:21.648: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir-wrapper STEP: Waiting for a default service account to be provisioned in namespace [It] should not conflict [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Cleaning up the secret STEP: Cleaning up the configmap STEP: Cleaning up the pod [AfterEach] [sig-storage] EmptyDir wrapper volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:47:25.788: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-wrapper-6825" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir wrapper volumes should not conflict [Conformance]","total":278,"completed":142,"skipped":2285,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:47:25.845: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] custom resource defaulting for requests and from storage works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:47:26.044: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:47:27.422: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-698" for this suite. 
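Defaulting is driven by `default` markers in the structural schema of an apiextensions.k8s.io/v1 CRD; defaults are applied both on incoming requests and when objects are read back from storage, which is the two-sided behavior this test verifies. A minimal sketch (group and field names illustrative):

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: foos.stable.example.com
    spec:
      group: stable.example.com
      scope: Namespaced
      names:
        plural: foos
        singular: foo
        kind: Foo
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            properties:
              spec:
                type: object
                properties:
                  replicas:
                    type: integer
                    default: 1     # filled in on requests and from storage
    EOF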
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] custom resource defaulting for requests and from storage works [Conformance]","total":278,"completed":143,"skipped":2316,"failed":0} SS ------------------------------ [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:47:27.469: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:47:27.535: INFO: Creating replica set "test-rolling-update-controller" (going to be adopted) May 26 21:47:27.553: INFO: Pod name sample-pod: Found 0 pods out of 1 May 26 21:47:32.557: INFO: Pod name sample-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 26 21:47:32.557: INFO: Creating deployment "test-rolling-update-deployment" May 26 21:47:32.561: INFO: Ensuring deployment "test-rolling-update-deployment" gets the next revision from the one the adopted replica set "test-rolling-update-controller" has May 26 21:47:32.570: INFO: new replicaset for deployment "test-rolling-update-deployment" is yet to be created May 26 21:47:34.578: INFO: Ensuring status for deployment "test-rolling-update-deployment" is the expected May 26 21:47:34.582: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126452, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126452, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126452, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126452, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rolling-update-deployment-67cf4f6444\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 21:47:36.585: INFO: Ensuring deployment "test-rolling-update-deployment" has one old replica set (the one it adopted) [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 26 21:47:36.593: INFO: Deployment "test-rolling-update-deployment": &Deployment{ObjectMeta:{test-rolling-update-deployment deployment-7958 /apis/apps/v1/namespaces/deployment-7958/deployments/test-rolling-update-deployment 5268d0cd-e10a-4046-b73c-6d3ec8990315 19385769 1 2020-05-26 21:47:32 +0000 UTC map[name:sample-pod] map[deployment.kubernetes.io/revision:3546343826724305833] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00556d668 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:1,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-26 21:47:32 +0000 UTC,LastTransitionTime:2020-05-26 21:47:32 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rolling-update-deployment-67cf4f6444" has successfully progressed.,LastUpdateTime:2020-05-26 21:47:36 +0000 UTC,LastTransitionTime:2020-05-26 21:47:32 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 26 21:47:36.595: INFO: New ReplicaSet "test-rolling-update-deployment-67cf4f6444" of Deployment "test-rolling-update-deployment": &ReplicaSet{ObjectMeta:{test-rolling-update-deployment-67cf4f6444 deployment-7958 /apis/apps/v1/namespaces/deployment-7958/replicasets/test-rolling-update-deployment-67cf4f6444 8c25278f-8947-4365-9899-bd55aebf498b 19385758 1 2020-05-26 21:47:32 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305833] [{apps/v1 Deployment test-rolling-update-deployment 5268d0cd-e10a-4046-b73c-6d3ec8990315 0xc00556dc37 0xc00556dc38}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod-template-hash: 67cf4f6444,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc00556dcd8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[]
[]}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 26 21:47:36.595: INFO: All old ReplicaSets of Deployment "test-rolling-update-deployment": May 26 21:47:36.596: INFO: &ReplicaSet{ObjectMeta:{test-rolling-update-controller deployment-7958 /apis/apps/v1/namespaces/deployment-7958/replicasets/test-rolling-update-controller 7e529b6e-2d22-4e67-8fc9-d56a2baa8d53 19385767 2 2020-05-26 21:47:27 +0000 UTC map[name:sample-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:3546343826724305832] [{apps/v1 Deployment test-rolling-update-deployment 5268d0cd-e10a-4046-b73c-6d3ec8990315 0xc00556daf7 0xc00556daf8}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: sample-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:sample-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc00556db78 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 26 21:47:36.598: INFO: Pod "test-rolling-update-deployment-67cf4f6444-kv8ph" is available: &Pod{ObjectMeta:{test-rolling-update-deployment-67cf4f6444-kv8ph test-rolling-update-deployment-67cf4f6444- deployment-7958 /api/v1/namespaces/deployment-7958/pods/test-rolling-update-deployment-67cf4f6444-kv8ph 0e22ccc8-8720-4903-bed3-ac0c589c31d3 19385757 0 2020-05-26 21:47:32 +0000 UTC map[name:sample-pod pod-template-hash:67cf4f6444] map[] [{apps/v1 ReplicaSet test-rolling-update-deployment-67cf4f6444 8c25278f-8947-4365-9899-bd55aebf498b 0xc0055a9147 0xc0055a9148}] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-nkwxz,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-nkwxz,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-nkwxz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:47:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:47:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:47:36 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 21:47:32 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.170,StartTime:2020-05-26 21:47:32 +0000 
UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 21:47:35 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://28aefa2b2521c5dc6ec4bd16dedcf9e0680b9ebba3f243926a2a70cfb68edb21,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.170,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:47:36.598: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-7958" for this suite. • [SLOW TEST:9.137 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 RollingUpdateDeployment should delete old pods and create new ones [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment RollingUpdateDeployment should delete old pods and create new ones [Conformance]","total":278,"completed":144,"skipped":2318,"failed":0} S ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:47:36.606: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 26 21:47:36.930: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 26 21:47:36.940: INFO: Waiting for terminating namespaces to be deleted... 
May 26 21:47:36.943: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 26 21:47:36.948: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 26 21:47:36.948: INFO: Container kindnet-cni ready: true, restart count 2 May 26 21:47:36.948: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 26 21:47:36.948: INFO: Container kube-proxy ready: true, restart count 0 May 26 21:47:36.948: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 26 21:47:36.974: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 26 21:47:36.974: INFO: Container kube-proxy ready: true, restart count 0 May 26 21:47:36.974: INFO: test-rolling-update-deployment-67cf4f6444-kv8ph from deployment-7958 started at 2020-05-26 21:47:32 +0000 UTC (1 container status recorded) May 26 21:47:36.974: INFO: Container agnhost ready: true, restart count 0 May 26 21:47:36.974: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 26 21:47:36.974: INFO: Container kube-hunter ready: false, restart count 0 May 26 21:47:36.974: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 26 21:47:36.974: INFO: Container kube-bench ready: false, restart count 0 May 26 21:47:36.974: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 26 21:47:36.974: INFO: Container kindnet-cni ready: true, restart count 2 [It] validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: verifying the node has the label node jerma-worker STEP: verifying the node has the label node jerma-worker2 May 26 21:47:37.253: INFO: Pod test-rolling-update-deployment-67cf4f6444-kv8ph requesting resource cpu=0m on Node jerma-worker2 May 26 21:47:37.253: INFO: Pod kindnet-c5svj requesting resource cpu=100m on Node jerma-worker May 26 21:47:37.253: INFO: Pod kindnet-zk6sq requesting resource cpu=100m on Node jerma-worker2 May 26 21:47:37.253: INFO: Pod kube-proxy-44mlz requesting resource cpu=0m on Node jerma-worker May 26 21:47:37.253: INFO: Pod kube-proxy-75q42 requesting resource cpu=0m on Node jerma-worker2 STEP: Starting Pods to consume most of the cluster CPU. May 26 21:47:37.253: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker May 26 21:47:37.258: INFO: Creating a pod which consumes cpu=11130m on Node jerma-worker2 STEP: Creating another pod that requires an unavailable amount of CPU.
STEP: Considering event: Type = [Normal], Name = [filler-pod-2f9560fc-c825-43ec-a018-c221b15182fb.1612b24be0eb84a3], Reason = [Scheduled], Message = [Successfully assigned sched-pred-141/filler-pod-2f9560fc-c825-43ec-a018-c221b15182fb to jerma-worker2] STEP: Considering event: Type = [Normal], Name = [filler-pod-2f9560fc-c825-43ec-a018-c221b15182fb.1612b24c305810f3], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-2f9560fc-c825-43ec-a018-c221b15182fb.1612b24c8aa23cb3], Reason = [Created], Message = [Created container filler-pod-2f9560fc-c825-43ec-a018-c221b15182fb] STEP: Considering event: Type = [Normal], Name = [filler-pod-2f9560fc-c825-43ec-a018-c221b15182fb.1612b24c9d4754e5], Reason = [Started], Message = [Started container filler-pod-2f9560fc-c825-43ec-a018-c221b15182fb] STEP: Considering event: Type = [Normal], Name = [filler-pod-f23fef63-5052-4325-9457-98aa5d0e3701.1612b24bdfc46117], Reason = [Scheduled], Message = [Successfully assigned sched-pred-141/filler-pod-f23fef63-5052-4325-9457-98aa5d0e3701 to jerma-worker] STEP: Considering event: Type = [Normal], Name = [filler-pod-f23fef63-5052-4325-9457-98aa5d0e3701.1612b24c5f3cc14c], Reason = [Pulled], Message = [Container image "k8s.gcr.io/pause:3.1" already present on machine] STEP: Considering event: Type = [Normal], Name = [filler-pod-f23fef63-5052-4325-9457-98aa5d0e3701.1612b24c956140f3], Reason = [Created], Message = [Created container filler-pod-f23fef63-5052-4325-9457-98aa5d0e3701] STEP: Considering event: Type = [Normal], Name = [filler-pod-f23fef63-5052-4325-9457-98aa5d0e3701.1612b24ca57ad5bd], Reason = [Started], Message = [Started container filler-pod-f23fef63-5052-4325-9457-98aa5d0e3701] STEP: Considering event: Type = [Warning], Name = [additional-pod.1612b24cd5e63de1], Reason = [FailedScheduling], Message = [0/3 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 2 Insufficient cpu.] STEP: removing the label node off the node jerma-worker STEP: verifying the node doesn't have the label node STEP: removing the label node off the node jerma-worker2 STEP: verifying the node doesn't have the label node [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:47:42.900: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-141" for this suite. 
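------------------------------
The FailedScheduling event above is the crux of this predicate spec: the filler pods consume almost all allocatable CPU on both workers, so one more pod with a non-trivial CPU request has nowhere to fit (the third node, the master, is excluded by its taint). A hedged client-go sketch of such an unschedulable pod follows; the namespace and the exact request quantity are illustrative, not taken from the test source.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// With the filler pods holding the nodes' CPU, this request cannot
	// be satisfied; the scheduler emits a FailedScheduling event like
	// "0/3 nodes are available: ... 2 Insufficient cpu."
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "additional-pod"},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{
				Name:  "pause",
				Image: "k8s.gcr.io/pause:3.1",
				Resources: corev1.ResourceRequirements{
					Requests: corev1.ResourceList{
						corev1.ResourceCPU: resource.MustParse("500m"), // illustrative quantity
					},
				},
			}},
		},
	}
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
------------------------------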
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:6.318 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates resource limits of pods that are allowed to run [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates resource limits of pods that are allowed to run [Conformance]","total":278,"completed":145,"skipped":2319,"failed":0} SSSS ------------------------------ [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:47:42.925: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 26 21:47:43.005: INFO: Waiting up to 5m0s for pod "downward-api-28786372-6ad0-46f1-b57c-5015732eec3a" in namespace "downward-api-7507" to be "success or failure" May 26 21:47:43.034: INFO: Pod "downward-api-28786372-6ad0-46f1-b57c-5015732eec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 28.162294ms May 26 21:47:45.038: INFO: Pod "downward-api-28786372-6ad0-46f1-b57c-5015732eec3a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032126801s May 26 21:47:47.043: INFO: Pod "downward-api-28786372-6ad0-46f1-b57c-5015732eec3a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.037139568s STEP: Saw pod success May 26 21:47:47.043: INFO: Pod "downward-api-28786372-6ad0-46f1-b57c-5015732eec3a" satisfied condition "success or failure" May 26 21:47:47.046: INFO: Trying to get logs from node jerma-worker2 pod downward-api-28786372-6ad0-46f1-b57c-5015732eec3a container dapi-container: STEP: delete the pod May 26 21:47:47.066: INFO: Waiting for pod downward-api-28786372-6ad0-46f1-b57c-5015732eec3a to disappear May 26 21:47:47.107: INFO: Pod downward-api-28786372-6ad0-46f1-b57c-5015732eec3a no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:47:47.107: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7507" for this suite. 
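------------------------------
The Downward API spec above hinges on fieldRef environment sources: the kubelet resolves each fieldPath against the pod object and injects the value as a plain environment variable before the container starts, which is why the pod can run to Succeeded with nothing but `env` as its command. A rough sketch of the wiring, assuming a recent client-go; the function name, image, and command are illustrative, not the test's actual source.

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createDownwardAPIPod creates a pod whose environment carries its own
// name, namespace, and IP, resolved by the kubelet at container start.
func createDownwardAPIPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "downward-api-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Containers: []corev1.Container{{
				Name:    "dapi-container",
				Image:   "busybox",
				Command: []string{"sh", "-c", "env"},
				Env: []corev1.EnvVar{
					{Name: "POD_NAME", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.name"}}},
					{Name: "POD_NAMESPACE", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"}}},
					{Name: "POD_IP", ValueFrom: &corev1.EnvVarSource{
						FieldRef: &corev1.ObjectFieldSelector{FieldPath: "status.podIP"}}},
				},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
------------------------------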
•{"msg":"PASSED [sig-node] Downward API should provide pod name, namespace and IP address as env vars [NodeConformance] [Conformance]","total":278,"completed":146,"skipped":2323,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:47:47.115: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 26 21:47:47.207: INFO: PodSpec: initContainers in spec.initContainers [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:47:55.578: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-6286" for this suite. • [SLOW TEST:8.480 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should invoke init containers on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should invoke init containers on a RestartAlways pod [Conformance]","total":278,"completed":147,"skipped":2330,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:47:55.596: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 21:47:56.123: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 21:47:58.134: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126476, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126476, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126476, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126476, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 21:48:00.139: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126476, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126476, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126476, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126476, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 21:48:03.176: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the mutating configmap webhook via the AdmissionRegistration API STEP: create a configmap that should be updated by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:48:03.296: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-2242" for this suite. STEP: Destroying namespace "webhook-2242-markers" for this suite. 
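------------------------------
The "Registering the mutating configmap webhook" step above boils down to creating a MutatingWebhookConfiguration that routes configmap CREATE requests through the test's webhook service before they are persisted. A sketch of that registration, assuming a recent client-go; the configuration name, webhook path, and hostnames are illustrative, not taken from the test.

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// registerConfigMapMutatingWebhook tells the API server to send every
// configmap CREATE in for admission, letting the webhook patch the
// object (the e2e webhook adds a key to the configmap's data).
func registerConfigMapMutatingWebhook(ctx context.Context, cs kubernetes.Interface, ns string, caBundle []byte) error {
	path := "/mutating-configmaps" // illustrative path
	sideEffects := admissionregistrationv1.SideEffectClassNone
	cfg := &admissionregistrationv1.MutatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "mutate-configmap-demo"},
		Webhooks: []admissionregistrationv1.MutatingWebhook{{
			Name: "mutate-configmap.example.com",
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				Service:  &admissionregistrationv1.ServiceReference{Namespace: ns, Name: "e2e-test-webhook", Path: &path},
				CABundle: caBundle,
			},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	_, err := cs.AdmissionregistrationV1().MutatingWebhookConfigurations().Create(ctx, cfg, metav1.CreateOptions{})
	return err
}
------------------------------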
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.867 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate configmap [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate configmap [Conformance]","total":278,"completed":148,"skipped":2358,"failed":0} SSSSSSSS ------------------------------ [sig-apps] ReplicationController should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:48:03.463: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption is created STEP: When a replication controller with a matching selector is created STEP: Then the orphan pod is adopted [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:48:08.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-5634" for this suite. 
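------------------------------
Adoption in the ReplicationController spec above works because controllers claim any live pod that matches their selector and has no controller ownerReference yet. Roughly, under the same assumptions as the sketches above (names and image are illustrative):

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// adoptOrphanPod creates a bare pod carrying the RC's selector label,
// then the RC itself: instead of starting a new replica, the RC's
// controller adopts the existing pod by setting itself as its
// controller ownerReference.
func adoptOrphanPod(ctx context.Context, cs kubernetes.Interface, ns string) error {
	labels := map[string]string{"name": "pod-adoption"}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption", Labels: labels},
		Spec: corev1.PodSpec{Containers: []corev1.Container{{
			Name:  "httpd",
			Image: "docker.io/library/httpd:2.4.38-alpine",
		}}},
	}
	if _, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{}); err != nil {
		return err
	}
	replicas := int32(1)
	rc := &corev1.ReplicationController{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-adoption"},
		Spec: corev1.ReplicationControllerSpec{
			Replicas: &replicas,
			Selector: labels,
			Template: &corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels},
				Spec:       pod.Spec,
			},
		},
	}
	_, err := cs.CoreV1().ReplicationControllers(ns).Create(ctx, rc, metav1.CreateOptions{})
	return err
}
------------------------------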
• [SLOW TEST:5.132 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should adopt matching pods on creation [Conformance]","total":278,"completed":149,"skipped":2366,"failed":0} SSSSSSSSSS ------------------------------ [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:48:08.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubelet-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/kubelet.go:37 [It] should print the output to logs [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Kubelet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:48:12.701: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubelet-test-80" for this suite. •{"msg":"PASSED [k8s.io] Kubelet when scheduling a busybox command in a pod should print the output to logs [NodeConformance] [Conformance]","total":278,"completed":150,"skipped":2376,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:48:12.709: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 26 21:48:12.870: INFO: Waiting up to 5m0s for pod "pod-25d37af4-79cd-4f64-a0ee-eb0e341e9f3e" in namespace "emptydir-515" to be "success or failure" May 26 21:48:12.880: INFO: Pod "pod-25d37af4-79cd-4f64-a0ee-eb0e341e9f3e": Phase="Pending", Reason="", readiness=false. Elapsed: 9.804225ms May 26 21:48:14.883: INFO: Pod "pod-25d37af4-79cd-4f64-a0ee-eb0e341e9f3e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.012745658s May 26 21:48:16.911: INFO: Pod "pod-25d37af4-79cd-4f64-a0ee-eb0e341e9f3e": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.040153255s STEP: Saw pod success May 26 21:48:16.911: INFO: Pod "pod-25d37af4-79cd-4f64-a0ee-eb0e341e9f3e" satisfied condition "success or failure" May 26 21:48:16.914: INFO: Trying to get logs from node jerma-worker2 pod pod-25d37af4-79cd-4f64-a0ee-eb0e341e9f3e container test-container: STEP: delete the pod May 26 21:48:16.959: INFO: Waiting for pod pod-25d37af4-79cd-4f64-a0ee-eb0e341e9f3e to disappear May 26 21:48:16.975: INFO: Pod pod-25d37af4-79cd-4f64-a0ee-eb0e341e9f3e no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:48:16.975: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-515" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":151,"skipped":2414,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:48:16.983: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:48:17.056: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 26 21:48:19.982: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5684 create -f -' May 26 21:48:24.254: INFO: stderr: "" May 26 21:48:24.254: INFO: stdout: "e2e-test-crd-publish-openapi-3236-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 26 21:48:24.255: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5684 delete e2e-test-crd-publish-openapi-3236-crds test-cr' May 26 21:48:24.365: INFO: stderr: "" May 26 21:48:24.365: INFO: stdout: "e2e-test-crd-publish-openapi-3236-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" May 26 21:48:24.365: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5684 apply -f -' May 26 21:48:24.651: INFO: stderr: "" May 26 21:48:24.651: INFO: stdout: "e2e-test-crd-publish-openapi-3236-crd.crd-publish-openapi-test-empty.example.com/test-cr created\n" May 26 21:48:24.651: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-5684 delete e2e-test-crd-publish-openapi-3236-crds test-cr' May 26 21:48:24.767: INFO: stderr: "" May 26 21:48:24.767: INFO: stdout: "e2e-test-crd-publish-openapi-3236-crd.crd-publish-openapi-test-empty.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR without validation schema May 26 21:48:24.767: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-3236-crds' May 26 21:48:24.998: 
INFO: stderr: "" May 26 21:48:24.998: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-3236-crd\nVERSION: crd-publish-openapi-test-empty.example.com/v1\n\nDESCRIPTION:\n \n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:48:26.894: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5684" for this suite. • [SLOW TEST:9.919 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD without validation schema [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD without validation schema [Conformance]","total":278,"completed":152,"skipped":2421,"failed":0} SS ------------------------------ [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:48:26.902: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename aggregator STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:76 May 26 21:48:26.990: INFO: >>> kubeConfig: /root/.kube/config [It] Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the sample API server. 
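------------------------------
Before the aggregator registration below proceeds, a note on the CRD publish-openapi spec that just passed: with no validation schema on the CRD, the server publishes an empty OpenAPI document, client-side validation lets any unknown properties through, and kubectl explain prints only KIND and VERSION with a blank DESCRIPTION, exactly as in the stdout above. A sketch of such a schemaless CRD using the apiextensions v1beta1 client (matching the v1.17 server in this run; names are illustrative, and newer clusters would instead need the v1 API with a structural schema or x-kubernetes-preserve-unknown-fields):

package main

import (
	"context"

	apiextensionsv1beta1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1beta1"
	apiextensionsclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// createSchemalessCRD registers a CRD with no Validation field, so the
// API server accepts custom objects of any shape under this group/kind.
func createSchemalessCRD(ctx context.Context, cs apiextensionsclient.Interface) error {
	crd := &apiextensionsv1beta1.CustomResourceDefinition{
		ObjectMeta: metav1.ObjectMeta{Name: "foos.example.com"},
		Spec: apiextensionsv1beta1.CustomResourceDefinitionSpec{
			Group:   "example.com",
			Version: "v1",
			Scope:   apiextensionsv1beta1.NamespaceScoped,
			Names: apiextensionsv1beta1.CustomResourceDefinitionNames{
				Plural:   "foos",
				Singular: "foo",
				Kind:     "Foo",
				ListKind: "FooList",
			},
		},
	}
	_, err := cs.ApiextensionsV1beta1().CustomResourceDefinitions().Create(ctx, crd, metav1.CreateOptions{})
	return err
}
------------------------------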
May 26 21:48:28.336: INFO: deployment "sample-apiserver-deployment" doesn't have the required revision set May 26 21:48:30.539: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126508, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126508, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126508, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126508, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 21:48:32.677: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126508, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126508, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126508, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126508, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 21:48:34.544: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126508, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126508, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126508, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126508, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-apiserver-deployment-867766ffc6\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 21:48:37.079: INFO: Waited 525.878405ms for the sample-apiserver to be ready to handle requests. 
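------------------------------
The "Registering the sample API server" step whose deployment converges above comes down to an APIService object: it tells the kube-aggregator to proxy an entire group/version to a Service in front of the extension apiserver. A sketch under the usual assumptions (group, service name, and priorities are illustrative, not read from the test):

package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	apiregistrationv1 "k8s.io/kube-aggregator/pkg/apis/apiregistration/v1"
	aggregatorclient "k8s.io/kube-aggregator/pkg/client/clientset_generated/clientset"
)

// registerSampleAPIService makes the aggregator forward requests for
// wardle.example.com/v1alpha1 to the sample apiserver's Service,
// verifying its serving certificate against caBundle.
func registerSampleAPIService(ctx context.Context, ac aggregatorclient.Interface, ns string, caBundle []byte) error {
	port := int32(443)
	svc := &apiregistrationv1.APIService{
		ObjectMeta: metav1.ObjectMeta{Name: "v1alpha1.wardle.example.com"},
		Spec: apiregistrationv1.APIServiceSpec{
			Group:   "wardle.example.com",
			Version: "v1alpha1",
			Service: &apiregistrationv1.ServiceReference{
				Namespace: ns,
				Name:      "sample-api",
				Port:      &port,
			},
			CABundle:             caBundle,
			GroupPriorityMinimum: 2000,
			VersionPriority:      200,
		},
	}
	_, err := ac.ApiregistrationV1().APIServices().Create(ctx, svc, metav1.CreateOptions{})
	return err
}
------------------------------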
[AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/aggregator.go:67 [AfterEach] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:48:38.146: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "aggregator-7039" for this suite. • [SLOW TEST:11.321 seconds] [sig-api-machinery] Aggregator /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Aggregator Should be able to support the 1.10 Sample API Server using the current Aggregator [Conformance]","total":278,"completed":153,"skipped":2423,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:48:38.224: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-0234a9c3-efb0-4566-a6cf-c9f6448ee957 STEP: Creating a pod to test consume secrets May 26 21:48:38.627: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-988ed08a-68f3-4cca-b430-1264d98d85c9" in namespace "projected-8932" to be "success or failure" May 26 21:48:38.630: INFO: Pod "pod-projected-secrets-988ed08a-68f3-4cca-b430-1264d98d85c9": Phase="Pending", Reason="", readiness=false. Elapsed: 3.12031ms May 26 21:48:40.635: INFO: Pod "pod-projected-secrets-988ed08a-68f3-4cca-b430-1264d98d85c9": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007520075s May 26 21:48:42.638: INFO: Pod "pod-projected-secrets-988ed08a-68f3-4cca-b430-1264d98d85c9": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.010725252s STEP: Saw pod success May 26 21:48:42.638: INFO: Pod "pod-projected-secrets-988ed08a-68f3-4cca-b430-1264d98d85c9" satisfied condition "success or failure" May 26 21:48:42.679: INFO: Trying to get logs from node jerma-worker pod pod-projected-secrets-988ed08a-68f3-4cca-b430-1264d98d85c9 container projected-secret-volume-test: STEP: delete the pod May 26 21:48:42.734: INFO: Waiting for pod pod-projected-secrets-988ed08a-68f3-4cca-b430-1264d98d85c9 to disappear May 26 21:48:42.773: INFO: Pod pod-projected-secrets-988ed08a-68f3-4cca-b430-1264d98d85c9 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:48:42.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8932" for this suite. 
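------------------------------
The projected-secret spec above mounts the secret through a projected volume: each key of the secret is materialized as a read-only file under the mount path, and the test container simply cats one file and exits, which is why the pod goes straight to Succeeded. A sketch of that pod, with illustrative names and file key:

package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// createProjectedSecretPod mounts secretName via a projected volume and
// prints one of its keys; the kubelet writes the secret's keys as files
// under the mount path before the container starts.
func createProjectedSecretPod(ctx context.Context, cs kubernetes.Interface, ns, secretName string) error {
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{Name: "pod-projected-secrets-demo"},
		Spec: corev1.PodSpec{
			RestartPolicy: corev1.RestartPolicyNever,
			Volumes: []corev1.Volume{{
				Name: "projected-secret-volume",
				VolumeSource: corev1.VolumeSource{
					Projected: &corev1.ProjectedVolumeSource{
						Sources: []corev1.VolumeProjection{{
							Secret: &corev1.SecretProjection{
								LocalObjectReference: corev1.LocalObjectReference{Name: secretName},
							},
						}},
					},
				},
			}},
			Containers: []corev1.Container{{
				Name:    "projected-secret-volume-test",
				Image:   "busybox",
				Command: []string{"cat", "/etc/projected-secret-volume/data-1"}, // illustrative key
				VolumeMounts: []corev1.VolumeMount{{
					Name:      "projected-secret-volume",
					MountPath: "/etc/projected-secret-volume",
					ReadOnly:  true,
				}},
			}},
		},
	}
	_, err := cs.CoreV1().Pods(ns).Create(ctx, pod, metav1.CreateOptions{})
	return err
}
------------------------------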
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":154,"skipped":2445,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:48:42.781: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating replication controller my-hostname-basic-3cd15771-d616-4fa7-a24a-874597b2a117 May 26 21:48:42.956: INFO: Pod name my-hostname-basic-3cd15771-d616-4fa7-a24a-874597b2a117: Found 0 pods out of 1 May 26 21:48:47.960: INFO: Pod name my-hostname-basic-3cd15771-d616-4fa7-a24a-874597b2a117: Found 1 pods out of 1 May 26 21:48:47.960: INFO: Ensuring all pods for ReplicationController "my-hostname-basic-3cd15771-d616-4fa7-a24a-874597b2a117" are running May 26 21:48:47.962: INFO: Pod "my-hostname-basic-3cd15771-d616-4fa7-a24a-874597b2a117-rhj8d" is running (conditions: [{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 21:48:43 +0000 UTC Reason: Message:} {Type:Ready Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 21:48:45 +0000 UTC Reason: Message:} {Type:ContainersReady Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 21:48:45 +0000 UTC Reason: Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2020-05-26 21:48:42 +0000 UTC Reason: Message:}]) May 26 21:48:47.962: INFO: Trying to dial the pod May 26 21:48:52.974: INFO: Controller my-hostname-basic-3cd15771-d616-4fa7-a24a-874597b2a117: Got expected result from replica 1 [my-hostname-basic-3cd15771-d616-4fa7-a24a-874597b2a117-rhj8d]: "my-hostname-basic-3cd15771-d616-4fa7-a24a-874597b2a117-rhj8d", 1 of 1 required successes so far [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:48:52.974: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3926" for this suite. 
• [SLOW TEST:10.201 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should serve a basic image on each replica with a public image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should serve a basic image on each replica with a public image [Conformance]","total":278,"completed":155,"skipped":2485,"failed":0} SSS ------------------------------ [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:48:52.982: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should receive events on concurrent watches in same order [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting a background goroutine to produce watch events STEP: creating watches starting from each resource version of the events produced and verifying they all receive resource versions in the same order [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:48:57.239: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-2597" for this suite. 
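------------------------------
The Watchers spec above relies on a core apiserver property: all watches replay the same single event history of the underlying store, so watches opened at different starting resourceVersions must deliver the same events in the same order. A minimal sketch of opening one such watch; the resource type and printout are illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// watchConfigMapsFrom opens a watch starting at resourceVersion rv and
// prints each event's type and the object's resourceVersion; comparing
// these streams across several concurrent watches is the ordering check.
func watchConfigMapsFrom(ctx context.Context, cs kubernetes.Interface, ns, rv string) error {
	w, err := cs.CoreV1().ConfigMaps(ns).Watch(ctx, metav1.ListOptions{ResourceVersion: rv})
	if err != nil {
		return err
	}
	defer w.Stop()
	for ev := range w.ResultChan() {
		if cm, ok := ev.Object.(*corev1.ConfigMap); ok {
			fmt.Println(ev.Type, cm.ResourceVersion)
		}
	}
	return nil
}
------------------------------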
•{"msg":"PASSED [sig-api-machinery] Watchers should receive events on concurrent watches in same order [Conformance]","total":278,"completed":156,"skipped":2488,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:48:57.346: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:48:57.478: INFO: pod1.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod3", UID:"090fab0b-d076-41b4-a6f5-ffe0da9266f0", Controller:(*bool)(0xc002cf9892), BlockOwnerDeletion:(*bool)(0xc002cf9893)}} May 26 21:48:57.493: INFO: pod2.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod1", UID:"2aee08cc-f048-40b3-aee8-4183546227ed", Controller:(*bool)(0xc0055cac02), BlockOwnerDeletion:(*bool)(0xc0055cac03)}} May 26 21:48:57.499: INFO: pod3.ObjectMeta.OwnerReferences=[]v1.OwnerReference{v1.OwnerReference{APIVersion:"v1", Kind:"Pod", Name:"pod2", UID:"5dac7aa1-0fec-495b-8811-c28578305ae8", Controller:(*bool)(0xc002abe84a), BlockOwnerDeletion:(*bool)(0xc002abe84b)}} [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:49:02.565: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-2327" for this suite. 
• [SLOW TEST:5.226 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be blocked by dependency circle [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not be blocked by dependency circle [Conformance]","total":278,"completed":157,"skipped":2559,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:49:02.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 21:49:03.157: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 21:49:05.169: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126543, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126543, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126543, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126543, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 21:49:07.173: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126543, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126543, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126543, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126543, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, 
CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 21:49:10.241: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting timeout (1s) shorter than webhook latency (5s) STEP: Registering slow webhook via the AdmissionRegistration API STEP: Request fails when timeout (1s) is shorter than slow webhook latency (5s) STEP: Having no error when timeout is shorter than webhook latency and failure policy is ignore STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is longer than webhook latency STEP: Registering slow webhook via the AdmissionRegistration API STEP: Having no error when timeout is empty (defaulted to 10s in v1) STEP: Registering slow webhook via the AdmissionRegistration API [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:49:22.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8941" for this suite. STEP: Destroying namespace "webhook-8941-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:19.956 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should honor timeout [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should honor timeout [Conformance]","total":278,"completed":158,"skipped":2566,"failed":0} SSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:49:22.529: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2148 [It] Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Looking for a node to schedule stateful set and pod STEP: Creating pod with conflicting port in namespace statefulset-2148 STEP: Creating statefulset with conflicting port in namespace statefulset-2148 STEP: Waiting until pod test-pod starts running in namespace statefulset-2148 STEP: Waiting until stateful pod ss-0 is recreated and
deleted at least once in namespace statefulset-2148 May 26 21:49:28.758: INFO: Observed stateful pod in namespace: statefulset-2148, name: ss-0, uid: 75e1f30a-b7a6-4167-ae40-0807f2935ca0, status phase: Pending. Waiting for statefulset controller to delete. May 26 21:49:29.262: INFO: Observed stateful pod in namespace: statefulset-2148, name: ss-0, uid: 75e1f30a-b7a6-4167-ae40-0807f2935ca0, status phase: Failed. Waiting for statefulset controller to delete. May 26 21:49:29.322: INFO: Observed stateful pod in namespace: statefulset-2148, name: ss-0, uid: 75e1f30a-b7a6-4167-ae40-0807f2935ca0, status phase: Failed. Waiting for statefulset controller to delete. May 26 21:49:29.335: INFO: Observed delete event for stateful pod ss-0 in namespace statefulset-2148 STEP: Removing pod with conflicting port in namespace statefulset-2148 STEP: Waiting until stateful pod ss-0 is recreated in namespace statefulset-2148 and is in running state [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 26 21:49:35.417: INFO: Deleting all statefulset in ns statefulset-2148 May 26 21:49:35.420: INFO: Scaling statefulset ss to 0 May 26 21:49:55.444: INFO: Waiting for statefulset status.replicas updated to 0 May 26 21:49:55.447: INFO: Deleting statefulset ss [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:49:55.480: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-2148" for this suite. • [SLOW TEST:32.957 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 Should recreate evicted statefulset [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Should recreate evicted statefulset [Conformance]","total":278,"completed":159,"skipped":2572,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:49:55.487: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 21:49:55.963: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 21:49:57.972: INFO: deployment status:
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126595, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126595, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126596, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126595, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 21:50:01.051: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a webhook that the API server cannot talk to, with fail closed policy, via the AdmissionRegistration API STEP: create a namespace for the webhook STEP: create a configmap that should be unconditionally rejected by the webhook [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:50:01.157: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-3330" for this suite. STEP: Destroying namespace "webhook-3330-markers" for this suite.
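------------------------------
Fail-closed behavior is the one knob this spec turns: with FailurePolicy set to Fail and a webhook endpoint the API server cannot reach, every matching request is rejected outright, which is why the configmap create above must fail. A registration sketch under the usual assumptions (names and path are illustrative, not taken from the test):

package main

import (
	"context"

	admissionregistrationv1 "k8s.io/api/admissionregistration/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// registerFailClosedWebhook points a validating webhook at a service
// that never answers; FailurePolicy Fail turns each call failure into
// a rejection of the request being admitted.
func registerFailClosedWebhook(ctx context.Context, cs kubernetes.Interface, ns string, caBundle []byte) error {
	fail := admissionregistrationv1.Fail
	sideEffects := admissionregistrationv1.SideEffectClassNone
	path := "/configmaps" // illustrative
	cfg := &admissionregistrationv1.ValidatingWebhookConfiguration{
		ObjectMeta: metav1.ObjectMeta{Name: "fail-closed-demo"},
		Webhooks: []admissionregistrationv1.ValidatingWebhook{{
			Name:          "fail-closed.example.com",
			FailurePolicy: &fail,
			Rules: []admissionregistrationv1.RuleWithOperations{{
				Operations: []admissionregistrationv1.OperationType{admissionregistrationv1.Create},
				Rule: admissionregistrationv1.Rule{
					APIGroups:   []string{""},
					APIVersions: []string{"v1"},
					Resources:   []string{"configmaps"},
				},
			}},
			ClientConfig: admissionregistrationv1.WebhookClientConfig{
				// A service with no ready endpoints: calls time out.
				Service:  &admissionregistrationv1.ServiceReference{Namespace: ns, Name: "unreachable-webhook", Path: &path},
				CABundle: caBundle,
			},
			SideEffects:             &sideEffects,
			AdmissionReviewVersions: []string{"v1", "v1beta1"},
		}},
	}
	_, err := cs.AdmissionregistrationV1().ValidatingWebhookConfigurations().Create(ctx, cfg, metav1.CreateOptions{})
	return err
}
------------------------------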
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.898 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should unconditionally reject operations on fail closed webhook [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should unconditionally reject operations on fail closed webhook [Conformance]","total":278,"completed":160,"skipped":2604,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:50:01.386: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename e2e-kubelet-etc-hosts STEP: Waiting for a default service account to be provisioned in namespace [It] should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Setting up the test STEP: Creating hostNetwork=false pod STEP: Creating hostNetwork=true pod STEP: Running the test STEP: Verifying /etc/hosts of container is kubelet-managed for pod with hostNetwork=false May 26 21:50:11.518: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2588 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:50:11.518: INFO: >>> kubeConfig: /root/.kube/config I0526 21:50:11.550604 6 log.go:172] (0xc0023ac420) (0xc0026db7c0) Create stream I0526 21:50:11.550629 6 log.go:172] (0xc0023ac420) (0xc0026db7c0) Stream added, broadcasting: 1 I0526 21:50:11.552264 6 log.go:172] (0xc0023ac420) Reply frame received for 1 I0526 21:50:11.552318 6 log.go:172] (0xc0023ac420) (0xc0016eedc0) Create stream I0526 21:50:11.552331 6 log.go:172] (0xc0023ac420) (0xc0016eedc0) Stream added, broadcasting: 3 I0526 21:50:11.553729 6 log.go:172] (0xc0023ac420) Reply frame received for 3 I0526 21:50:11.553784 6 log.go:172] (0xc0023ac420) (0xc0026db860) Create stream I0526 21:50:11.553806 6 log.go:172] (0xc0023ac420) (0xc0026db860) Stream added, broadcasting: 5 I0526 21:50:11.554760 6 log.go:172] (0xc0023ac420) Reply frame received for 5 I0526 21:50:11.650536 6 log.go:172] (0xc0023ac420) Data frame received for 3 I0526 21:50:11.650567 6 log.go:172] (0xc0016eedc0) (3) Data frame handling I0526 21:50:11.650574 6 log.go:172] (0xc0016eedc0) (3) Data frame sent I0526 21:50:11.650579 6 log.go:172] (0xc0023ac420) Data frame received for 3 I0526 21:50:11.650582 6 log.go:172] (0xc0016eedc0) (3) Data frame handling I0526 21:50:11.650602 6 log.go:172] (0xc0023ac420) Data frame received for 5 I0526 21:50:11.650628 6 log.go:172] (0xc0026db860) (5) Data frame handling I0526 21:50:11.652291 6 log.go:172] (0xc0023ac420) Data frame received for 1 I0526 21:50:11.652307 6 log.go:172] (0xc0026db7c0) 
(1) Data frame handling I0526 21:50:11.652315 6 log.go:172] (0xc0026db7c0) (1) Data frame sent I0526 21:50:11.652324 6 log.go:172] (0xc0023ac420) (0xc0026db7c0) Stream removed, broadcasting: 1 I0526 21:50:11.652369 6 log.go:172] (0xc0023ac420) Go away received I0526 21:50:11.652488 6 log.go:172] (0xc0023ac420) (0xc0026db7c0) Stream removed, broadcasting: 1 I0526 21:50:11.652545 6 log.go:172] (0xc0023ac420) (0xc0016eedc0) Stream removed, broadcasting: 3 I0526 21:50:11.652584 6 log.go:172] (0xc0023ac420) (0xc0026db860) Stream removed, broadcasting: 5 May 26 21:50:11.652: INFO: Exec stderr: "" May 26 21:50:11.652: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2588 PodName:test-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:50:11.652: INFO: >>> kubeConfig: /root/.kube/config I0526 21:50:11.688691 6 log.go:172] (0xc001af4e70) (0xc0019df180) Create stream I0526 21:50:11.688729 6 log.go:172] (0xc001af4e70) (0xc0019df180) Stream added, broadcasting: 1 I0526 21:50:11.690895 6 log.go:172] (0xc001af4e70) Reply frame received for 1 I0526 21:50:11.690943 6 log.go:172] (0xc001af4e70) (0xc002177f40) Create stream I0526 21:50:11.690957 6 log.go:172] (0xc001af4e70) (0xc002177f40) Stream added, broadcasting: 3 I0526 21:50:11.692077 6 log.go:172] (0xc001af4e70) Reply frame received for 3 I0526 21:50:11.692114 6 log.go:172] (0xc001af4e70) (0xc0019df360) Create stream I0526 21:50:11.692130 6 log.go:172] (0xc001af4e70) (0xc0019df360) Stream added, broadcasting: 5 I0526 21:50:11.693668 6 log.go:172] (0xc001af4e70) Reply frame received for 5 I0526 21:50:11.770867 6 log.go:172] (0xc001af4e70) Data frame received for 5 I0526 21:50:11.770908 6 log.go:172] (0xc0019df360) (5) Data frame handling I0526 21:50:11.770962 6 log.go:172] (0xc001af4e70) Data frame received for 3 I0526 21:50:11.771012 6 log.go:172] (0xc002177f40) (3) Data frame handling I0526 21:50:11.771043 6 log.go:172] (0xc002177f40) (3) Data frame sent I0526 21:50:11.771066 6 log.go:172] (0xc001af4e70) Data frame received for 3 I0526 21:50:11.771086 6 log.go:172] (0xc002177f40) (3) Data frame handling I0526 21:50:11.772890 6 log.go:172] (0xc001af4e70) Data frame received for 1 I0526 21:50:11.772916 6 log.go:172] (0xc0019df180) (1) Data frame handling I0526 21:50:11.772928 6 log.go:172] (0xc0019df180) (1) Data frame sent I0526 21:50:11.772941 6 log.go:172] (0xc001af4e70) (0xc0019df180) Stream removed, broadcasting: 1 I0526 21:50:11.773010 6 log.go:172] (0xc001af4e70) Go away received I0526 21:50:11.773029 6 log.go:172] (0xc001af4e70) (0xc0019df180) Stream removed, broadcasting: 1 I0526 21:50:11.773044 6 log.go:172] (0xc001af4e70) (0xc002177f40) Stream removed, broadcasting: 3 I0526 21:50:11.773349 6 log.go:172] (0xc001af4e70) (0xc0019df360) Stream removed, broadcasting: 5 May 26 21:50:11.773: INFO: Exec stderr: "" May 26 21:50:11.773: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2588 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:50:11.773: INFO: >>> kubeConfig: /root/.kube/config I0526 21:50:11.823733 6 log.go:172] (0xc0059482c0) (0xc0016ef040) Create stream I0526 21:50:11.823824 6 log.go:172] (0xc0059482c0) (0xc0016ef040) Stream added, broadcasting: 1 I0526 21:50:11.826810 6 log.go:172] (0xc0059482c0) Reply frame received for 1 I0526 21:50:11.826871 6 log.go:172] (0xc0059482c0) (0xc0016ef180) Create stream I0526 21:50:11.826890 6 log.go:172] 
(0xc0059482c0) (0xc0016ef180) Stream added, broadcasting: 3 I0526 21:50:11.829705 6 log.go:172] (0xc0059482c0) Reply frame received for 3 I0526 21:50:11.829743 6 log.go:172] (0xc0059482c0) (0xc0016ef220) Create stream I0526 21:50:11.829762 6 log.go:172] (0xc0059482c0) (0xc0016ef220) Stream added, broadcasting: 5 I0526 21:50:11.831164 6 log.go:172] (0xc0059482c0) Reply frame received for 5 I0526 21:50:11.909887 6 log.go:172] (0xc0059482c0) Data frame received for 5 I0526 21:50:11.909936 6 log.go:172] (0xc0059482c0) Data frame received for 3 I0526 21:50:11.909980 6 log.go:172] (0xc0016ef180) (3) Data frame handling I0526 21:50:11.909996 6 log.go:172] (0xc0016ef180) (3) Data frame sent I0526 21:50:11.910010 6 log.go:172] (0xc0059482c0) Data frame received for 3 I0526 21:50:11.910022 6 log.go:172] (0xc0016ef180) (3) Data frame handling I0526 21:50:11.910059 6 log.go:172] (0xc0016ef220) (5) Data frame handling I0526 21:50:11.911501 6 log.go:172] (0xc0059482c0) Data frame received for 1 I0526 21:50:11.911535 6 log.go:172] (0xc0016ef040) (1) Data frame handling I0526 21:50:11.911567 6 log.go:172] (0xc0016ef040) (1) Data frame sent I0526 21:50:11.911591 6 log.go:172] (0xc0059482c0) (0xc0016ef040) Stream removed, broadcasting: 1 I0526 21:50:11.911752 6 log.go:172] (0xc0059482c0) (0xc0016ef040) Stream removed, broadcasting: 1 I0526 21:50:11.911777 6 log.go:172] (0xc0059482c0) (0xc0016ef180) Stream removed, broadcasting: 3 I0526 21:50:11.911791 6 log.go:172] (0xc0059482c0) (0xc0016ef220) Stream removed, broadcasting: 5 May 26 21:50:11.911: INFO: Exec stderr: "" May 26 21:50:11.911: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2588 PodName:test-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:50:11.911: INFO: >>> kubeConfig: /root/.kube/config I0526 21:50:11.913417 6 log.go:172] (0xc0059482c0) Go away received I0526 21:50:11.946487 6 log.go:172] (0xc0007bea50) (0xc00274c640) Create stream I0526 21:50:11.946513 6 log.go:172] (0xc0007bea50) (0xc00274c640) Stream added, broadcasting: 1 I0526 21:50:11.948067 6 log.go:172] (0xc0007bea50) Reply frame received for 1 I0526 21:50:11.948099 6 log.go:172] (0xc0007bea50) (0xc0019df680) Create stream I0526 21:50:11.948112 6 log.go:172] (0xc0007bea50) (0xc0019df680) Stream added, broadcasting: 3 I0526 21:50:11.949038 6 log.go:172] (0xc0007bea50) Reply frame received for 3 I0526 21:50:11.949094 6 log.go:172] (0xc0007bea50) (0xc00274c6e0) Create stream I0526 21:50:11.949272 6 log.go:172] (0xc0007bea50) (0xc00274c6e0) Stream added, broadcasting: 5 I0526 21:50:11.950226 6 log.go:172] (0xc0007bea50) Reply frame received for 5 I0526 21:50:12.025866 6 log.go:172] (0xc0007bea50) Data frame received for 3 I0526 21:50:12.025913 6 log.go:172] (0xc0019df680) (3) Data frame handling I0526 21:50:12.025928 6 log.go:172] (0xc0019df680) (3) Data frame sent I0526 21:50:12.025942 6 log.go:172] (0xc0007bea50) Data frame received for 3 I0526 21:50:12.025954 6 log.go:172] (0xc0019df680) (3) Data frame handling I0526 21:50:12.025971 6 log.go:172] (0xc0007bea50) Data frame received for 5 I0526 21:50:12.025984 6 log.go:172] (0xc00274c6e0) (5) Data frame handling I0526 21:50:12.027151 6 log.go:172] (0xc0007bea50) Data frame received for 1 I0526 21:50:12.027176 6 log.go:172] (0xc00274c640) (1) Data frame handling I0526 21:50:12.027193 6 log.go:172] (0xc00274c640) (1) Data frame sent I0526 21:50:12.027208 6 log.go:172] (0xc0007bea50) (0xc00274c640) Stream removed, broadcasting: 1 I0526 
21:50:12.027223 6 log.go:172] (0xc0007bea50) Go away received I0526 21:50:12.027352 6 log.go:172] (0xc0007bea50) (0xc00274c640) Stream removed, broadcasting: 1 I0526 21:50:12.027376 6 log.go:172] (0xc0007bea50) (0xc0019df680) Stream removed, broadcasting: 3 I0526 21:50:12.027386 6 log.go:172] (0xc0007bea50) (0xc00274c6e0) Stream removed, broadcasting: 5 May 26 21:50:12.027: INFO: Exec stderr: "" STEP: Verifying /etc/hosts of container is not kubelet-managed since container specifies /etc/hosts mount May 26 21:50:12.027: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2588 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:50:12.027: INFO: >>> kubeConfig: /root/.kube/config I0526 21:50:12.053714 6 log.go:172] (0xc0025a6000) (0xc001aa7360) Create stream I0526 21:50:12.053742 6 log.go:172] (0xc0025a6000) (0xc001aa7360) Stream added, broadcasting: 1 I0526 21:50:12.055393 6 log.go:172] (0xc0025a6000) Reply frame received for 1 I0526 21:50:12.055426 6 log.go:172] (0xc0025a6000) (0xc0019df7c0) Create stream I0526 21:50:12.055438 6 log.go:172] (0xc0025a6000) (0xc0019df7c0) Stream added, broadcasting: 3 I0526 21:50:12.056179 6 log.go:172] (0xc0025a6000) Reply frame received for 3 I0526 21:50:12.056208 6 log.go:172] (0xc0025a6000) (0xc00274c780) Create stream I0526 21:50:12.056218 6 log.go:172] (0xc0025a6000) (0xc00274c780) Stream added, broadcasting: 5 I0526 21:50:12.056964 6 log.go:172] (0xc0025a6000) Reply frame received for 5 I0526 21:50:12.125412 6 log.go:172] (0xc0025a6000) Data frame received for 5 I0526 21:50:12.125453 6 log.go:172] (0xc00274c780) (5) Data frame handling I0526 21:50:12.125485 6 log.go:172] (0xc0025a6000) Data frame received for 3 I0526 21:50:12.125513 6 log.go:172] (0xc0019df7c0) (3) Data frame handling I0526 21:50:12.125530 6 log.go:172] (0xc0019df7c0) (3) Data frame sent I0526 21:50:12.125541 6 log.go:172] (0xc0025a6000) Data frame received for 3 I0526 21:50:12.125553 6 log.go:172] (0xc0019df7c0) (3) Data frame handling I0526 21:50:12.126510 6 log.go:172] (0xc0025a6000) Data frame received for 1 I0526 21:50:12.126541 6 log.go:172] (0xc001aa7360) (1) Data frame handling I0526 21:50:12.126729 6 log.go:172] (0xc001aa7360) (1) Data frame sent I0526 21:50:12.126758 6 log.go:172] (0xc0025a6000) (0xc001aa7360) Stream removed, broadcasting: 1 I0526 21:50:12.126836 6 log.go:172] (0xc0025a6000) Go away received I0526 21:50:12.126884 6 log.go:172] (0xc0025a6000) (0xc001aa7360) Stream removed, broadcasting: 1 I0526 21:50:12.126909 6 log.go:172] (0xc0025a6000) (0xc0019df7c0) Stream removed, broadcasting: 3 I0526 21:50:12.126927 6 log.go:172] (0xc0025a6000) (0xc00274c780) Stream removed, broadcasting: 5 May 26 21:50:12.126: INFO: Exec stderr: "" May 26 21:50:12.126: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2588 PodName:test-pod ContainerName:busybox-3 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:50:12.127: INFO: >>> kubeConfig: /root/.kube/config I0526 21:50:12.158782 6 log.go:172] (0xc001af5550) (0xc0019dfcc0) Create stream I0526 21:50:12.158823 6 log.go:172] (0xc001af5550) (0xc0019dfcc0) Stream added, broadcasting: 1 I0526 21:50:12.160840 6 log.go:172] (0xc001af5550) Reply frame received for 1 I0526 21:50:12.160880 6 log.go:172] (0xc001af5550) (0xc0016ef2c0) Create stream I0526 21:50:12.160895 6 log.go:172] (0xc001af5550) (0xc0016ef2c0) Stream added, broadcasting: 3 I0526 21:50:12.162115 6 log.go:172] 
(0xc001af5550) Reply frame received for 3 I0526 21:50:12.162156 6 log.go:172] (0xc001af5550) (0xc00274c820) Create stream I0526 21:50:12.162171 6 log.go:172] (0xc001af5550) (0xc00274c820) Stream added, broadcasting: 5 I0526 21:50:12.163165 6 log.go:172] (0xc001af5550) Reply frame received for 5 I0526 21:50:12.217854 6 log.go:172] (0xc001af5550) Data frame received for 5 I0526 21:50:12.217934 6 log.go:172] (0xc00274c820) (5) Data frame handling I0526 21:50:12.217968 6 log.go:172] (0xc001af5550) Data frame received for 3 I0526 21:50:12.217993 6 log.go:172] (0xc0016ef2c0) (3) Data frame handling I0526 21:50:12.218023 6 log.go:172] (0xc0016ef2c0) (3) Data frame sent I0526 21:50:12.218044 6 log.go:172] (0xc001af5550) Data frame received for 3 I0526 21:50:12.218056 6 log.go:172] (0xc0016ef2c0) (3) Data frame handling I0526 21:50:12.219656 6 log.go:172] (0xc001af5550) Data frame received for 1 I0526 21:50:12.219687 6 log.go:172] (0xc0019dfcc0) (1) Data frame handling I0526 21:50:12.219705 6 log.go:172] (0xc0019dfcc0) (1) Data frame sent I0526 21:50:12.219720 6 log.go:172] (0xc001af5550) (0xc0019dfcc0) Stream removed, broadcasting: 1 I0526 21:50:12.219742 6 log.go:172] (0xc001af5550) Go away received I0526 21:50:12.219891 6 log.go:172] (0xc001af5550) (0xc0019dfcc0) Stream removed, broadcasting: 1 I0526 21:50:12.219915 6 log.go:172] (0xc001af5550) (0xc0016ef2c0) Stream removed, broadcasting: 3 I0526 21:50:12.219925 6 log.go:172] (0xc001af5550) (0xc00274c820) Stream removed, broadcasting: 5 May 26 21:50:12.219: INFO: Exec stderr: "" STEP: Verifying /etc/hosts content of container is not kubelet-managed for pod with hostNetwork=true May 26 21:50:12.220: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2588 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:50:12.220: INFO: >>> kubeConfig: /root/.kube/config I0526 21:50:12.251962 6 log.go:172] (0xc0023aca50) (0xc0026dbae0) Create stream I0526 21:50:12.252005 6 log.go:172] (0xc0023aca50) (0xc0026dbae0) Stream added, broadcasting: 1 I0526 21:50:12.254039 6 log.go:172] (0xc0023aca50) Reply frame received for 1 I0526 21:50:12.254090 6 log.go:172] (0xc0023aca50) (0xc0016ef5e0) Create stream I0526 21:50:12.254105 6 log.go:172] (0xc0023aca50) (0xc0016ef5e0) Stream added, broadcasting: 3 I0526 21:50:12.255308 6 log.go:172] (0xc0023aca50) Reply frame received for 3 I0526 21:50:12.255354 6 log.go:172] (0xc0023aca50) (0xc0016ef680) Create stream I0526 21:50:12.255370 6 log.go:172] (0xc0023aca50) (0xc0016ef680) Stream added, broadcasting: 5 I0526 21:50:12.256530 6 log.go:172] (0xc0023aca50) Reply frame received for 5 I0526 21:50:12.312188 6 log.go:172] (0xc0023aca50) Data frame received for 3 I0526 21:50:12.312233 6 log.go:172] (0xc0016ef5e0) (3) Data frame handling I0526 21:50:12.312246 6 log.go:172] (0xc0016ef5e0) (3) Data frame sent I0526 21:50:12.312259 6 log.go:172] (0xc0023aca50) Data frame received for 3 I0526 21:50:12.312268 6 log.go:172] (0xc0016ef5e0) (3) Data frame handling I0526 21:50:12.312282 6 log.go:172] (0xc0023aca50) Data frame received for 5 I0526 21:50:12.312292 6 log.go:172] (0xc0016ef680) (5) Data frame handling I0526 21:50:12.313981 6 log.go:172] (0xc0023aca50) Data frame received for 1 I0526 21:50:12.314011 6 log.go:172] (0xc0026dbae0) (1) Data frame handling I0526 21:50:12.314030 6 log.go:172] (0xc0026dbae0) (1) Data frame sent I0526 21:50:12.314053 6 log.go:172] (0xc0023aca50) (0xc0026dbae0) Stream removed, broadcasting: 1 
I0526 21:50:12.314073 6 log.go:172] (0xc0023aca50) Go away received I0526 21:50:12.314240 6 log.go:172] (0xc0023aca50) (0xc0026dbae0) Stream removed, broadcasting: 1 I0526 21:50:12.314262 6 log.go:172] (0xc0023aca50) (0xc0016ef5e0) Stream removed, broadcasting: 3 I0526 21:50:12.314271 6 log.go:172] (0xc0023aca50) (0xc0016ef680) Stream removed, broadcasting: 5 May 26 21:50:12.314: INFO: Exec stderr: "" May 26 21:50:12.314: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2588 PodName:test-host-network-pod ContainerName:busybox-1 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:50:12.314: INFO: >>> kubeConfig: /root/.kube/config I0526 21:50:12.347496 6 log.go:172] (0xc0023ad080) (0xc0026dbcc0) Create stream I0526 21:50:12.347534 6 log.go:172] (0xc0023ad080) (0xc0026dbcc0) Stream added, broadcasting: 1 I0526 21:50:12.349632 6 log.go:172] (0xc0023ad080) Reply frame received for 1 I0526 21:50:12.349659 6 log.go:172] (0xc0023ad080) (0xc00274c8c0) Create stream I0526 21:50:12.349670 6 log.go:172] (0xc0023ad080) (0xc00274c8c0) Stream added, broadcasting: 3 I0526 21:50:12.350549 6 log.go:172] (0xc0023ad080) Reply frame received for 3 I0526 21:50:12.350603 6 log.go:172] (0xc0023ad080) (0xc001aa74a0) Create stream I0526 21:50:12.350627 6 log.go:172] (0xc0023ad080) (0xc001aa74a0) Stream added, broadcasting: 5 I0526 21:50:12.351634 6 log.go:172] (0xc0023ad080) Reply frame received for 5 I0526 21:50:12.418841 6 log.go:172] (0xc0023ad080) Data frame received for 5 I0526 21:50:12.418889 6 log.go:172] (0xc001aa74a0) (5) Data frame handling I0526 21:50:12.418934 6 log.go:172] (0xc0023ad080) Data frame received for 3 I0526 21:50:12.418956 6 log.go:172] (0xc00274c8c0) (3) Data frame handling I0526 21:50:12.418968 6 log.go:172] (0xc00274c8c0) (3) Data frame sent I0526 21:50:12.418973 6 log.go:172] (0xc0023ad080) Data frame received for 3 I0526 21:50:12.418977 6 log.go:172] (0xc00274c8c0) (3) Data frame handling I0526 21:50:12.420677 6 log.go:172] (0xc0023ad080) Data frame received for 1 I0526 21:50:12.420711 6 log.go:172] (0xc0026dbcc0) (1) Data frame handling I0526 21:50:12.420735 6 log.go:172] (0xc0026dbcc0) (1) Data frame sent I0526 21:50:12.420765 6 log.go:172] (0xc0023ad080) (0xc0026dbcc0) Stream removed, broadcasting: 1 I0526 21:50:12.420793 6 log.go:172] (0xc0023ad080) Go away received I0526 21:50:12.420942 6 log.go:172] (0xc0023ad080) (0xc0026dbcc0) Stream removed, broadcasting: 1 I0526 21:50:12.420974 6 log.go:172] (0xc0023ad080) (0xc00274c8c0) Stream removed, broadcasting: 3 I0526 21:50:12.420996 6 log.go:172] (0xc0023ad080) (0xc001aa74a0) Stream removed, broadcasting: 5 May 26 21:50:12.421: INFO: Exec stderr: "" May 26 21:50:12.421: INFO: ExecWithOptions {Command:[cat /etc/hosts] Namespace:e2e-kubelet-etc-hosts-2588 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:50:12.421: INFO: >>> kubeConfig: /root/.kube/config I0526 21:50:12.458792 6 log.go:172] (0xc005948a50) (0xc0016efae0) Create stream I0526 21:50:12.458831 6 log.go:172] (0xc005948a50) (0xc0016efae0) Stream added, broadcasting: 1 I0526 21:50:12.461508 6 log.go:172] (0xc005948a50) Reply frame received for 1 I0526 21:50:12.461547 6 log.go:172] (0xc005948a50) (0xc001aa75e0) Create stream I0526 21:50:12.461583 6 log.go:172] (0xc005948a50) (0xc001aa75e0) Stream added, broadcasting: 3 I0526 21:50:12.462577 6 log.go:172] (0xc005948a50) Reply frame received for 3 I0526 21:50:12.462636 6 
log.go:172] (0xc005948a50) (0xc001aa7720) Create stream I0526 21:50:12.462672 6 log.go:172] (0xc005948a50) (0xc001aa7720) Stream added, broadcasting: 5 I0526 21:50:12.463568 6 log.go:172] (0xc005948a50) Reply frame received for 5 I0526 21:50:12.516992 6 log.go:172] (0xc005948a50) Data frame received for 3 I0526 21:50:12.517048 6 log.go:172] (0xc001aa75e0) (3) Data frame handling I0526 21:50:12.517076 6 log.go:172] (0xc001aa75e0) (3) Data frame sent I0526 21:50:12.517088 6 log.go:172] (0xc005948a50) Data frame received for 3 I0526 21:50:12.517100 6 log.go:172] (0xc001aa75e0) (3) Data frame handling I0526 21:50:12.517325 6 log.go:172] (0xc005948a50) Data frame received for 5 I0526 21:50:12.517367 6 log.go:172] (0xc001aa7720) (5) Data frame handling I0526 21:50:12.518586 6 log.go:172] (0xc005948a50) Data frame received for 1 I0526 21:50:12.518609 6 log.go:172] (0xc0016efae0) (1) Data frame handling I0526 21:50:12.518633 6 log.go:172] (0xc0016efae0) (1) Data frame sent I0526 21:50:12.518657 6 log.go:172] (0xc005948a50) (0xc0016efae0) Stream removed, broadcasting: 1 I0526 21:50:12.518686 6 log.go:172] (0xc005948a50) Go away received I0526 21:50:12.518815 6 log.go:172] (0xc005948a50) (0xc0016efae0) Stream removed, broadcasting: 1 I0526 21:50:12.518854 6 log.go:172] (0xc005948a50) (0xc001aa75e0) Stream removed, broadcasting: 3 I0526 21:50:12.518885 6 log.go:172] (0xc005948a50) (0xc001aa7720) Stream removed, broadcasting: 5 May 26 21:50:12.518: INFO: Exec stderr: "" May 26 21:50:12.518: INFO: ExecWithOptions {Command:[cat /etc/hosts-original] Namespace:e2e-kubelet-etc-hosts-2588 PodName:test-host-network-pod ContainerName:busybox-2 Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 21:50:12.518: INFO: >>> kubeConfig: /root/.kube/config I0526 21:50:12.557988 6 log.go:172] (0xc0021f1e40) (0xc001aa7b80) Create stream I0526 21:50:12.558019 6 log.go:172] (0xc0021f1e40) (0xc001aa7b80) Stream added, broadcasting: 1 I0526 21:50:12.560562 6 log.go:172] (0xc0021f1e40) Reply frame received for 1 I0526 21:50:12.560621 6 log.go:172] (0xc0021f1e40) (0xc00274c960) Create stream I0526 21:50:12.560647 6 log.go:172] (0xc0021f1e40) (0xc00274c960) Stream added, broadcasting: 3 I0526 21:50:12.562106 6 log.go:172] (0xc0021f1e40) Reply frame received for 3 I0526 21:50:12.562145 6 log.go:172] (0xc0021f1e40) (0xc0016efb80) Create stream I0526 21:50:12.562160 6 log.go:172] (0xc0021f1e40) (0xc0016efb80) Stream added, broadcasting: 5 I0526 21:50:12.563108 6 log.go:172] (0xc0021f1e40) Reply frame received for 5 I0526 21:50:12.633982 6 log.go:172] (0xc0021f1e40) Data frame received for 5 I0526 21:50:12.634038 6 log.go:172] (0xc0016efb80) (5) Data frame handling I0526 21:50:12.634078 6 log.go:172] (0xc0021f1e40) Data frame received for 3 I0526 21:50:12.634096 6 log.go:172] (0xc00274c960) (3) Data frame handling I0526 21:50:12.634117 6 log.go:172] (0xc00274c960) (3) Data frame sent I0526 21:50:12.634135 6 log.go:172] (0xc0021f1e40) Data frame received for 3 I0526 21:50:12.634152 6 log.go:172] (0xc00274c960) (3) Data frame handling I0526 21:50:12.635403 6 log.go:172] (0xc0021f1e40) Data frame received for 1 I0526 21:50:12.635420 6 log.go:172] (0xc001aa7b80) (1) Data frame handling I0526 21:50:12.635432 6 log.go:172] (0xc001aa7b80) (1) Data frame sent I0526 21:50:12.635445 6 log.go:172] (0xc0021f1e40) (0xc001aa7b80) Stream removed, broadcasting: 1 I0526 21:50:12.635474 6 log.go:172] (0xc0021f1e40) Go away received I0526 21:50:12.635610 6 log.go:172] (0xc0021f1e40) (0xc001aa7b80) Stream removed, 
broadcasting: 1 I0526 21:50:12.635632 6 log.go:172] (0xc0021f1e40) (0xc00274c960) Stream removed, broadcasting: 3 I0526 21:50:12.635640 6 log.go:172] (0xc0021f1e40) (0xc0016efb80) Stream removed, broadcasting: 5 May 26 21:50:12.635: INFO: Exec stderr: "" [AfterEach] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:50:12.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "e2e-kubelet-etc-hosts-2588" for this suite. • [SLOW TEST:11.262 seconds] [k8s.io] KubeletManagedEtcHosts /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] KubeletManagedEtcHosts should test kubelet managed /etc/hosts file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":161,"skipped":2639,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Services should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:50:12.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service multi-endpoint-test in namespace services-1331 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1331 to expose endpoints map[] May 26 21:50:12.866: INFO: Get endpoints failed (23.109568ms elapsed, ignoring for 5s): endpoints "multi-endpoint-test" not found May 26 21:50:13.871: INFO: successfully validated that service multi-endpoint-test in namespace services-1331 exposes endpoints map[] (1.027452556s elapsed) STEP: Creating pod pod1 in namespace services-1331 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1331 to expose endpoints map[pod1:[100]] May 26 21:50:16.939: INFO: successfully validated that service multi-endpoint-test in namespace services-1331 exposes endpoints map[pod1:[100]] (3.060592503s elapsed) STEP: Creating pod pod2 in namespace services-1331 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1331 to expose endpoints map[pod1:[100] pod2:[101]] May 26 21:50:21.652: INFO: successfully validated that service multi-endpoint-test in namespace services-1331 exposes endpoints map[pod1:[100] pod2:[101]] (4.709278659s elapsed) STEP: Deleting pod pod1 in namespace services-1331 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1331 to expose endpoints map[pod2:[101]] May 26 21:50:22.703: INFO: successfully validated that service multi-endpoint-test in namespace services-1331 exposes endpoints map[pod2:[101]] (1.046029536s elapsed) STEP: Deleting pod pod2 in 
namespace services-1331 STEP: waiting up to 3m0s for service multi-endpoint-test in namespace services-1331 to expose endpoints map[] May 26 21:50:23.773: INFO: successfully validated that service multi-endpoint-test in namespace services-1331 exposes endpoints map[] (1.065091156s elapsed) [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:50:23.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-1331" for this suite. [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:11.217 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should serve multiport endpoints from pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should serve multiport endpoints from pods [Conformance]","total":278,"completed":162,"skipped":2679,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:50:23.867: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-73396b87-71ed-418a-ab8a-9ad196fb1ae6 in namespace container-probe-6699 May 26 21:50:28.309: INFO: Started pod liveness-73396b87-71ed-418a-ab8a-9ad196fb1ae6 in namespace container-probe-6699 STEP: checking the pod's current state and verifying that restartCount is present May 26 21:50:28.312: INFO: Initial restart count of pod liveness-73396b87-71ed-418a-ab8a-9ad196fb1ae6 is 0 May 26 21:50:52.444: INFO: Restart count of pod container-probe-6699/liveness-73396b87-71ed-418a-ab8a-9ad196fb1ae6 is now 1 (24.131814124s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:50:52.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-6699" for this suite. 
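The restart the framework waits for here is driven entirely by the pod's livenessProbe spec: the kubelet polls /healthz, and once the handler starts failing it kills and restarts the container, bumping restartCount. A minimal sketch of an equivalent pod, assuming the stock upstream docs image whose /healthz deliberately starts failing after about ten seconds (pod name and image are illustrative, not the test's own):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-http              # hypothetical name
spec:
  containers:
  - name: liveness
    image: k8s.gcr.io/liveness     # docs image: /healthz returns 500 after ~10s
    args: ["/server"]
    livenessProbe:
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 3
      periodSeconds: 3
EOF
# watch the restart counter climb past 0, which is exactly what the test asserts:
kubectl get pod liveness-http -o jsonpath='{.status.containerStatuses[0].restartCount}'
```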
• [SLOW TEST:28.605 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should be restarted with a /healthz http liveness probe [NodeConformance] [Conformance]","total":278,"completed":163,"skipped":2703,"failed":0} S ------------------------------ [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:50:52.472: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service nodeport-service with the type=NodePort in namespace services-2916 STEP: Creating active service to test reachability when its FQDN is referred as externalName for another service STEP: creating service externalsvc in namespace services-2916 STEP: creating replication controller externalsvc in namespace services-2916 I0526 21:50:53.242373 6 runners.go:189] Created replication controller with name: externalsvc, namespace: services-2916, replica count: 2 I0526 21:50:56.292866 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 21:50:59.293371 6 runners.go:189] externalsvc Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady STEP: changing the NodePort service to type=ExternalName May 26 21:50:59.380: INFO: Creating new exec pod May 26 21:51:03.529: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-2916 execpod9b6lr -- /bin/sh -x -c nslookup nodeport-service' May 26 21:51:03.855: INFO: stderr: "I0526 21:51:03.670726 2324 log.go:172] (0xc000115130) (0xc0006a9a40) Create stream\nI0526 21:51:03.670799 2324 log.go:172] (0xc000115130) (0xc0006a9a40) Stream added, broadcasting: 1\nI0526 21:51:03.674327 2324 log.go:172] (0xc000115130) Reply frame received for 1\nI0526 21:51:03.674371 2324 log.go:172] (0xc000115130) (0xc000a36000) Create stream\nI0526 21:51:03.674387 2324 log.go:172] (0xc000115130) (0xc000a36000) Stream added, broadcasting: 3\nI0526 21:51:03.675379 2324 log.go:172] (0xc000115130) Reply frame received for 3\nI0526 21:51:03.675410 2324 log.go:172] (0xc000115130) (0xc0006a9c20) Create stream\nI0526 21:51:03.675420 2324 log.go:172] (0xc000115130) (0xc0006a9c20) Stream added, broadcasting: 5\nI0526 21:51:03.676345 2324 log.go:172] (0xc000115130) Reply frame received for 5\nI0526 21:51:03.750475 2324 log.go:172] (0xc000115130) Data frame received for 5\nI0526 21:51:03.750511 2324 log.go:172] (0xc0006a9c20) (5) Data frame 
handling\nI0526 21:51:03.750539 2324 log.go:172] (0xc0006a9c20) (5) Data frame sent\n+ nslookup nodeport-service\nI0526 21:51:03.844213 2324 log.go:172] (0xc000115130) Data frame received for 3\nI0526 21:51:03.844258 2324 log.go:172] (0xc000a36000) (3) Data frame handling\nI0526 21:51:03.844295 2324 log.go:172] (0xc000a36000) (3) Data frame sent\nI0526 21:51:03.845785 2324 log.go:172] (0xc000115130) Data frame received for 3\nI0526 21:51:03.845804 2324 log.go:172] (0xc000a36000) (3) Data frame handling\nI0526 21:51:03.845822 2324 log.go:172] (0xc000a36000) (3) Data frame sent\nI0526 21:51:03.846542 2324 log.go:172] (0xc000115130) Data frame received for 3\nI0526 21:51:03.846578 2324 log.go:172] (0xc000a36000) (3) Data frame handling\nI0526 21:51:03.846601 2324 log.go:172] (0xc000115130) Data frame received for 5\nI0526 21:51:03.846615 2324 log.go:172] (0xc0006a9c20) (5) Data frame handling\nI0526 21:51:03.848508 2324 log.go:172] (0xc000115130) Data frame received for 1\nI0526 21:51:03.848533 2324 log.go:172] (0xc0006a9a40) (1) Data frame handling\nI0526 21:51:03.848554 2324 log.go:172] (0xc0006a9a40) (1) Data frame sent\nI0526 21:51:03.848580 2324 log.go:172] (0xc000115130) (0xc0006a9a40) Stream removed, broadcasting: 1\nI0526 21:51:03.848602 2324 log.go:172] (0xc000115130) Go away received\nI0526 21:51:03.849021 2324 log.go:172] (0xc000115130) (0xc0006a9a40) Stream removed, broadcasting: 1\nI0526 21:51:03.849042 2324 log.go:172] (0xc000115130) (0xc000a36000) Stream removed, broadcasting: 3\nI0526 21:51:03.849053 2324 log.go:172] (0xc000115130) (0xc0006a9c20) Stream removed, broadcasting: 5\n" May 26 21:51:03.855: INFO: stdout: "Server:\t\t10.96.0.10\nAddress:\t10.96.0.10#53\n\nnodeport-service.services-2916.svc.cluster.local\tcanonical name = externalsvc.services-2916.svc.cluster.local.\nName:\texternalsvc.services-2916.svc.cluster.local\nAddress: 10.111.51.87\n\n" STEP: deleting ReplicationController externalsvc in namespace services-2916, will wait for the garbage collector to delete the pods May 26 21:51:03.964: INFO: Deleting ReplicationController externalsvc took: 55.624947ms May 26 21:51:04.265: INFO: Terminating ReplicationController externalsvc pods took: 300.420424ms May 26 21:51:19.611: INFO: Cleaning up the NodePort to ExternalName test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:51:19.625: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-2916" for this suite. 
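The nslookup output above shows the defining behavior of type=ExternalName: cluster DNS answers with a CNAME to the configured name instead of a cluster IP. A sketch of the end state the test converts the NodePort service into (same namespace and target as in the log; the checker pod is illustrative):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nodeport-service           # starts life as type=NodePort in the test
  namespace: services-2916
spec:
  type: ExternalName
  externalName: externalsvc.services-2916.svc.cluster.local
EOF
# any pod in the cluster now sees the CNAME chain shown in the log:
kubectl run -it --rm dns-check --image=busybox:1.28 --restart=Never -- \
  nslookup nodeport-service.services-2916
```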
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:27.164 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from NodePort to ExternalName [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from NodePort to ExternalName [Conformance]","total":278,"completed":164,"skipped":2704,"failed":0} SSS ------------------------------ [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:51:19.636: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 26 21:51:19.721: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7d812574-5ab0-4235-ae4d-5f06aaf4890c" in namespace "downward-api-9421" to be "success or failure" May 26 21:51:19.755: INFO: Pod "downwardapi-volume-7d812574-5ab0-4235-ae4d-5f06aaf4890c": Phase="Pending", Reason="", readiness=false. Elapsed: 33.669839ms May 26 21:51:21.759: INFO: Pod "downwardapi-volume-7d812574-5ab0-4235-ae4d-5f06aaf4890c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.038101952s May 26 21:51:23.764: INFO: Pod "downwardapi-volume-7d812574-5ab0-4235-ae4d-5f06aaf4890c": Phase="Pending", Reason="", readiness=false. Elapsed: 4.042718234s May 26 21:51:25.767: INFO: Pod "downwardapi-volume-7d812574-5ab0-4235-ae4d-5f06aaf4890c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.046012372s STEP: Saw pod success May 26 21:51:25.767: INFO: Pod "downwardapi-volume-7d812574-5ab0-4235-ae4d-5f06aaf4890c" satisfied condition "success or failure" May 26 21:51:25.770: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-7d812574-5ab0-4235-ae4d-5f06aaf4890c container client-container: STEP: delete the pod May 26 21:51:25.831: INFO: Waiting for pod downwardapi-volume-7d812574-5ab0-4235-ae4d-5f06aaf4890c to disappear May 26 21:51:26.399: INFO: Pod downwardapi-volume-7d812574-5ab0-4235-ae4d-5f06aaf4890c no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:51:26.399: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-9421" for this suite. 
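The "mode on item file" being verified is the per-item mode field of a downwardAPI volume, which overrides the volume's defaultMode for that one projected file. A minimal sketch of the same idea (name, image, and mode are illustrative):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-mode           # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox
    command: ["sh", "-c", "ls -l /etc/podinfo"]   # permissions should show r--------
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    downwardAPI:
      items:
      - path: podname
        fieldRef:
          fieldPath: metadata.name
        mode: 0400                 # the per-item mode the test asserts on
EOF
```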
• [SLOW TEST:6.771 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":165,"skipped":2707,"failed":0} [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:51:26.407: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename security-context-test STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:39 [It] should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:51:26.879: INFO: Waiting up to 5m0s for pod "alpine-nnp-false-1288c4aa-3972-43e4-ac73-039a9d4f036b" in namespace "security-context-test-1179" to be "success or failure" May 26 21:51:26.926: INFO: Pod "alpine-nnp-false-1288c4aa-3972-43e4-ac73-039a9d4f036b": Phase="Pending", Reason="", readiness=false. Elapsed: 47.522006ms May 26 21:51:28.930: INFO: Pod "alpine-nnp-false-1288c4aa-3972-43e4-ac73-039a9d4f036b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.051456537s May 26 21:51:30.935: INFO: Pod "alpine-nnp-false-1288c4aa-3972-43e4-ac73-039a9d4f036b": Phase="Running", Reason="", readiness=true. Elapsed: 4.055762154s May 26 21:51:32.939: INFO: Pod "alpine-nnp-false-1288c4aa-3972-43e4-ac73-039a9d4f036b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.060309659s May 26 21:51:32.939: INFO: Pod "alpine-nnp-false-1288c4aa-3972-43e4-ac73-039a9d4f036b" satisfied condition "success or failure" [AfterEach] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:51:32.946: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "security-context-test-1179" for this suite. 
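allowPrivilegeEscalation: false maps to the Linux no_new_privs flag, so processes in the container (and anything they exec, setuid binaries included) can never gain more privileges than they started with. A sketch that makes the flag observable, assuming a kernel recent enough to report NoNewPrivs in /proc/self/status (names are illustrative):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: nnp-false                  # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: alpine-nnp-false
    image: alpine
    command: ["sh", "-c", "grep NoNewPrivs /proc/self/status"]
    securityContext:
      allowPrivilegeEscalation: false
EOF
kubectl logs nnp-false             # expect "NoNewPrivs: 1" once the pod completes
```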
• [SLOW TEST:6.553 seconds] [k8s.io] Security Context /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when creating containers with AllowPrivilegeEscalation /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/security_context.go:289 should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Security Context when creating containers with AllowPrivilegeEscalation should not allow privilege escalation when false [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":166,"skipped":2707,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:51:32.960: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1681 [It] should create a job from an image when restart is OnFailure [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 26 21:51:33.001: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-job --restart=OnFailure --generator=job/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-7455' May 26 21:51:33.118: INFO: stderr: "kubectl run --generator=job/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 26 21:51:33.118: INFO: stdout: "job.batch/e2e-test-httpd-job created\n" STEP: verifying the job e2e-test-httpd-job was created [AfterEach] Kubectl run job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1686 May 26 21:51:33.208: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete jobs e2e-test-httpd-job --namespace=kubectl-7455' May 26 21:51:33.370: INFO: stderr: "" May 26 21:51:33.370: INFO: stdout: "job.batch \"e2e-test-httpd-job\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:51:33.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-7455" for this suite. 
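As the stderr warns, the job/v1 generator for kubectl run was already deprecated when this suite ran. The equivalent, non-deprecated way to produce the same job.batch object (assuming kubectl 1.14 or later):

```sh
kubectl create job e2e-test-httpd-job --image=docker.io/library/httpd:2.4.38-alpine
kubectl get job e2e-test-httpd-job      # same verification the test performs
kubectl delete job e2e-test-httpd-job   # same cleanup the test performs
```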
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run job should create a job from an image when restart is OnFailure [Conformance]","total":278,"completed":167,"skipped":2710,"failed":0} SSSSS ------------------------------ [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:51:33.378: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 26 21:51:33.478: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f367c1d6-f1a4-4ede-b25a-e2be90426dae" in namespace "downward-api-6596" to be "success or failure" May 26 21:51:33.494: INFO: Pod "downwardapi-volume-f367c1d6-f1a4-4ede-b25a-e2be90426dae": Phase="Pending", Reason="", readiness=false. Elapsed: 15.978742ms May 26 21:51:35.535: INFO: Pod "downwardapi-volume-f367c1d6-f1a4-4ede-b25a-e2be90426dae": Phase="Pending", Reason="", readiness=false. Elapsed: 2.056689268s May 26 21:51:37.539: INFO: Pod "downwardapi-volume-f367c1d6-f1a4-4ede-b25a-e2be90426dae": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.061012998s STEP: Saw pod success May 26 21:51:37.539: INFO: Pod "downwardapi-volume-f367c1d6-f1a4-4ede-b25a-e2be90426dae" satisfied condition "success or failure" May 26 21:51:37.542: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-f367c1d6-f1a4-4ede-b25a-e2be90426dae container client-container: STEP: delete the pod May 26 21:51:37.847: INFO: Waiting for pod downwardapi-volume-f367c1d6-f1a4-4ede-b25a-e2be90426dae to disappear May 26 21:51:37.857: INFO: Pod downwardapi-volume-f367c1d6-f1a4-4ede-b25a-e2be90426dae no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:51:37.857: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6596" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":168,"skipped":2715,"failed":0} SS ------------------------------ [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:51:37.870: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide pod UID as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 26 21:51:37.976: INFO: Waiting up to 5m0s for pod "downward-api-46cbfe6f-7040-4837-a705-61092811a7c3" in namespace "downward-api-6343" to be "success or failure" May 26 21:51:37.993: INFO: Pod "downward-api-46cbfe6f-7040-4837-a705-61092811a7c3": Phase="Pending", Reason="", readiness=false. Elapsed: 16.883355ms May 26 21:51:40.046: INFO: Pod "downward-api-46cbfe6f-7040-4837-a705-61092811a7c3": Phase="Pending", Reason="", readiness=false. Elapsed: 2.069985102s May 26 21:51:42.051: INFO: Pod "downward-api-46cbfe6f-7040-4837-a705-61092811a7c3": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.074347668s STEP: Saw pod success May 26 21:51:42.051: INFO: Pod "downward-api-46cbfe6f-7040-4837-a705-61092811a7c3" satisfied condition "success or failure" May 26 21:51:42.054: INFO: Trying to get logs from node jerma-worker pod downward-api-46cbfe6f-7040-4837-a705-61092811a7c3 container dapi-container: STEP: delete the pod May 26 21:51:42.071: INFO: Waiting for pod downward-api-46cbfe6f-7040-4837-a705-61092811a7c3 to disappear May 26 21:51:42.075: INFO: Pod downward-api-46cbfe6f-7040-4837-a705-61092811a7c3 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:51:42.075: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6343" for this suite. 
•{"msg":"PASSED [sig-node] Downward API should provide pod UID as env vars [NodeConformance] [Conformance]","total":278,"completed":169,"skipped":2717,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:51:42.083: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 26 21:51:42.161: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3656994c-c1ad-4126-9232-6f55fd875ce2" in namespace "projected-2603" to be "success or failure" May 26 21:51:42.185: INFO: Pod "downwardapi-volume-3656994c-c1ad-4126-9232-6f55fd875ce2": Phase="Pending", Reason="", readiness=false. Elapsed: 23.750383ms May 26 21:51:44.394: INFO: Pod "downwardapi-volume-3656994c-c1ad-4126-9232-6f55fd875ce2": Phase="Pending", Reason="", readiness=false. Elapsed: 2.2327219s May 26 21:51:46.398: INFO: Pod "downwardapi-volume-3656994c-c1ad-4126-9232-6f55fd875ce2": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.2372696s STEP: Saw pod success May 26 21:51:46.398: INFO: Pod "downwardapi-volume-3656994c-c1ad-4126-9232-6f55fd875ce2" satisfied condition "success or failure" May 26 21:51:46.402: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3656994c-c1ad-4126-9232-6f55fd875ce2 container client-container: STEP: delete the pod May 26 21:51:46.448: INFO: Waiting for pod downwardapi-volume-3656994c-c1ad-4126-9232-6f55fd875ce2 to disappear May 26 21:51:46.504: INFO: Pod downwardapi-volume-3656994c-c1ad-4126-9232-6f55fd875ce2 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:51:46.504: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-2603" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu limit [NodeConformance] [Conformance]","total":278,"completed":170,"skipped":2724,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:51:46.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-408a47de-a153-4333-a376-148fbb46b241 STEP: Creating a pod to test consume configMaps May 26 21:51:46.665: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-95adebcc-279c-4635-8c88-73703fb12ec8" in namespace "projected-8637" to be "success or failure" May 26 21:51:46.704: INFO: Pod "pod-projected-configmaps-95adebcc-279c-4635-8c88-73703fb12ec8": Phase="Pending", Reason="", readiness=false. Elapsed: 39.05835ms May 26 21:51:48.758: INFO: Pod "pod-projected-configmaps-95adebcc-279c-4635-8c88-73703fb12ec8": Phase="Pending", Reason="", readiness=false. Elapsed: 2.092999022s May 26 21:51:50.920: INFO: Pod "pod-projected-configmaps-95adebcc-279c-4635-8c88-73703fb12ec8": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.255226006s STEP: Saw pod success May 26 21:51:50.921: INFO: Pod "pod-projected-configmaps-95adebcc-279c-4635-8c88-73703fb12ec8" satisfied condition "success or failure" May 26 21:51:50.924: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-95adebcc-279c-4635-8c88-73703fb12ec8 container projected-configmap-volume-test: STEP: delete the pod May 26 21:51:51.263: INFO: Waiting for pod pod-projected-configmaps-95adebcc-279c-4635-8c88-73703fb12ec8 to disappear May 26 21:51:51.276: INFO: Pod pod-projected-configmaps-95adebcc-279c-4635-8c88-73703fb12ec8 no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:51:51.276: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8637" for this suite. 
•{"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":171,"skipped":2733,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:51:51.283: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 21:51:51.840: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 21:51:53.852: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126711, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126711, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126711, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126711, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 21:51:56.890: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a mutating webhook configuration STEP: Updating a mutating webhook configuration's rules to not include the create operation STEP: Creating a configMap that should not be mutated STEP: Patching a mutating webhook configuration's rules to include the create operation STEP: Creating a configMap that should be mutated [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:51:57.052: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-70" for this suite. STEP: Destroying namespace "webhook-70-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:5.957 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a mutating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a mutating webhook should work [Conformance]","total":278,"completed":172,"skipped":2751,"failed":0} SSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:51:57.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 26 21:52:00.405: INFO: Expected: &{OK} to match Container's Termination Message: OK -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:52:00.439: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-5844" for this suite. 
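The assertion "Expected: &{OK} to match Container's Termination Message: OK" comes from the container writing to its terminationMessagePath before exiting zero; with FallbackToLogsOnError, logs would only be consulted if the container failed with an empty message file. A minimal sketch (names are illustrative):

```sh
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: termination-message        # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: termination-message-container
    image: busybox
    command: ["sh", "-c", "echo -n OK > /dev/termination-log"]
    terminationMessagePath: /dev/termination-log      # the default path
    terminationMessagePolicy: FallbackToLogsOnError
EOF
kubectl get pod termination-message \
  -o jsonpath='{.status.containerStatuses[0].state.terminated.message}'   # "OK"
```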
•{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from file when pod succeeds and TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":173,"skipped":2754,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:52:00.449: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] getting/updating/patching custom resource definition status sub-resource works [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:52:00.796: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:52:01.398: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-5690" for this suite. •{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] Simple CustomResourceDefinition getting/updating/patching custom resource definition status sub-resource works [Conformance]","total":278,"completed":174,"skipped":2826,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:52:01.433: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-map-1d8ce0ec-7869-4880-b21c-070505949fb4 STEP: Creating a pod to test consume secrets May 26 21:52:01.570: INFO: Waiting up to 5m0s for pod "pod-secrets-bac5fc34-6744-4d7c-858d-a89df0da221a" in namespace "secrets-3847" to be "success or failure" May 26 21:52:01.574: INFO: Pod "pod-secrets-bac5fc34-6744-4d7c-858d-a89df0da221a": Phase="Pending", Reason="", readiness=false. Elapsed: 4.117202ms May 26 21:52:03.578: INFO: Pod "pod-secrets-bac5fc34-6744-4d7c-858d-a89df0da221a": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.008834029s May 26 21:52:05.621: INFO: Pod "pod-secrets-bac5fc34-6744-4d7c-858d-a89df0da221a": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.051396589s STEP: Saw pod success May 26 21:52:05.621: INFO: Pod "pod-secrets-bac5fc34-6744-4d7c-858d-a89df0da221a" satisfied condition "success or failure" May 26 21:52:05.623: INFO: Trying to get logs from node jerma-worker pod pod-secrets-bac5fc34-6744-4d7c-858d-a89df0da221a container secret-volume-test: STEP: delete the pod May 26 21:52:05.705: INFO: Waiting for pod pod-secrets-bac5fc34-6744-4d7c-858d-a89df0da221a to disappear May 26 21:52:05.746: INFO: Pod pod-secrets-bac5fc34-6744-4d7c-858d-a89df0da221a no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:52:05.746: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-3847" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":175,"skipped":2845,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:52:05.754: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted STEP: wait for 30 seconds to see if the garbage collector mistakenly deletes the pods STEP: Gathering metrics W0526 21:52:45.896767 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 26 21:52:45.896: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:52:45.896: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-6534" for this suite. 
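"Delete options say so" refers to deleting the rc with orphan propagation, which removes the controller but deliberately leaves its pods running and ownerless; the 30-second wait confirms the garbage collector does not reap them. The kubectl equivalent (rc name and label are illustrative):

```sh
kubectl delete rc simpletest-rc --cascade=orphan   # kubectl >= 1.20; older clients: --cascade=false
kubectl get pods -l name=simpletest                # pods survive, now without an ownerReference
```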
• [SLOW TEST:40.150 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should orphan pods created by rc if delete options say so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should orphan pods created by rc if delete options say so [Conformance]","total":278,"completed":176,"skipped":2882,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:52:45.905: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7775.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@kubernetes.default.svc.cluster.local;check="$$(dig +tcp +noall +answer +search kubernetes.default.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@kubernetes.default.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-7775.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 26 21:52:54.275: INFO: DNS probes using dns-7775/dns-test-cddee996-72e8-4e98-8a81-5614561de9a7 succeeded STEP: deleting the pod [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:52:54.663: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-7775" for this suite. 
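------------------------------
The wheezy and jessie probes above shell out to dig, but the pass condition is simply that the cluster DNS name resolves to at least one A record from inside a pod. An equivalent check in Go using only the standard library (meaningful only when run in-cluster; the /results path convention mirrors the probe output and is illustrative):

    package main

    import (
        "fmt"
        "net"
        "os"
    )

    func main() {
        // Same check as `dig +search kubernetes.default.svc.cluster.local A`:
        // the API server's service name must resolve via cluster DNS.
        addrs, err := net.LookupHost("kubernetes.default.svc.cluster.local")
        if err != nil || len(addrs) == 0 {
            fmt.Fprintln(os.Stderr, "cluster DNS lookup failed:", err)
            os.Exit(1)
        }
        fmt.Println("OK:", addrs) // the e2e probe writes "OK" to /results/... on success
    }
------------------------------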
• [SLOW TEST:8.842 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for the cluster [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for the cluster [Conformance]","total":278,"completed":177,"skipped":2892,"failed":0} SSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:52:54.747: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 21:52:57.226: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 21:52:59.236: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126777, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126777, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126777, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126777, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 21:53:02.329: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a validating webhook configuration STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Updating a validating webhook configuration's rules to not include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Patching a validating webhook configuration's rules to include the create operation STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:53:02.457: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8540" for this suite. STEP: Destroying namespace "webhook-8540-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:7.896 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 patching/updating a validating webhook should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] patching/updating a validating webhook should work [Conformance]","total":278,"completed":178,"skipped":2897,"failed":0} SSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:53:02.644: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Discovering how many secrets are in namespace by default STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Secret STEP: Ensuring resource quota status captures secret creation STEP: Deleting a secret STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:53:19.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-4683" for this suite. • [SLOW TEST:17.165 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a secret. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a secret. 
[Conformance]","total":278,"completed":179,"skipped":2916,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:53:19.810: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-secret-zb2n STEP: Creating a pod to test atomic-volume-subpath May 26 21:53:19.923: INFO: Waiting up to 5m0s for pod "pod-subpath-test-secret-zb2n" in namespace "subpath-2464" to be "success or failure" May 26 21:53:19.927: INFO: Pod "pod-subpath-test-secret-zb2n": Phase="Pending", Reason="", readiness=false. Elapsed: 4.146527ms May 26 21:53:21.931: INFO: Pod "pod-subpath-test-secret-zb2n": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008222911s May 26 21:53:23.935: INFO: Pod "pod-subpath-test-secret-zb2n": Phase="Running", Reason="", readiness=true. Elapsed: 4.012066496s May 26 21:53:25.940: INFO: Pod "pod-subpath-test-secret-zb2n": Phase="Running", Reason="", readiness=true. Elapsed: 6.016560819s May 26 21:53:27.944: INFO: Pod "pod-subpath-test-secret-zb2n": Phase="Running", Reason="", readiness=true. Elapsed: 8.02062901s May 26 21:53:29.948: INFO: Pod "pod-subpath-test-secret-zb2n": Phase="Running", Reason="", readiness=true. Elapsed: 10.024734178s May 26 21:53:31.952: INFO: Pod "pod-subpath-test-secret-zb2n": Phase="Running", Reason="", readiness=true. Elapsed: 12.02863792s May 26 21:53:33.956: INFO: Pod "pod-subpath-test-secret-zb2n": Phase="Running", Reason="", readiness=true. Elapsed: 14.032403812s May 26 21:53:35.960: INFO: Pod "pod-subpath-test-secret-zb2n": Phase="Running", Reason="", readiness=true. Elapsed: 16.036981844s May 26 21:53:37.964: INFO: Pod "pod-subpath-test-secret-zb2n": Phase="Running", Reason="", readiness=true. Elapsed: 18.04104048s May 26 21:53:39.969: INFO: Pod "pod-subpath-test-secret-zb2n": Phase="Running", Reason="", readiness=true. Elapsed: 20.045712677s May 26 21:53:41.972: INFO: Pod "pod-subpath-test-secret-zb2n": Phase="Running", Reason="", readiness=true. Elapsed: 22.049373295s May 26 21:53:43.977: INFO: Pod "pod-subpath-test-secret-zb2n": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.053803655s STEP: Saw pod success May 26 21:53:43.977: INFO: Pod "pod-subpath-test-secret-zb2n" satisfied condition "success or failure" May 26 21:53:43.980: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-secret-zb2n container test-container-subpath-secret-zb2n: STEP: delete the pod May 26 21:53:44.049: INFO: Waiting for pod pod-subpath-test-secret-zb2n to disappear May 26 21:53:44.090: INFO: Pod pod-subpath-test-secret-zb2n no longer exists STEP: Deleting pod pod-subpath-test-secret-zb2n May 26 21:53:44.090: INFO: Deleting pod "pod-subpath-test-secret-zb2n" in namespace "subpath-2464" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:53:44.093: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-2464" for this suite. • [SLOW TEST:24.292 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with secret pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with secret pod [LinuxOnly] [Conformance]","total":278,"completed":180,"skipped":2928,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:53:44.102: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for the rc to be deleted May 26 21:53:51.959: INFO: 0 pods remaining May 26 21:53:51.959: INFO: 0 pods has nil DeletionTimestamp May 26 21:53:51.959: INFO: May 26 21:53:52.497: INFO: 0 pods remaining May 26 21:53:52.497: INFO: 0 pods has nil DeletionTimestamp May 26 21:53:52.497: INFO: STEP: Gathering metrics W0526 21:53:53.336722 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 26 21:53:53.336: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:53:53.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-397" for this suite. • [SLOW TEST:9.241 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should keep the rc around until all its pods are deleted if the deleteOptions says so [Conformance]","total":278,"completed":181,"skipped":2958,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:53:53.344: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 21:53:56.939: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 21:53:59.075: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126836, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126836, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, 
v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126837, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126836, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 21:54:02.129: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the admissionregistration.k8s.io API group in the /apis discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/admissionregistration.k8s.io discovery document STEP: finding the admissionregistration.k8s.io/v1 API group/version in the /apis/admissionregistration.k8s.io discovery document STEP: fetching the /apis/admissionregistration.k8s.io/v1 discovery document STEP: finding mutatingwebhookconfigurations and validatingwebhookconfigurations resources in the /apis/admissionregistration.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:54:02.136: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-5315" for this suite. STEP: Destroying namespace "webhook-5315-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.904 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should include webhook resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should include webhook resources in discovery documents [Conformance]","total":278,"completed":182,"skipped":2985,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:54:02.248: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-9797e936-a30e-4eb7-9db1-46c2263134f2 STEP: Creating a pod to test consume secrets May 26 21:54:02.362: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-469b749f-3a8c-46c5-888c-62866a158d34" in namespace "projected-4780" to be "success or failure" May 26 21:54:02.366: INFO: Pod "pod-projected-secrets-469b749f-3a8c-46c5-888c-62866a158d34": Phase="Pending", Reason="", readiness=false. Elapsed: 3.988281ms May 26 21:54:04.369: INFO: Pod "pod-projected-secrets-469b749f-3a8c-46c5-888c-62866a158d34": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007442047s May 26 21:54:06.419: INFO: Pod "pod-projected-secrets-469b749f-3a8c-46c5-888c-62866a158d34": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.057335414s STEP: Saw pod success May 26 21:54:06.419: INFO: Pod "pod-projected-secrets-469b749f-3a8c-46c5-888c-62866a158d34" satisfied condition "success or failure" May 26 21:54:06.422: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-469b749f-3a8c-46c5-888c-62866a158d34 container projected-secret-volume-test: STEP: delete the pod May 26 21:54:06.488: INFO: Waiting for pod pod-projected-secrets-469b749f-3a8c-46c5-888c-62866a158d34 to disappear May 26 21:54:06.640: INFO: Pod pod-projected-secrets-469b749f-3a8c-46c5-888c-62866a158d34 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:54:06.640: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4780" for this suite. 
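------------------------------
The "with mappings" variant above remaps a secret key to a non-default file path via a projected volume's items list. A sketch of that volume definition using the k8s.io/api types; the secret name and key/path values are illustrative:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // Remap key "data-1" to a new path inside the mount point; without
        // Items, each key would simply appear under its own name.
        vol := corev1.Volume{
            Name: "projected-secret-volume",
            VolumeSource: corev1.VolumeSource{
                Projected: &corev1.ProjectedVolumeSource{
                    Sources: []corev1.VolumeProjection{{
                        Secret: &corev1.SecretProjection{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
                            Items:                []corev1.KeyToPath{{Key: "data-1", Path: "new-path-data-1"}},
                        },
                    }},
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }
------------------------------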
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings [NodeConformance] [Conformance]","total":278,"completed":183,"skipped":2994,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:54:06.649: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5367.svc.cluster.local)" && echo OK > /results/wheezy_hosts@dns-querier-2.dns-test-service-2.dns-5367.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/wheezy_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5367.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do test -n "$$(getent hosts dns-querier-2.dns-test-service-2.dns-5367.svc.cluster.local)" && echo OK > /results/jessie_hosts@dns-querier-2.dns-test-service-2.dns-5367.svc.cluster.local;test -n "$$(getent hosts dns-querier-2)" && echo OK > /results/jessie_hosts@dns-querier-2;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-5367.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 26 21:54:12.806: INFO: DNS probes using dns-5367/dns-test-e7f292ef-302a-4d1e-ac6d-a701f38d394f succeeded STEP: deleting the pod STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:54:12.929: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-5367" for this suite. 
• [SLOW TEST:6.290 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for pods for Hostname [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for pods for Hostname [LinuxOnly] [Conformance]","total":278,"completed":184,"skipped":3010,"failed":0} SSSS ------------------------------ [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:54:12.939: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:55:13.094: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1319" for this suite. • [SLOW TEST:60.176 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container with readiness probe that fails should never be ready and never restart [NodeConformance] [Conformance]","total":278,"completed":185,"skipped":3014,"failed":0} SS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:55:13.116: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Succeeded STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 26 21:55:17.476: INFO: 
Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:55:17.521: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-2768" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] if TerminationMessagePath is set as non-root user and at a non-default path [NodeConformance] [Conformance]","total":278,"completed":186,"skipped":3016,"failed":0} S ------------------------------ [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:55:17.527: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating projection with secret that has name projected-secret-test-map-caf8f2b3-6b2d-463d-bbe9-ab6f5fd3438f STEP: Creating a pod to test consume secrets May 26 21:55:17.625: INFO: Waiting up to 5m0s for pod "pod-projected-secrets-bb214480-f403-4cd8-be8d-3e430d075a31" in namespace "projected-336" to be "success or failure" May 26 21:55:17.906: INFO: Pod "pod-projected-secrets-bb214480-f403-4cd8-be8d-3e430d075a31": Phase="Pending", Reason="", readiness=false. Elapsed: 280.388387ms May 26 21:55:19.910: INFO: Pod "pod-projected-secrets-bb214480-f403-4cd8-be8d-3e430d075a31": Phase="Pending", Reason="", readiness=false. Elapsed: 2.285020123s May 26 21:55:21.915: INFO: Pod "pod-projected-secrets-bb214480-f403-4cd8-be8d-3e430d075a31": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.289666018s STEP: Saw pod success May 26 21:55:21.915: INFO: Pod "pod-projected-secrets-bb214480-f403-4cd8-be8d-3e430d075a31" satisfied condition "success or failure" May 26 21:55:21.918: INFO: Trying to get logs from node jerma-worker2 pod pod-projected-secrets-bb214480-f403-4cd8-be8d-3e430d075a31 container projected-secret-volume-test: STEP: delete the pod May 26 21:55:21.951: INFO: Waiting for pod pod-projected-secrets-bb214480-f403-4cd8-be8d-3e430d075a31 to disappear May 26 21:55:21.960: INFO: Pod pod-projected-secrets-bb214480-f403-4cd8-be8d-3e430d075a31 no longer exists [AfterEach] [sig-storage] Projected secret /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:55:21.960: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-336" for this suite. 
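------------------------------
The "Item Mode set" case differs from the plain mappings case only in the per-item file mode checked on the mounted file. A sketch under the same assumptions as above (illustrative names; 0400 chosen as a typical restrictive mode for the [LinuxOnly] permission check):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        mode := int32(0400)
        src := corev1.SecretProjection{
            LocalObjectReference: corev1.LocalObjectReference{Name: "my-secret"},
            Items: []corev1.KeyToPath{{
                Key:  "data-1",
                Path: "new-path-data-1",
                Mode: &mode, // per-file mode, distinct from the volume-wide defaultMode
            }},
        }
        fmt.Printf("%+v\n", src)
    }
------------------------------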
•{"msg":"PASSED [sig-storage] Projected secret should be consumable from pods in volume with mappings and Item Mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":187,"skipped":3017,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:55:21.968: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 26 21:55:22.434: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3844153f-a152-4fdf-ad4d-91cdba73e649" in namespace "downward-api-7160" to be "success or failure" May 26 21:55:22.440: INFO: Pod "downwardapi-volume-3844153f-a152-4fdf-ad4d-91cdba73e649": Phase="Pending", Reason="", readiness=false. Elapsed: 5.20897ms May 26 21:55:24.444: INFO: Pod "downwardapi-volume-3844153f-a152-4fdf-ad4d-91cdba73e649": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009358735s May 26 21:55:26.448: INFO: Pod "downwardapi-volume-3844153f-a152-4fdf-ad4d-91cdba73e649": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.01359516s STEP: Saw pod success May 26 21:55:26.448: INFO: Pod "downwardapi-volume-3844153f-a152-4fdf-ad4d-91cdba73e649" satisfied condition "success or failure" May 26 21:55:26.451: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-3844153f-a152-4fdf-ad4d-91cdba73e649 container client-container: STEP: delete the pod May 26 21:55:26.489: INFO: Waiting for pod downwardapi-volume-3844153f-a152-4fdf-ad4d-91cdba73e649 to disappear May 26 21:55:26.582: INFO: Pod downwardapi-volume-3844153f-a152-4fdf-ad4d-91cdba73e649 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:55:26.582: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-7160" for this suite. 
•{"msg":"PASSED [sig-storage] Downward API volume should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":188,"skipped":3032,"failed":0} SSSSSS ------------------------------ [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:55:26.633: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name projected-configmap-test-volume-20c81423-214c-4a5d-bbb1-6ffcd4927b2d STEP: Creating a pod to test consume configMaps May 26 21:55:26.735: INFO: Waiting up to 5m0s for pod "pod-projected-configmaps-48c60069-6e84-4c73-93db-853c29da13be" in namespace "projected-8526" to be "success or failure" May 26 21:55:26.739: INFO: Pod "pod-projected-configmaps-48c60069-6e84-4c73-93db-853c29da13be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.00384ms May 26 21:55:29.002: INFO: Pod "pod-projected-configmaps-48c60069-6e84-4c73-93db-853c29da13be": Phase="Pending", Reason="", readiness=false. Elapsed: 2.266860297s May 26 21:55:31.006: INFO: Pod "pod-projected-configmaps-48c60069-6e84-4c73-93db-853c29da13be": Phase="Pending", Reason="", readiness=false. Elapsed: 4.271363686s May 26 21:55:33.010: INFO: Pod "pod-projected-configmaps-48c60069-6e84-4c73-93db-853c29da13be": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.27556333s STEP: Saw pod success May 26 21:55:33.010: INFO: Pod "pod-projected-configmaps-48c60069-6e84-4c73-93db-853c29da13be" satisfied condition "success or failure" May 26 21:55:33.014: INFO: Trying to get logs from node jerma-worker pod pod-projected-configmaps-48c60069-6e84-4c73-93db-853c29da13be container projected-configmap-volume-test: STEP: delete the pod May 26 21:55:33.054: INFO: Waiting for pod pod-projected-configmaps-48c60069-6e84-4c73-93db-853c29da13be to disappear May 26 21:55:33.067: INFO: Pod pod-projected-configmaps-48c60069-6e84-4c73-93db-853c29da13be no longer exists [AfterEach] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:55:33.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8526" for this suite. 
• [SLOW TEST:6.441 seconds] [sig-storage] Projected configMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_configmap.go:34 should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected configMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":189,"skipped":3038,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:55:33.075: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 21:55:33.642: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 21:55:35.653: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126933, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126933, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126933, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726126933, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 21:55:38.674: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:55:38.678: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the mutating webhook for custom resource e2e-test-webhook-8278-crds.webhook.example.com via the AdmissionRegistration API STEP: Creating a custom resource while v1 is storage version STEP: Patching Custom Resource Definition to set v2 as storage STEP: Patching the custom resource while v2 is storage version [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] 
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:55:39.939: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9252" for this suite. STEP: Destroying namespace "webhook-9252-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.948 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should mutate custom resource with different stored version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should mutate custom resource with different stored version [Conformance]","total":278,"completed":190,"skipped":3087,"failed":0} SS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:55:40.023: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-384cdc6f-6c84-4b88-bec7-5af53993ac18 STEP: Creating a pod to test consume configMaps May 26 21:55:40.123: INFO: Waiting up to 5m0s for pod "pod-configmaps-c8ba2a52-7521-4fae-ab0d-60011d3e4501" in namespace "configmap-76" to be "success or failure" May 26 21:55:40.129: INFO: Pod "pod-configmaps-c8ba2a52-7521-4fae-ab0d-60011d3e4501": Phase="Pending", Reason="", readiness=false. Elapsed: 5.821017ms May 26 21:55:42.132: INFO: Pod "pod-configmaps-c8ba2a52-7521-4fae-ab0d-60011d3e4501": Phase="Pending", Reason="", readiness=false. Elapsed: 2.009423748s May 26 21:55:44.136: INFO: Pod "pod-configmaps-c8ba2a52-7521-4fae-ab0d-60011d3e4501": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.013093108s STEP: Saw pod success May 26 21:55:44.136: INFO: Pod "pod-configmaps-c8ba2a52-7521-4fae-ab0d-60011d3e4501" satisfied condition "success or failure" May 26 21:55:44.139: INFO: Trying to get logs from node jerma-worker pod pod-configmaps-c8ba2a52-7521-4fae-ab0d-60011d3e4501 container configmap-volume-test: STEP: delete the pod May 26 21:55:44.194: INFO: Waiting for pod pod-configmaps-c8ba2a52-7521-4fae-ab0d-60011d3e4501 to disappear May 26 21:55:44.265: INFO: Pod pod-configmaps-c8ba2a52-7521-4fae-ab0d-60011d3e4501 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:55:44.265: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-76" for this suite. 
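------------------------------
The ConfigMap volume test follows the same create, mount, read, delete pattern as the secret tests; only the volume source changes. A sketch with an illustrative configMap name and mount path:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        vol := corev1.Volume{
            Name: "configmap-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "my-configmap"},
                },
            },
        }
        // The test container mounts the volume and cats the projected file.
        mount := corev1.VolumeMount{Name: "configmap-volume", MountPath: "/etc/configmap-volume"}
        fmt.Printf("%+v\n%+v\n", vol, mount)
    }
------------------------------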
•{"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":191,"skipped":3089,"failed":0} SSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:55:44.289: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-280062f0-6a56-4c65-8fe9-4c65849ed75c STEP: Creating a pod to test consume secrets May 26 21:55:44.440: INFO: Waiting up to 5m0s for pod "pod-secrets-97e2a3a4-b2fa-448f-b20d-3e011230dcda" in namespace "secrets-777" to be "success or failure" May 26 21:55:44.463: INFO: Pod "pod-secrets-97e2a3a4-b2fa-448f-b20d-3e011230dcda": Phase="Pending", Reason="", readiness=false. Elapsed: 22.405976ms May 26 21:55:46.496: INFO: Pod "pod-secrets-97e2a3a4-b2fa-448f-b20d-3e011230dcda": Phase="Pending", Reason="", readiness=false. Elapsed: 2.055967616s May 26 21:55:48.500: INFO: Pod "pod-secrets-97e2a3a4-b2fa-448f-b20d-3e011230dcda": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.06021815s STEP: Saw pod success May 26 21:55:48.500: INFO: Pod "pod-secrets-97e2a3a4-b2fa-448f-b20d-3e011230dcda" satisfied condition "success or failure" May 26 21:55:48.504: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-97e2a3a4-b2fa-448f-b20d-3e011230dcda container secret-volume-test: STEP: delete the pod May 26 21:55:48.529: INFO: Waiting for pod pod-secrets-97e2a3a4-b2fa-448f-b20d-3e011230dcda to disappear May 26 21:55:48.534: INFO: Pod pod-secrets-97e2a3a4-b2fa-448f-b20d-3e011230dcda no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:55:48.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-777" for this suite. 
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume with defaultMode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":192,"skipped":3092,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:55:48.565: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename namespaces STEP: Waiting for a default service account to be provisioned in namespace [It] should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test namespace STEP: Waiting for a default service account to be provisioned in namespace STEP: Creating a pod in the namespace STEP: Waiting for the pod to have running status STEP: Deleting the namespace STEP: Waiting for the namespace to be removed. STEP: Recreating the namespace STEP: Verifying there are no pods in the namespace [AfterEach] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:56:20.079: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "namespaces-7710" for this suite. STEP: Destroying namespace "nsdeletetest-4997" for this suite. May 26 21:56:20.098: INFO: Namespace nsdeletetest-4997 was already deleted STEP: Destroying namespace "nsdeletetest-764" for this suite. 
• [SLOW TEST:31.538 seconds] [sig-api-machinery] Namespaces [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should ensure that all pods are removed when a namespace is deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Namespaces [Serial] should ensure that all pods are removed when a namespace is deleted [Conformance]","total":278,"completed":193,"skipped":3098,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:56:20.104: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should surface a failure condition on a common issue like exceeded quota [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 21:56:20.148: INFO: Creating quota "condition-test" that allows only two pods to run in the current namespace STEP: Creating rc "condition-test" that asks for more than the allowed pod quota STEP: Checking rc "condition-test" has the desired failure condition set STEP: Scaling down rc "condition-test" to satisfy pod quota May 26 21:56:22.330: INFO: Updating replication controller "condition-test" STEP: Checking rc "condition-test" has no failure condition set [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:56:23.626: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-6296" for this suite. 
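------------------------------
The quota created above caps the namespace at two pods, so an RC asking for more surfaces a ReplicaFailure condition until it is scaled down, which the test then checks is cleared. A sketch of the quota object (k8s.io/api and apimachinery types):

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/apimachinery/pkg/api/resource"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        quota := corev1.ResourceQuota{
            ObjectMeta: metav1.ObjectMeta{Name: "condition-test"},
            Spec: corev1.ResourceQuotaSpec{
                // Only two pods may run in the namespace; a rejected replica
                // shows up on the RC as a condition of type
                // corev1.ReplicationControllerReplicaFailure.
                Hard: corev1.ResourceList{corev1.ResourcePods: resource.MustParse("2")},
            },
        }
        fmt.Printf("%+v\n", quota)
    }
------------------------------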
•{"msg":"PASSED [sig-apps] ReplicationController should surface a failure condition on a common issue like exceeded quota [Conformance]","total":278,"completed":194,"skipped":3112,"failed":0} SSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:56:23.938: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 26 21:56:24.544: INFO: Waiting up to 5m0s for pod "downwardapi-volume-71b592a7-b65b-456f-a76e-9030f029ea15" in namespace "projected-91" to be "success or failure" May 26 21:56:24.732: INFO: Pod "downwardapi-volume-71b592a7-b65b-456f-a76e-9030f029ea15": Phase="Pending", Reason="", readiness=false. Elapsed: 187.707291ms May 26 21:56:26.775: INFO: Pod "downwardapi-volume-71b592a7-b65b-456f-a76e-9030f029ea15": Phase="Pending", Reason="", readiness=false. Elapsed: 2.231041072s May 26 21:56:28.779: INFO: Pod "downwardapi-volume-71b592a7-b65b-456f-a76e-9030f029ea15": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.235005556s STEP: Saw pod success May 26 21:56:28.779: INFO: Pod "downwardapi-volume-71b592a7-b65b-456f-a76e-9030f029ea15" satisfied condition "success or failure" May 26 21:56:28.782: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-71b592a7-b65b-456f-a76e-9030f029ea15 container client-container: STEP: delete the pod May 26 21:56:28.907: INFO: Waiting for pod downwardapi-volume-71b592a7-b65b-456f-a76e-9030f029ea15 to disappear May 26 21:56:28.910: INFO: Pod downwardapi-volume-71b592a7-b65b-456f-a76e-9030f029ea15 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:56:28.910: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-91" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":195,"skipped":3119,"failed":0} S ------------------------------ [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:56:28.917: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc STEP: delete the rc STEP: wait for all pods to be garbage collected STEP: Gathering metrics W0526 21:56:39.086417 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 26 21:56:39.086: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:56:39.086: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-5768" for this suite. 
• [SLOW TEST:10.176 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should delete pods created by rc when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should delete pods created by rc when not orphaning [Conformance]","total":278,"completed":196,"skipped":3120,"failed":0} S ------------------------------ [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:56:39.094: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-probe STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/container_probe.go:51 [It] should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod liveness-34d1410d-4199-4e79-b507-5d89954d5421 in namespace container-probe-1525 May 26 21:56:43.170: INFO: Started pod liveness-34d1410d-4199-4e79-b507-5d89954d5421 in namespace container-probe-1525 STEP: checking the pod's current state and verifying that restartCount is present May 26 21:56:43.173: INFO: Initial restart count of pod liveness-34d1410d-4199-4e79-b507-5d89954d5421 is 0 May 26 21:56:57.211: INFO: Restart count of pod container-probe-1525/liveness-34d1410d-4199-4e79-b507-5d89954d5421 is now 1 (14.038281769s elapsed) May 26 21:57:17.265: INFO: Restart count of pod container-probe-1525/liveness-34d1410d-4199-4e79-b507-5d89954d5421 is now 2 (34.092341578s elapsed) May 26 21:57:37.325: INFO: Restart count of pod container-probe-1525/liveness-34d1410d-4199-4e79-b507-5d89954d5421 is now 3 (54.152492979s elapsed) May 26 21:57:57.383: INFO: Restart count of pod container-probe-1525/liveness-34d1410d-4199-4e79-b507-5d89954d5421 is now 4 (1m14.210946841s elapsed) May 26 21:59:05.782: INFO: Restart count of pod container-probe-1525/liveness-34d1410d-4199-4e79-b507-5d89954d5421 is now 5 (2m22.609900032s elapsed) STEP: deleting the pod [AfterEach] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:59:05.801: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-probe-1525" for this suite. 
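The widening gaps between the restarts above (14s, 34s, 54s, 74s, then 142s) are the kubelet's crash-loop back-off; the test itself only asserts that restartCount never decreases. A pod with the same shape of failing liveness probe (a sketch, not the suite's exact manifest):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: liveness
    image: docker.io/library/busybox:1.29
    command: ["/bin/sh", "-c", "touch /tmp/healthy; sleep 10; rm -f /tmp/healthy; sleep 600"]
    livenessProbe:
      exec:
        command: ["cat", "/tmp/healthy"]
      initialDelaySeconds: 5
      periodSeconds: 5
      failureThreshold: 1
EOF
kubectl get pod liveness-demo -o jsonpath='{.status.containerStatuses[0].restartCount}'

After the first ten seconds the probe can never succeed again, so the restart count climbs monotonically at back-off-governed intervals.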
• [SLOW TEST:146.718 seconds] [k8s.io] Probing container /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should have monotonically increasing restart count [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Probing container should have monotonically increasing restart count [NodeConformance] [Conformance]","total":278,"completed":197,"skipped":3121,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:59:05.812: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-08050410-6d8c-43b9-abd8-8c41dbe2d788 STEP: Creating a pod to test consume secrets May 26 21:59:05.914: INFO: Waiting up to 5m0s for pod "pod-secrets-7acae2d3-b374-4c0e-891c-d33de3423b5e" in namespace "secrets-9542" to be "success or failure" May 26 21:59:05.974: INFO: Pod "pod-secrets-7acae2d3-b374-4c0e-891c-d33de3423b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 59.525417ms May 26 21:59:07.997: INFO: Pod "pod-secrets-7acae2d3-b374-4c0e-891c-d33de3423b5e": Phase="Pending", Reason="", readiness=false. Elapsed: 2.082812314s May 26 21:59:10.002: INFO: Pod "pod-secrets-7acae2d3-b374-4c0e-891c-d33de3423b5e": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.087593675s STEP: Saw pod success May 26 21:59:10.002: INFO: Pod "pod-secrets-7acae2d3-b374-4c0e-891c-d33de3423b5e" satisfied condition "success or failure" May 26 21:59:10.004: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-7acae2d3-b374-4c0e-891c-d33de3423b5e container secret-volume-test: STEP: delete the pod May 26 21:59:10.212: INFO: Waiting for pod pod-secrets-7acae2d3-b374-4c0e-891c-d33de3423b5e to disappear May 26 21:59:10.224: INFO: Pod pod-secrets-7acae2d3-b374-4c0e-891c-d33de3423b5e no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:59:10.224: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-9542" for this suite. 
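The secret-volume test is a create-mount-read round trip: make a Secret, mount it as a volume, and have the container print a key's value. The same flow by hand (names and the key/value are illustrative):

kubectl create secret generic demo-secret --from-literal=data-1=value-1
kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: secret-volume-demo
spec:
  restartPolicy: Never
  containers:
  - name: secret-volume-test
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/secret-volume/data-1"]
    volumeMounts:
    - name: secret-volume
      mountPath: /etc/secret-volume
      readOnly: true
  volumes:
  - name: secret-volume
    secret:
      secretName: demo-secret
EOF
kubectl logs secret-volume-demo     # prints value-1 once the pod has run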
•{"msg":"PASSED [sig-storage] Secrets should be consumable from pods in volume [NodeConformance] [Conformance]","total":278,"completed":198,"skipped":3139,"failed":0} SSSSSS ------------------------------ [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:59:10.231: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to start watching from a specific resource version [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a new configmap STEP: modifying the configmap once STEP: modifying the configmap a second time STEP: deleting the configmap STEP: creating a watch on configmaps from the resource version returned by the first update STEP: Expecting to observe notifications for all changes to the configmap after the first update May 26 21:59:10.401: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9431 /api/v1/namespaces/watch-9431/configmaps/e2e-watch-test-resource-version b359ca21-8084-4716-bb3f-f0b136041910 19390322 0 2020-05-26 21:59:10 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 26 21:59:10.401: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-resource-version watch-9431 /api/v1/namespaces/watch-9431/configmaps/e2e-watch-test-resource-version b359ca21-8084-4716-bb3f-f0b136041910 19390323 0 2020-05-26 21:59:10 +0000 UTC map[watch-this-configmap:from-resource-version] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:59:10.401: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-9431" for this suite. 
•{"msg":"PASSED [sig-api-machinery] Watchers should be able to start watching from a specific resource version [Conformance]","total":278,"completed":199,"skipped":3145,"failed":0} SS ------------------------------ [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:59:10.429: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the rc1 STEP: create the rc2 STEP: set half of pods created by rc simpletest-rc-to-be-deleted to have rc simpletest-rc-to-stay as owner as well STEP: delete the rc simpletest-rc-to-be-deleted STEP: wait for the rc to be deleted STEP: Gathering metrics W0526 21:59:22.542393 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. May 26 21:59:22.542: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:59:22.542: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-3075" for this suite. 
• [SLOW TEST:12.122 seconds] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Garbage collector should not delete dependents that have both valid owner and owner that's waiting for dependents to be deleted [Conformance]","total":278,"completed":200,"skipped":3147,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:59:22.551: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1754 [It] should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 26 21:59:22.612: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --restart=Never --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-899' May 26 21:59:25.623: INFO: stderr: "" May 26 21:59:25.623: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod was created [AfterEach] Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1759 May 26 21:59:25.670: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-899' May 26 21:59:30.143: INFO: stderr: "" May 26 21:59:30.143: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:59:30.143: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-899" for this suite. 
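With --restart=Never, kubectl run of this era selects the run-pod/v1 generator and creates a bare Pod with no managing controller, carrying the restart policy into the spec; that is what the "verifying the pod e2e-test-httpd-pod was created" step confirms. The same commands outside the suite:

kubectl run e2e-test-httpd-pod --restart=Never --image=docker.io/library/httpd:2.4.38-alpine
kubectl get pod e2e-test-httpd-pod -o jsonpath='{.spec.restartPolicy}'   # Never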
• [SLOW TEST:7.605 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run pod /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1750 should create a pod from an image when restart is Never [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run pod should create a pod from an image when restart is Never [Conformance]","total":278,"completed":201,"skipped":3168,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:59:30.156: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename job STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a job STEP: Ensuring active pods == parallelism STEP: Orphaning one of the Job's Pods May 26 21:59:36.816: INFO: Successfully updated pod "adopt-release-99x6w" STEP: Checking that the Job readopts the Pod May 26 21:59:36.816: INFO: Waiting up to 15m0s for pod "adopt-release-99x6w" in namespace "job-6939" to be "adopted" May 26 21:59:36.879: INFO: Pod "adopt-release-99x6w": Phase="Running", Reason="", readiness=true. Elapsed: 63.400264ms May 26 21:59:38.891: INFO: Pod "adopt-release-99x6w": Phase="Running", Reason="", readiness=true. Elapsed: 2.075191758s May 26 21:59:38.891: INFO: Pod "adopt-release-99x6w" satisfied condition "adopted" STEP: Removing the labels from the Job's Pod May 26 21:59:39.400: INFO: Successfully updated pod "adopt-release-99x6w" STEP: Checking that the Job releases the Pod May 26 21:59:39.400: INFO: Waiting up to 15m0s for pod "adopt-release-99x6w" in namespace "job-6939" to be "released" May 26 21:59:39.441: INFO: Pod "adopt-release-99x6w": Phase="Running", Reason="", readiness=true. Elapsed: 40.761071ms May 26 21:59:41.445: INFO: Pod "adopt-release-99x6w": Phase="Running", Reason="", readiness=true. Elapsed: 2.045239485s May 26 21:59:41.445: INFO: Pod "adopt-release-99x6w" satisfied condition "released" [AfterEach] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:59:41.446: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "job-6939" for this suite. 
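Adoption and release turn on whether the pod's labels still match the Job's selector. Orphaning strips the ownerReference but leaves the labels intact, so the Job controller re-adopts the pod; removing the matching labels makes the controller release it instead. A sketch with a placeholder pod name, assuming the job-name and controller-uid labels that v1.17 Jobs stamp on their pods:

kubectl get pod <pod-name> -o jsonpath='{.metadata.ownerReferences[0].kind}/{.metadata.ownerReferences[0].name}'
kubectl label pod <pod-name> job-name- controller-uid-    # drop the matching labels to trigger release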
• [SLOW TEST:11.298 seconds] [sig-apps] Job /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching orphans and release non-matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Job should adopt matching orphans and release non-matching pods [Conformance]","total":278,"completed":202,"skipped":3182,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:59:41.454: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's memory limit [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 26 21:59:41.701: INFO: Waiting up to 5m0s for pod "downwardapi-volume-c61c4ac9-0bd4-4024-ac52-ff394d0b7744" in namespace "projected-8832" to be "success or failure" May 26 21:59:41.720: INFO: Pod "downwardapi-volume-c61c4ac9-0bd4-4024-ac52-ff394d0b7744": Phase="Pending", Reason="", readiness=false. Elapsed: 18.203683ms May 26 21:59:43.724: INFO: Pod "downwardapi-volume-c61c4ac9-0bd4-4024-ac52-ff394d0b7744": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022531836s May 26 21:59:45.728: INFO: Pod "downwardapi-volume-c61c4ac9-0bd4-4024-ac52-ff394d0b7744": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.026265162s STEP: Saw pod success May 26 21:59:45.728: INFO: Pod "downwardapi-volume-c61c4ac9-0bd4-4024-ac52-ff394d0b7744" satisfied condition "success or failure" May 26 21:59:45.730: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-c61c4ac9-0bd4-4024-ac52-ff394d0b7744 container client-container: STEP: delete the pod May 26 21:59:45.748: INFO: Waiting for pod downwardapi-volume-c61c4ac9-0bd4-4024-ac52-ff394d0b7744 to disappear May 26 21:59:45.752: INFO: Pod downwardapi-volume-c61c4ac9-0bd4-4024-ac52-ff394d0b7744 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 21:59:45.752: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-8832" for this suite. 
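The memory-limit variant publishes the container's own resource limit into a file through a projected downwardAPI volume's resourceFieldRef. A minimal equivalent pod (names are illustrative; with the default divisor the value is rendered in bytes):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-limits-demo
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: docker.io/library/busybox:1.29
    command: ["cat", "/etc/podinfo/mem_limit"]
    resources:
      limits:
        memory: 64Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: mem_limit
            resourceFieldRef:
              containerName: client-container
              resource: limits.memory
EOF
kubectl logs downwardapi-limits-demo    # 67108864, i.e. 64Mi in bytes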
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory limit [NodeConformance] [Conformance]","total":278,"completed":203,"skipped":3183,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 21:59:45.760: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pod-network-test STEP: Waiting for a default service account to be provisioned in namespace [It] should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Performing setup for networking test in namespace pod-network-test-5006 STEP: creating a selector STEP: Creating the service pods in kubernetes May 26 21:59:46.965: INFO: Waiting up to 10m0s for all (but 0) nodes to be schedulable STEP: Creating test pods May 26 22:00:07.245: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.220:8080/dial?request=hostname&protocol=http&host=10.244.1.165&port=8080&tries=1'] Namespace:pod-network-test-5006 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 22:00:07.245: INFO: >>> kubeConfig: /root/.kube/config I0526 22:00:07.282544 6 log.go:172] (0xc0019f6a50) (0xc0028aa640) Create stream I0526 22:00:07.282581 6 log.go:172] (0xc0019f6a50) (0xc0028aa640) Stream added, broadcasting: 1 I0526 22:00:07.285390 6 log.go:172] (0xc0019f6a50) Reply frame received for 1 I0526 22:00:07.285424 6 log.go:172] (0xc0019f6a50) (0xc002176000) Create stream I0526 22:00:07.285438 6 log.go:172] (0xc0019f6a50) (0xc002176000) Stream added, broadcasting: 3 I0526 22:00:07.286687 6 log.go:172] (0xc0019f6a50) Reply frame received for 3 I0526 22:00:07.286729 6 log.go:172] (0xc0019f6a50) (0xc0026da000) Create stream I0526 22:00:07.286746 6 log.go:172] (0xc0019f6a50) (0xc0026da000) Stream added, broadcasting: 5 I0526 22:00:07.287909 6 log.go:172] (0xc0019f6a50) Reply frame received for 5 I0526 22:00:07.367911 6 log.go:172] (0xc0019f6a50) Data frame received for 3 I0526 22:00:07.367940 6 log.go:172] (0xc002176000) (3) Data frame handling I0526 22:00:07.367962 6 log.go:172] (0xc002176000) (3) Data frame sent I0526 22:00:07.368850 6 log.go:172] (0xc0019f6a50) Data frame received for 3 I0526 22:00:07.368877 6 log.go:172] (0xc002176000) (3) Data frame handling I0526 22:00:07.368967 6 log.go:172] (0xc0019f6a50) Data frame received for 5 I0526 22:00:07.369022 6 log.go:172] (0xc0026da000) (5) Data frame handling I0526 22:00:07.370910 6 log.go:172] (0xc0019f6a50) Data frame received for 1 I0526 22:00:07.370952 6 log.go:172] (0xc0028aa640) (1) Data frame handling I0526 22:00:07.370983 6 log.go:172] (0xc0028aa640) (1) Data frame sent I0526 22:00:07.371011 6 log.go:172] (0xc0019f6a50) (0xc0028aa640) Stream removed, broadcasting: 1 I0526 22:00:07.371038 6 log.go:172] (0xc0019f6a50) Go away received I0526 22:00:07.371075 6 log.go:172] (0xc0019f6a50) (0xc0028aa640) Stream removed, broadcasting: 1 I0526 22:00:07.371098 6 log.go:172] (0xc0019f6a50) (0xc002176000) 
Stream removed, broadcasting: 3 I0526 22:00:07.371112 6 log.go:172] (0xc0019f6a50) (0xc0026da000) Stream removed, broadcasting: 5 May 26 22:00:07.371: INFO: Waiting for responses: map[] May 26 22:00:07.374: INFO: ExecWithOptions {Command:[/bin/sh -c curl -g -q -s 'http://10.244.2.220:8080/dial?request=hostname&protocol=http&host=10.244.2.219&port=8080&tries=1'] Namespace:pod-network-test-5006 PodName:host-test-container-pod ContainerName:agnhost Stdin: CaptureStdout:true CaptureStderr:true PreserveWhitespace:false} May 26 22:00:07.374: INFO: >>> kubeConfig: /root/.kube/config I0526 22:00:07.407095 6 log.go:172] (0xc001af4840) (0xc0022fdae0) Create stream I0526 22:00:07.407137 6 log.go:172] (0xc001af4840) (0xc0022fdae0) Stream added, broadcasting: 1 I0526 22:00:07.409024 6 log.go:172] (0xc001af4840) Reply frame received for 1 I0526 22:00:07.409070 6 log.go:172] (0xc001af4840) (0xc0022fdb80) Create stream I0526 22:00:07.409084 6 log.go:172] (0xc001af4840) (0xc0022fdb80) Stream added, broadcasting: 3 I0526 22:00:07.410080 6 log.go:172] (0xc001af4840) Reply frame received for 3 I0526 22:00:07.410114 6 log.go:172] (0xc001af4840) (0xc0022fdd60) Create stream I0526 22:00:07.410125 6 log.go:172] (0xc001af4840) (0xc0022fdd60) Stream added, broadcasting: 5 I0526 22:00:07.410911 6 log.go:172] (0xc001af4840) Reply frame received for 5 I0526 22:00:07.534693 6 log.go:172] (0xc001af4840) Data frame received for 3 I0526 22:00:07.534719 6 log.go:172] (0xc0022fdb80) (3) Data frame handling I0526 22:00:07.534731 6 log.go:172] (0xc0022fdb80) (3) Data frame sent I0526 22:00:07.535195 6 log.go:172] (0xc001af4840) Data frame received for 5 I0526 22:00:07.535225 6 log.go:172] (0xc0022fdd60) (5) Data frame handling I0526 22:00:07.535268 6 log.go:172] (0xc001af4840) Data frame received for 3 I0526 22:00:07.535298 6 log.go:172] (0xc0022fdb80) (3) Data frame handling I0526 22:00:07.536520 6 log.go:172] (0xc001af4840) Data frame received for 1 I0526 22:00:07.536557 6 log.go:172] (0xc0022fdae0) (1) Data frame handling I0526 22:00:07.536589 6 log.go:172] (0xc0022fdae0) (1) Data frame sent I0526 22:00:07.536651 6 log.go:172] (0xc001af4840) (0xc0022fdae0) Stream removed, broadcasting: 1 I0526 22:00:07.536783 6 log.go:172] (0xc001af4840) Go away received I0526 22:00:07.536870 6 log.go:172] (0xc001af4840) (0xc0022fdae0) Stream removed, broadcasting: 1 I0526 22:00:07.536909 6 log.go:172] (0xc001af4840) (0xc0022fdb80) Stream removed, broadcasting: 3 I0526 22:00:07.536929 6 log.go:172] (0xc001af4840) (0xc0022fdd60) Stream removed, broadcasting: 5 May 26 22:00:07.536: INFO: Waiting for responses: map[] [AfterEach] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:00:07.537: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pod-network-test-5006" for this suite. 
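Each connectivity check above drives the agnhost netexec /dial endpoint from the host-network test pod: netexec on the probing pod connects to host:port with the requested protocol, asks the peer for its hostname, and reports what came back; the test passes when every endpoint pod answers with its own name. With placeholder IPs, the probe is just:

curl -g -q -s 'http://<test-pod-ip>:8080/dial?request=hostname&protocol=http&host=<target-pod-ip>&port=8080&tries=1'

A successful reply is a small JSON document whose responses list contains the target pod's hostname; the "Waiting for responses: map[]" lines above mean no endpoints were left outstanding.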
• [SLOW TEST:21.783 seconds] [sig-network] Networking /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:26 Granular Checks: Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/networking.go:29 should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Networking Granular Checks: Pods should function for intra-pod communication: http [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":204,"skipped":3213,"failed":0} SSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:00:07.543: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 26 22:00:07.672: INFO: Waiting up to 5m0s for pod "downwardapi-volume-2ae56aed-e303-4f66-bf08-5e4a0f9a2955" in namespace "projected-4850" to be "success or failure" May 26 22:00:07.681: INFO: Pod "downwardapi-volume-2ae56aed-e303-4f66-bf08-5e4a0f9a2955": Phase="Pending", Reason="", readiness=false. Elapsed: 9.302232ms May 26 22:00:09.687: INFO: Pod "downwardapi-volume-2ae56aed-e303-4f66-bf08-5e4a0f9a2955": Phase="Pending", Reason="", readiness=false. Elapsed: 2.014783077s May 26 22:00:11.691: INFO: Pod "downwardapi-volume-2ae56aed-e303-4f66-bf08-5e4a0f9a2955": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.019232113s STEP: Saw pod success May 26 22:00:11.691: INFO: Pod "downwardapi-volume-2ae56aed-e303-4f66-bf08-5e4a0f9a2955" satisfied condition "success or failure" May 26 22:00:11.702: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-2ae56aed-e303-4f66-bf08-5e4a0f9a2955 container client-container: STEP: delete the pod May 26 22:00:11.730: INFO: Waiting for pod downwardapi-volume-2ae56aed-e303-4f66-bf08-5e4a0f9a2955 to disappear May 26 22:00:11.735: INFO: Pod downwardapi-volume-2ae56aed-e303-4f66-bf08-5e4a0f9a2955 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:00:11.735: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-4850" for this suite. 
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide node allocatable (cpu) as default cpu limit if the limit is not set [NodeConformance] [Conformance]","total":278,"completed":205,"skipped":3223,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:00:11.742: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 26 22:00:11.803: INFO: Waiting up to 5m0s for pod "downwardapi-volume-41c97cf9-1891-4a5d-a143-5ef8b0b5cb22" in namespace "downward-api-6440" to be "success or failure" May 26 22:00:11.856: INFO: Pod "downwardapi-volume-41c97cf9-1891-4a5d-a143-5ef8b0b5cb22": Phase="Pending", Reason="", readiness=false. Elapsed: 53.044214ms May 26 22:00:13.964: INFO: Pod "downwardapi-volume-41c97cf9-1891-4a5d-a143-5ef8b0b5cb22": Phase="Pending", Reason="", readiness=false. Elapsed: 2.161481629s May 26 22:00:15.976: INFO: Pod "downwardapi-volume-41c97cf9-1891-4a5d-a143-5ef8b0b5cb22": Phase="Pending", Reason="", readiness=false. Elapsed: 4.173107087s May 26 22:00:17.980: INFO: Pod "downwardapi-volume-41c97cf9-1891-4a5d-a143-5ef8b0b5cb22": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.177246054s STEP: Saw pod success May 26 22:00:17.980: INFO: Pod "downwardapi-volume-41c97cf9-1891-4a5d-a143-5ef8b0b5cb22" satisfied condition "success or failure" May 26 22:00:17.983: INFO: Trying to get logs from node jerma-worker pod downwardapi-volume-41c97cf9-1891-4a5d-a143-5ef8b0b5cb22 container client-container: STEP: delete the pod May 26 22:00:18.192: INFO: Waiting for pod downwardapi-volume-41c97cf9-1891-4a5d-a143-5ef8b0b5cb22 to disappear May 26 22:00:18.383: INFO: Pod downwardapi-volume-41c97cf9-1891-4a5d-a143-5ef8b0b5cb22 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:00:18.383: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-6440" for this suite. 
• [SLOW TEST:6.819 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should provide container's memory request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":206,"skipped":3249,"failed":0} S ------------------------------ [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:00:18.561: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the initial replication controller May 26 22:00:19.055: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-6003' May 26 22:00:19.570: INFO: stderr: "" May 26 22:00:19.570: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 26 22:00:19.570: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6003' May 26 22:00:19.720: INFO: stderr: "" May 26 22:00:19.720: INFO: stdout: "update-demo-nautilus-49q96 update-demo-nautilus-ndhwj " May 26 22:00:19.721: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-49q96 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6003' May 26 22:00:19.824: INFO: stderr: "" May 26 22:00:19.824: INFO: stdout: "" May 26 22:00:19.824: INFO: update-demo-nautilus-49q96 is created but not running May 26 22:00:24.825: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6003' May 26 22:00:24.931: INFO: stderr: "" May 26 22:00:24.931: INFO: stdout: "update-demo-nautilus-49q96 update-demo-nautilus-ndhwj " May 26 22:00:24.931: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-49q96 -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6003' May 26 22:00:25.028: INFO: stderr: "" May 26 22:00:25.028: INFO: stdout: "true" May 26 22:00:25.028: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-49q96 -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6003' May 26 22:00:25.122: INFO: stderr: "" May 26 22:00:25.123: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 26 22:00:25.123: INFO: validating pod update-demo-nautilus-49q96 May 26 22:00:25.127: INFO: got data: { "image": "nautilus.jpg" } May 26 22:00:25.127: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 26 22:00:25.127: INFO: update-demo-nautilus-49q96 is verified up and running May 26 22:00:25.127: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ndhwj -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6003' May 26 22:00:25.227: INFO: stderr: "" May 26 22:00:25.227: INFO: stdout: "true" May 26 22:00:25.227: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ndhwj -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6003' May 26 22:00:25.312: INFO: stderr: "" May 26 22:00:25.312: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 26 22:00:25.312: INFO: validating pod update-demo-nautilus-ndhwj May 26 22:00:25.315: INFO: got data: { "image": "nautilus.jpg" } May 26 22:00:25.316: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 26 22:00:25.316: INFO: update-demo-nautilus-ndhwj is verified up and running STEP: rolling-update to new replication controller May 26 22:00:25.318: INFO: scanned /root for discovery docs: May 26 22:00:25.318: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config rolling-update update-demo-nautilus --update-period=1s -f - --namespace=kubectl-6003' May 26 22:00:48.013: INFO: stderr: "Command \"rolling-update\" is deprecated, use \"rollout\" instead\n" May 26 22:00:48.014: INFO: stdout: "Created update-demo-kitten\nScaling up update-demo-kitten from 0 to 2, scaling down update-demo-nautilus from 2 to 0 (keep 2 pods available, don't exceed 3 pods)\nScaling update-demo-kitten up to 1\nScaling update-demo-nautilus down to 1\nScaling update-demo-kitten up to 2\nScaling update-demo-nautilus down to 0\nUpdate succeeded. Deleting old controller: update-demo-nautilus\nRenaming update-demo-kitten to update-demo-nautilus\nreplicationcontroller/update-demo-nautilus rolling updated\n" STEP: waiting for all containers in name=update-demo pods to come up. May 26 22:00:48.014: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-6003' May 26 22:00:48.118: INFO: stderr: "" May 26 22:00:48.119: INFO: stdout: "update-demo-kitten-h64sm update-demo-kitten-smrjd " May 26 22:00:48.119: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h64sm -o template --template={{if (exists . 
"status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6003' May 26 22:00:48.228: INFO: stderr: "" May 26 22:00:48.228: INFO: stdout: "true" May 26 22:00:48.228: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-h64sm -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6003' May 26 22:00:48.331: INFO: stderr: "" May 26 22:00:48.331: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 26 22:00:48.331: INFO: validating pod update-demo-kitten-h64sm May 26 22:00:48.346: INFO: got data: { "image": "kitten.jpg" } May 26 22:00:48.346: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 26 22:00:48.346: INFO: update-demo-kitten-h64sm is verified up and running May 26 22:00:48.346: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-smrjd -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-6003' May 26 22:00:48.453: INFO: stderr: "" May 26 22:00:48.453: INFO: stdout: "true" May 26 22:00:48.453: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-kitten-smrjd -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-6003' May 26 22:00:48.550: INFO: stderr: "" May 26 22:00:48.550: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/kitten:1.0" May 26 22:00:48.550: INFO: validating pod update-demo-kitten-smrjd May 26 22:00:48.585: INFO: got data: { "image": "kitten.jpg" } May 26 22:00:48.585: INFO: Unmarshalled json jpg/img => {kitten.jpg} , expecting kitten.jpg . May 26 22:00:48.585: INFO: update-demo-kitten-smrjd is verified up and running [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:00:48.585: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-6003" for this suite. 
• [SLOW TEST:30.033 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should do a rolling update of a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should do a rolling update of a replication controller [Conformance]","total":278,"completed":207,"skipped":3250,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:00:48.595: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-531ff42b-a6af-4635-9436-4c7b702ae016 STEP: Creating a pod to test consume secrets May 26 22:00:48.773: INFO: Waiting up to 5m0s for pod "pod-secrets-269880b2-ed79-4ee3-9cd7-65036bf1bbb0" in namespace "secrets-4659" to be "success or failure" May 26 22:00:48.807: INFO: Pod "pod-secrets-269880b2-ed79-4ee3-9cd7-65036bf1bbb0": Phase="Pending", Reason="", readiness=false. Elapsed: 33.53264ms May 26 22:00:51.018: INFO: Pod "pod-secrets-269880b2-ed79-4ee3-9cd7-65036bf1bbb0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.245213319s May 26 22:00:53.023: INFO: Pod "pod-secrets-269880b2-ed79-4ee3-9cd7-65036bf1bbb0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.249612524s STEP: Saw pod success May 26 22:00:53.023: INFO: Pod "pod-secrets-269880b2-ed79-4ee3-9cd7-65036bf1bbb0" satisfied condition "success or failure" May 26 22:00:53.026: INFO: Trying to get logs from node jerma-worker pod pod-secrets-269880b2-ed79-4ee3-9cd7-65036bf1bbb0 container secret-volume-test: STEP: delete the pod May 26 22:00:53.060: INFO: Waiting for pod pod-secrets-269880b2-ed79-4ee3-9cd7-65036bf1bbb0 to disappear May 26 22:00:53.090: INFO: Pod pod-secrets-269880b2-ed79-4ee3-9cd7-65036bf1bbb0 no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:00:53.090: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-4659" for this suite. STEP: Destroying namespace "secret-namespace-5693" for this suite. •{"msg":"PASSED [sig-storage] Secrets should be able to mount in a volume regardless of a different secret existing with same name in different namespace [NodeConformance] [Conformance]","total":278,"completed":208,"skipped":3259,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:00:53.138: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicaSet STEP: Ensuring resource quota status captures replicaset creation STEP: Deleting a ReplicaSet STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:01:04.255: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5224" for this suite. • [SLOW TEST:11.126 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replica set. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replica set. [Conformance]","total":278,"completed":209,"skipped":3274,"failed":0} SSSSSSS ------------------------------ [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:01:04.265: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename init-container STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/init_container.go:153 [It] should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod May 26 22:01:04.339: INFO: PodSpec: initContainers in spec.initContainers May 26 22:01:58.462: INFO: init container has failed twice: &v1.Pod{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"pod-init-000cb83e-7de4-4939-8011-a1e9c93d3c5f", GenerateName:"", Namespace:"init-container-9787", SelfLink:"/api/v1/namespaces/init-container-9787/pods/pod-init-000cb83e-7de4-4939-8011-a1e9c93d3c5f", UID:"db2ce614-ac09-44bb-af48-7b4b14969486", ResourceVersion:"19391447", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63726127264, loc:(*time.Location)(0x78ee0c0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"name":"foo", "time":"339000763"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"default-token-zvrhl", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(0xc001aa0680), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container{v1.Container{Name:"init1", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/false"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zvrhl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}, v1.Container{Name:"init2", Image:"docker.io/library/busybox:1.29", Command:[]string{"/bin/true"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zvrhl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, Containers:[]v1.Container{v1.Container{Name:"run1", Image:"k8s.gcr.io/pause:3.1", Command:[]string(nil), Args:[]string(nil), 
WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(nil), Env:[]v1.EnvVar(nil), Resources:v1.ResourceRequirements{Limits:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}, Requests:v1.ResourceList{"cpu":resource.Quantity{i:resource.int64Amount{value:100, scale:-3}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"100m", Format:"DecimalSI"}}}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"default-token-zvrhl", ReadOnly:true, MountPath:"/var/run/secrets/kubernetes.io/serviceaccount", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContext)(nil), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc005556838), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string(nil), ServiceAccountName:"default", DeprecatedServiceAccount:"default", AutomountServiceAccountToken:(*bool)(nil), NodeName:"jerma-worker", HostNetwork:false, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc00220db60), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"node.kubernetes.io/not-ready", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0055568c0)}, v1.Toleration{Key:"node.kubernetes.io/unreachable", Operator:"Exists", Value:"", Effect:"NoExecute", TolerationSeconds:(*int64)(0xc0055568e0)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"", Priority:(*int32)(0xc0055568e8), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(0xc0055568ec), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}, Status:v1.PodStatus{Phase:"Pending", Conditions:[]v1.PodCondition{v1.PodCondition{Type:"Initialized", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127264, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotInitialized", Message:"containers with incomplete status: [init1 init2]"}, v1.PodCondition{Type:"Ready", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127264, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"ContainersReady", Status:"False", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127264, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ContainersNotReady", Message:"containers with unready status: [run1]"}, v1.PodCondition{Type:"PodScheduled", Status:"True", LastProbeTime:v1.Time{Time:time.Time{wall:0x0, ext:0, 
loc:(*time.Location)(nil)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127264, loc:(*time.Location)(0x78ee0c0)}}, Reason:"", Message:""}}, Message:"", Reason:"", NominatedNodeName:"", HostIP:"172.17.0.10", PodIP:"10.244.1.171", PodIPs:[]v1.PodIP{v1.PodIP{IP:"10.244.1.171"}}, StartTime:(*v1.Time)(0xc001e7d3e0), InitContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"init1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002743dc0)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(0xc002743e30)}, Ready:false, RestartCount:3, Image:"docker.io/library/busybox:1.29", ImageID:"docker.io/library/busybox@sha256:8ccbac733d19c0dd4d70b4f0c1e12245b5fa3ad24758a11035ee505c629c0796", ContainerID:"containerd://d6aa6ce29be26c9fc37203a37503670bba243616116ba7b140e4abb78e548681", Started:(*bool)(nil)}, v1.ContainerStatus{Name:"init2", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001e7d420), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"docker.io/library/busybox:1.29", ImageID:"", ContainerID:"", Started:(*bool)(nil)}}, ContainerStatuses:[]v1.ContainerStatus{v1.ContainerStatus{Name:"run1", State:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(0xc001e7d400), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, LastTerminationState:v1.ContainerState{Waiting:(*v1.ContainerStateWaiting)(nil), Running:(*v1.ContainerStateRunning)(nil), Terminated:(*v1.ContainerStateTerminated)(nil)}, Ready:false, RestartCount:0, Image:"k8s.gcr.io/pause:3.1", ImageID:"", ContainerID:"", Started:(*bool)(0xc00555696f)}}, QOSClass:"Burstable", EphemeralContainerStatuses:[]v1.ContainerStatus(nil)}} [AfterEach] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:01:58.463: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "init-container-9787" for this suite. 
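The pod dump above is the whole assertion in miniature: with restartPolicy Always, init1 (/bin/false) is retried with back-off (RestartCount:3 and climbing), init2 never starts, and the app container run1 stays Waiting, leaving the pod Pending with Reason ContainersNotInitialized. The pod reduces to this (same images and commands as in the dump; the name is illustrative):

kubectl create -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: init-fail-demo
spec:
  restartPolicy: Always
  initContainers:
  - name: init1
    image: docker.io/library/busybox:1.29
    command: ["/bin/false"]
  - name: init2
    image: docker.io/library/busybox:1.29
    command: ["/bin/true"]
  containers:
  - name: run1
    image: k8s.gcr.io/pause:3.1
EOF
kubectl get pod init-fail-demo -o jsonpath='{.status.initContainerStatuses[0].restartCount}'   # keeps increasing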
• [SLOW TEST:54.273 seconds] [k8s.io] InitContainer [NodeConformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should not start app containers if init containers fail on a RestartAlways pod [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] InitContainer [NodeConformance] should not start app containers if init containers fail on a RestartAlways pod [Conformance]","total":278,"completed":210,"skipped":3281,"failed":0} S ------------------------------ [k8s.io] Pods should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:01:58.539: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename pods STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/pods.go:177 [It] should be updated [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: updating the pod May 26 22:02:03.160: INFO: Successfully updated pod "pod-update-8b5cb186-c191-426e-a84f-3b840246328c" STEP: verifying the updated pod is in kubernetes May 26 22:02:03.171: INFO: Pod update OK [AfterEach] [k8s.io] Pods /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:02:03.171: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "pods-2794" for this suite. 
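The in-place update performed by this test is an ordinary patch of mutable pod metadata; a sketch with a hypothetical pod name:

kubectl label pod my-pod time=updated --overwrite           # labels may be changed on a live pod
kubectl get pod my-pod -o jsonpath='{.metadata.labels}'     # verify the update landed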
•{"msg":"PASSED [k8s.io] Pods should be updated [NodeConformance] [Conformance]","total":278,"completed":211,"skipped":3282,"failed":0} SSS ------------------------------ [sig-network] DNS should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:02:03.177: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test headless service STEP: Running these commands on wheezy: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9022.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_udp@dns-test-service.dns-9022.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9022.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@dns-test-service.dns-9022.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9022.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9022.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9022.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_udp@_http._tcp.test-service-2.dns-9022.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9022.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/wheezy_tcp@_http._tcp.test-service-2.dns-9022.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9022.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/wheezy_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 232.96.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.96.232_udp@PTR;check="$$(dig +tcp +noall +answer +search 232.96.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.96.232_tcp@PTR;sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 600`; do check="$$(dig +notcp +noall +answer +search dns-test-service.dns-9022.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_udp@dns-test-service.dns-9022.svc.cluster.local;check="$$(dig +tcp +noall +answer +search dns-test-service.dns-9022.svc.cluster.local A)" && test -n "$$check" && echo OK > /results/jessie_tcp@dns-test-service.dns-9022.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9022.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.dns-test-service.dns-9022.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local;check="$$(dig +notcp +noall +answer +search _http._tcp.test-service-2.dns-9022.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_udp@_http._tcp.test-service-2.dns-9022.svc.cluster.local;check="$$(dig +tcp +noall +answer +search _http._tcp.test-service-2.dns-9022.svc.cluster.local SRV)" && test -n "$$check" && echo OK > /results/jessie_tcp@_http._tcp.test-service-2.dns-9022.svc.cluster.local;podARec=$$(hostname -i| awk -F. '{print $$1"-"$$2"-"$$3"-"$$4".dns-9022.pod.cluster.local"}');check="$$(dig +notcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_udp@PodARecord;check="$$(dig +tcp +noall +answer +search $${podARec} A)" && test -n "$$check" && echo OK > /results/jessie_tcp@PodARecord;check="$$(dig +notcp +noall +answer +search 232.96.97.10.in-addr.arpa. PTR)" && test -n "$$check" && echo OK > /results/10.97.96.232_udp@PTR;check="$$(dig +tcp +noall +answer +search 232.96.97.10.in-addr.arpa. 
PTR)" && test -n "$$check" && echo OK > /results/10.97.96.232_tcp@PTR;sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 26 22:02:09.746: INFO: Unable to read wheezy_udp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:09.750: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:09.753: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:09.755: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:09.790: INFO: Unable to read jessie_udp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:09.794: INFO: Unable to read jessie_tcp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:09.796: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:09.798: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:09.837: INFO: Lookups using dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8 failed for: [wheezy_udp@dns-test-service.dns-9022.svc.cluster.local wheezy_tcp@dns-test-service.dns-9022.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local jessie_udp@dns-test-service.dns-9022.svc.cluster.local jessie_tcp@dns-test-service.dns-9022.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local] May 26 22:02:14.842: INFO: Unable to read wheezy_udp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:14.845: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods 
dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:14.848: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:14.851: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:14.872: INFO: Unable to read jessie_udp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:14.874: INFO: Unable to read jessie_tcp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:14.877: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:14.880: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:14.894: INFO: Lookups using dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8 failed for: [wheezy_udp@dns-test-service.dns-9022.svc.cluster.local wheezy_tcp@dns-test-service.dns-9022.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local jessie_udp@dns-test-service.dns-9022.svc.cluster.local jessie_tcp@dns-test-service.dns-9022.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local] May 26 22:02:19.843: INFO: Unable to read wheezy_udp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:19.846: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:19.849: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:19.852: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:19.872: INFO: Unable to read jessie_udp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the 
server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:19.875: INFO: Unable to read jessie_tcp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:19.878: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:19.881: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:19.899: INFO: Lookups using dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8 failed for: [wheezy_udp@dns-test-service.dns-9022.svc.cluster.local wheezy_tcp@dns-test-service.dns-9022.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local jessie_udp@dns-test-service.dns-9022.svc.cluster.local jessie_tcp@dns-test-service.dns-9022.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local] May 26 22:02:24.843: INFO: Unable to read wheezy_udp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:24.847: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:24.851: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:24.854: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:24.875: INFO: Unable to read jessie_udp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:24.877: INFO: Unable to read jessie_tcp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:24.879: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:24.881: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod 
dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:24.896: INFO: Lookups using dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8 failed for: [wheezy_udp@dns-test-service.dns-9022.svc.cluster.local wheezy_tcp@dns-test-service.dns-9022.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local jessie_udp@dns-test-service.dns-9022.svc.cluster.local jessie_tcp@dns-test-service.dns-9022.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local] May 26 22:02:29.843: INFO: Unable to read wheezy_udp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:29.847: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:29.849: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:29.851: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:29.869: INFO: Unable to read jessie_udp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:29.871: INFO: Unable to read jessie_tcp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:29.873: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:29.876: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:29.891: INFO: Lookups using dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8 failed for: [wheezy_udp@dns-test-service.dns-9022.svc.cluster.local wheezy_tcp@dns-test-service.dns-9022.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local jessie_udp@dns-test-service.dns-9022.svc.cluster.local jessie_tcp@dns-test-service.dns-9022.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local] May 26 
22:02:34.842: INFO: Unable to read wheezy_udp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:34.845: INFO: Unable to read wheezy_tcp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:34.849: INFO: Unable to read wheezy_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:34.852: INFO: Unable to read wheezy_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:34.874: INFO: Unable to read jessie_udp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:34.876: INFO: Unable to read jessie_tcp@dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:34.878: INFO: Unable to read jessie_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:34.881: INFO: Unable to read jessie_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local from pod dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8: the server could not find the requested resource (get pods dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8) May 26 22:02:34.896: INFO: Lookups using dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8 failed for: [wheezy_udp@dns-test-service.dns-9022.svc.cluster.local wheezy_tcp@dns-test-service.dns-9022.svc.cluster.local wheezy_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local wheezy_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local jessie_udp@dns-test-service.dns-9022.svc.cluster.local jessie_tcp@dns-test-service.dns-9022.svc.cluster.local jessie_udp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local jessie_tcp@_http._tcp.dns-test-service.dns-9022.svc.cluster.local] May 26 22:02:39.964: INFO: DNS probes using dns-9022/dns-test-e1da6a26-7571-4f37-baae-eb7d3b7f28d8 succeeded STEP: deleting the pod STEP: deleting the test service STEP: deleting the test headless service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:02:42.485: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-9022" for this suite. 
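Distilled from the probe loops above, each prober repeats lookups of this shape from inside the cluster (names taken from this run; the test namespace is deleted at the end):

dig +notcp +noall +answer +search dns-test-service.dns-9022.svc.cluster.local A                # service A record over UDP
dig +tcp   +noall +answer +search dns-test-service.dns-9022.svc.cluster.local A                # same lookup over TCP
dig +notcp +noall +answer +search _http._tcp.dns-test-service.dns-9022.svc.cluster.local SRV   # SRV record for the named port
dig +notcp +noall +answer +search 232.96.97.10.in-addr.arpa. PTR                               # reverse lookup of ClusterIP 10.97.96.232

Each lookup only returns an answer once the service has ready endpoints, which is why the early polls above fail repeatedly before the probes finally report success.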
• [SLOW TEST:39.357 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for services [Conformance]","total":278,"completed":212,"skipped":3285,"failed":0} SSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:02:42.535: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on tmpfs May 26 22:02:42.831: INFO: Waiting up to 5m0s for pod "pod-0a7c3de8-0b27-4b9d-9cc6-c86e95ccdd81" in namespace "emptydir-9077" to be "success or failure" May 26 22:02:42.840: INFO: Pod "pod-0a7c3de8-0b27-4b9d-9cc6-c86e95ccdd81": Phase="Pending", Reason="", readiness=false. Elapsed: 9.481645ms May 26 22:02:44.846: INFO: Pod "pod-0a7c3de8-0b27-4b9d-9cc6-c86e95ccdd81": Phase="Pending", Reason="", readiness=false. Elapsed: 2.01463785s May 26 22:02:46.849: INFO: Pod "pod-0a7c3de8-0b27-4b9d-9cc6-c86e95ccdd81": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.018335713s STEP: Saw pod success May 26 22:02:46.849: INFO: Pod "pod-0a7c3de8-0b27-4b9d-9cc6-c86e95ccdd81" satisfied condition "success or failure" May 26 22:02:46.852: INFO: Trying to get logs from node jerma-worker pod pod-0a7c3de8-0b27-4b9d-9cc6-c86e95ccdd81 container test-container: STEP: delete the pod May 26 22:02:46.920: INFO: Waiting for pod pod-0a7c3de8-0b27-4b9d-9cc6-c86e95ccdd81 to disappear May 26 22:02:47.050: INFO: Pod pod-0a7c3de8-0b27-4b9d-9cc6-c86e95ccdd81 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:02:47.050: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-9077" for this suite. 
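An emptyDir with medium: Memory is tmpfs-backed. A sketch of a pod that mounts one and prints the mount, with hypothetical names (the conformance test itself uses a mount-test image to verify the 0777 mode rather than this manifest):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo       # hypothetical name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "mount | grep /test-volume && ls -ld /test-volume"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory              # tmpfs-backed; omit medium for node-disk-backed emptyDir
EOF
kubectl logs emptydir-tmpfs-demo  # shows a tmpfs mount on /test-volume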
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (root,0777,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":213,"skipped":3296,"failed":0} SSSS ------------------------------ [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:02:47.060: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 26 22:02:51.666: INFO: Successfully updated pod "labelsupdate6ad020b6-9a96-47d1-8ad2-0d3319424a84" [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:02:53.684: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-565" for this suite. • [SLOW TEST:6.633 seconds] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:35 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Downward API volume should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":214,"skipped":3300,"failed":0} SSS ------------------------------ [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:02:53.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 26 22:02:53.801: INFO: Waiting up to 5m0s for pod "downward-api-23f1f2a8-0baf-4f41-a1de-1ce43b457f98" in namespace "downward-api-3528" to be "success or failure" May 26 22:02:53.817: INFO: Pod "downward-api-23f1f2a8-0baf-4f41-a1de-1ce43b457f98": Phase="Pending", Reason="", readiness=false. Elapsed: 16.374721ms May 26 22:02:55.821: INFO: Pod "downward-api-23f1f2a8-0baf-4f41-a1de-1ce43b457f98": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.019665519s May 26 22:02:57.826: INFO: Pod "downward-api-23f1f2a8-0baf-4f41-a1de-1ce43b457f98": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.024684723s STEP: Saw pod success May 26 22:02:57.826: INFO: Pod "downward-api-23f1f2a8-0baf-4f41-a1de-1ce43b457f98" satisfied condition "success or failure" May 26 22:02:57.829: INFO: Trying to get logs from node jerma-worker2 pod downward-api-23f1f2a8-0baf-4f41-a1de-1ce43b457f98 container dapi-container: STEP: delete the pod May 26 22:02:57.896: INFO: Waiting for pod downward-api-23f1f2a8-0baf-4f41-a1de-1ce43b457f98 to disappear May 26 22:02:58.014: INFO: Pod downward-api-23f1f2a8-0baf-4f41-a1de-1ce43b457f98 no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:02:58.014: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-3528" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide container's limits.cpu/memory and requests.cpu/memory as env vars [NodeConformance] [Conformance]","total":278,"completed":215,"skipped":3303,"failed":0} ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:02:58.041: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-projected-zwsf STEP: Creating a pod to test atomic-volume-subpath May 26 22:02:58.322: INFO: Waiting up to 5m0s for pod "pod-subpath-test-projected-zwsf" in namespace "subpath-821" to be "success or failure" May 26 22:02:58.350: INFO: Pod "pod-subpath-test-projected-zwsf": Phase="Pending", Reason="", readiness=false. Elapsed: 28.446452ms May 26 22:03:00.355: INFO: Pod "pod-subpath-test-projected-zwsf": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032898792s May 26 22:03:02.358: INFO: Pod "pod-subpath-test-projected-zwsf": Phase="Running", Reason="", readiness=true. Elapsed: 4.036282815s May 26 22:03:04.362: INFO: Pod "pod-subpath-test-projected-zwsf": Phase="Running", Reason="", readiness=true. Elapsed: 6.040439089s May 26 22:03:06.367: INFO: Pod "pod-subpath-test-projected-zwsf": Phase="Running", Reason="", readiness=true. Elapsed: 8.044845111s May 26 22:03:08.371: INFO: Pod "pod-subpath-test-projected-zwsf": Phase="Running", Reason="", readiness=true. Elapsed: 10.049139913s May 26 22:03:10.376: INFO: Pod "pod-subpath-test-projected-zwsf": Phase="Running", Reason="", readiness=true. Elapsed: 12.053467088s May 26 22:03:12.380: INFO: Pod "pod-subpath-test-projected-zwsf": Phase="Running", Reason="", readiness=true. Elapsed: 14.057827292s May 26 22:03:14.384: INFO: Pod "pod-subpath-test-projected-zwsf": Phase="Running", Reason="", readiness=true. 
Elapsed: 16.062157665s May 26 22:03:16.389: INFO: Pod "pod-subpath-test-projected-zwsf": Phase="Running", Reason="", readiness=true. Elapsed: 18.066777686s May 26 22:03:18.393: INFO: Pod "pod-subpath-test-projected-zwsf": Phase="Running", Reason="", readiness=true. Elapsed: 20.071130309s May 26 22:03:20.398: INFO: Pod "pod-subpath-test-projected-zwsf": Phase="Running", Reason="", readiness=true. Elapsed: 22.07595955s May 26 22:03:22.402: INFO: Pod "pod-subpath-test-projected-zwsf": Phase="Succeeded", Reason="", readiness=false. Elapsed: 24.080408384s STEP: Saw pod success May 26 22:03:22.403: INFO: Pod "pod-subpath-test-projected-zwsf" satisfied condition "success or failure" May 26 22:03:22.406: INFO: Trying to get logs from node jerma-worker pod pod-subpath-test-projected-zwsf container test-container-subpath-projected-zwsf: STEP: delete the pod May 26 22:03:22.430: INFO: Waiting for pod pod-subpath-test-projected-zwsf to disappear May 26 22:03:22.434: INFO: Pod pod-subpath-test-projected-zwsf no longer exists STEP: Deleting pod pod-subpath-test-projected-zwsf May 26 22:03:22.434: INFO: Deleting pod "pod-subpath-test-projected-zwsf" in namespace "subpath-821" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:03:22.436: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-821" for this suite. • [SLOW TEST:24.401 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with projected pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with projected pod [LinuxOnly] [Conformance]","total":278,"completed":216,"skipped":3303,"failed":0} SSSSSSSSSSSSS ------------------------------ [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:03:22.443: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename sched-pred STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:86 May 26 22:03:22.526: INFO: Waiting up to 1m0s for all (but 0) nodes to be ready May 26 22:03:22.543: INFO: Waiting for terminating namespaces to be deleted... 
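For reference on the subpath test that just passed: a subPath mounts a single entry of a volume rather than the whole volume, and the atomic-writer machinery keeps that entry consistent while the volume contents are updated underneath it. A sketch with hypothetical names (not the test's exact manifest):

kubectl create configmap subpath-demo-cm --from-literal=data=hello   # hypothetical ConfigMap
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: subpath-projected-demo    # hypothetical name
spec:
  restartPolicy: Never
  volumes:
  - name: projected-vol
    projected:
      sources:
      - configMap:
          name: subpath-demo-cm
  containers:
  - name: c
    image: docker.io/library/busybox:1.29
    command: ["sh", "-c", "cat /mnt/file"]
    volumeMounts:
    - name: projected-vol
      mountPath: /mnt/file
      subPath: data               # mount only the "data" key, not the whole volume
EOF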
May 26 22:03:22.546: INFO: Logging pods the kubelet thinks are on node jerma-worker before test May 26 22:03:22.551: INFO: kindnet-c5svj from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 26 22:03:22.551: INFO: Container kindnet-cni ready: true, restart count 2 May 26 22:03:22.551: INFO: kube-proxy-44mlz from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 26 22:03:22.551: INFO: Container kube-proxy ready: true, restart count 0 May 26 22:03:22.551: INFO: Logging pods the kubelet thinks are on node jerma-worker2 before test May 26 22:03:22.556: INFO: kindnet-zk6sq from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 26 22:03:22.556: INFO: Container kindnet-cni ready: true, restart count 2 May 26 22:03:22.556: INFO: kube-bench-hk6h6 from default started at 2020-03-26 15:21:52 +0000 UTC (1 container status recorded) May 26 22:03:22.556: INFO: Container kube-bench ready: false, restart count 0 May 26 22:03:22.556: INFO: kube-proxy-75q42 from kube-system started at 2020-03-15 18:26:33 +0000 UTC (1 container status recorded) May 26 22:03:22.556: INFO: Container kube-proxy ready: true, restart count 0 May 26 22:03:22.556: INFO: kube-hunter-8g6pb from default started at 2020-03-26 15:21:33 +0000 UTC (1 container status recorded) May 26 22:03:22.556: INFO: Container kube-hunter ready: false, restart count 0 [It] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Trying to launch a pod without a label to get a node which can launch it. STEP: Explicitly delete pod here to free the resource it takes. STEP: Trying to apply a random label on the found node. STEP: verifying the node has the label kubernetes.io/e2e-00edc298-3c9c-48a4-b736-77da0012cc53 95 STEP: Trying to create a pod (pod4) with hostPort 54322 and hostIP 0.0.0.0 (empty string here) and expect it to be scheduled STEP: Trying to create another pod (pod5) with hostPort 54322 but hostIP 127.0.0.1 on the node where pod4 resides and expect it not to be scheduled STEP: removing the label kubernetes.io/e2e-00edc298-3c9c-48a4-b736-77da0012cc53 off the node jerma-worker2 STEP: verifying the node doesn't have the label kubernetes.io/e2e-00edc298-3c9c-48a4-b736-77da0012cc53 [AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:08:30.797: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "sched-pred-6751" for this suite.
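The conflict being validated: a hostPort bound with no hostIP (0.0.0.0) claims the port on every address of the node, so a second pod asking for the same port and protocol on 127.0.0.1 cannot be scheduled to the same node. A sketch with hypothetical pod names, pinning both pods to one node via the well-known hostname label:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod4-demo                  # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/hostname: jerma-worker2
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54322              # hostIP omitted = 0.0.0.0, claims the port on all addresses
      protocol: TCP
---
apiVersion: v1
kind: Pod
metadata:
  name: pod5-demo                  # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/hostname: jerma-worker2
  containers:
  - name: c
    image: k8s.gcr.io/pause:3.1
    ports:
    - containerPort: 8080
      hostPort: 54322
      hostIP: 127.0.0.1            # conflicts with pod4's 0.0.0.0 claim; the pod stays Pending
      protocol: TCP
EOF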
[AfterEach] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/predicates.go:77 • [SLOW TEST:308.384 seconds] [sig-scheduling] SchedulerPredicates [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/scheduling/framework.go:40 validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-scheduling] SchedulerPredicates [Serial] validates that there exists conflict between pods with same hostPort and protocol but one using 0.0.0.0 hostIP [Conformance]","total":278,"completed":217,"skipped":3316,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:08:30.827: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support proxy with --port 0 [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: starting the proxy server May 26 22:08:30.880: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy -p 0 --disable-filter' STEP: curling proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:08:30.969: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2135" for this suite. 
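Passing --port=0 asks the proxy to bind an ephemeral port and print the address it chose; a sketch (the port shown below is only an example):

kubectl proxy --port=0 &
# prints e.g. "Starting to serve on 127.0.0.1:37483"
curl http://127.0.0.1:37483/api/   # the API is reachable through the proxy without extra credentials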
•{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support proxy with --port 0 [Conformance]","total":278,"completed":218,"skipped":3319,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:08:30.979: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-runtime STEP: Waiting for a default service account to be provisioned in namespace [It] should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the container STEP: wait for the container to reach Failed STEP: get the container status STEP: the container should be terminated STEP: the termination message should be set May 26 22:08:35.151: INFO: Expected: &{DONE} to match Container's Termination Message: DONE -- STEP: delete the container [AfterEach] [k8s.io] Container Runtime /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:08:35.242: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-runtime-1295" for this suite. •{"msg":"PASSED [k8s.io] Container Runtime blackbox test on terminated container should report termination message [LinuxOnly] from log output if TerminationMessagePolicy FallbackToLogsOnError is set [NodeConformance] [Conformance]","total":278,"completed":219,"skipped":3325,"failed":0} SSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:08:35.249: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replicaset STEP: Waiting for a default service account to be provisioned in namespace [It] should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a Pod with a 'name' label pod-adoption-release is created STEP: When a replicaset with a matching selector is created STEP: Then the orphan pod is adopted STEP: When the matched label of one of its pods change May 26 22:08:42.342: INFO: Pod name pod-adoption-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:08:43.362: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replicaset-4464" for this suite. 
• [SLOW TEST:8.122 seconds] [sig-apps] ReplicaSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should adopt matching pods on creation and release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicaSet should adopt matching pods on creation and release no longer matching pods [Conformance]","total":278,"completed":220,"skipped":3347,"failed":0} SSSSSSSSSSSSS ------------------------------ [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:08:43.372: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename events STEP: Waiting for a default service account to be provisioned in namespace [It] should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating the pod STEP: submitting the pod to kubernetes STEP: verifying the pod is in kubernetes STEP: retrieving the pod May 26 22:08:47.510: INFO: &Pod{ObjectMeta:{send-events-70198e25-6667-4fab-ab66-baa0552579ed events-2351 /api/v1/namespaces/events-2351/pods/send-events-70198e25-6667-4fab-ab66-baa0552579ed 3483fc09-fb49-49d1-a822-0108986c6d2d 19392953 0 2020-05-26 22:08:43 +0000 UTC map[name:foo time:431910853] map[] [] [] 
[]},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-98256,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-98256,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:p,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[serve-hostname],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:,HostPort:0,ContainerPort:80,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-98256,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*30,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker2,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 22:08:43 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 22:08:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 22:08:47 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 22:08:43 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.8,PodIP:10.244.2.228,StartTime:2020-05-26 22:08:43 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:p,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 22:08:46 +0000 
UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://9626004d910b5131674ddd353b378dedf4bb030d3184d879bb7a0caf70dc7dc1,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.2.228,},},EphemeralContainerStatuses:[]ContainerStatus{},},} STEP: checking for scheduler event about the pod May 26 22:08:49.515: INFO: Saw scheduler event for our pod. STEP: checking for kubelet event about the pod May 26 22:08:51.521: INFO: Saw kubelet event for our pod. STEP: deleting the pod [AfterEach] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:08:51.525: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "events-2351" for this suite. • [SLOW TEST:8.177 seconds] [k8s.io] [sig-node] Events /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should be sent by kubelets and the scheduler about pods scheduling and running [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] [sig-node] Events should be sent by kubelets and the scheduler about pods scheduling and running [Conformance]","total":278,"completed":221,"skipped":3360,"failed":0} SSSSSSS ------------------------------ [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:08:51.549: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename gc STEP: Waiting for a default service account to be provisioned in namespace [It] should delete RS created by deployment when not orphaning [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the deployment STEP: Wait for the Deployment to create new ReplicaSet STEP: delete the deployment STEP: wait for all rs to be garbage collected STEP: expected 0 rs, got 1 rs STEP: expected 0 pods, got 2 pods STEP: Gathering metrics W0526 22:08:52.689067 6 metrics_grabber.go:79] Master node is not registered. Grabbing metrics from Scheduler, ControllerManager and ClusterAutoscaler is disabled. 
May 26 22:08:52.689: INFO: For apiserver_request_total: For apiserver_request_latency_seconds: For apiserver_init_events_total: For garbage_collector_attempt_to_delete_queue_latency: For garbage_collector_attempt_to_delete_work_duration: For garbage_collector_attempt_to_orphan_queue_latency: For garbage_collector_attempt_to_orphan_work_duration: For garbage_collector_dirty_processing_latency_microseconds: For garbage_collector_event_processing_latency_microseconds: For garbage_collector_graph_changes_queue_latency: For garbage_collector_graph_changes_work_duration: For garbage_collector_orphan_processing_latency_microseconds: For namespace_queue_latency: For namespace_queue_latency_sum: For namespace_queue_latency_count: For namespace_retries: For namespace_work_duration: For namespace_work_duration_sum: For namespace_work_duration_count: For function_duration_seconds: For errors_total: For evicted_pods_total: [AfterEach] [sig-api-machinery] Garbage collector /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:08:52.689: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "gc-9896" for this suite. •{"msg":"PASSED [sig-api-machinery] Garbage collector should delete RS created by deployment when not orphaning [Conformance]","total":278,"completed":222,"skipped":3367,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:08:52.696: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1790 [It] should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 26 22:08:52.843: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-pod --generator=run-pod/v1 --image=docker.io/library/httpd:2.4.38-alpine --labels=run=e2e-test-httpd-pod --namespace=kubectl-1414' May 26 22:08:52.958: INFO: stderr: "" May 26 22:08:52.958: INFO: stdout: "pod/e2e-test-httpd-pod created\n" STEP: verifying the pod e2e-test-httpd-pod is running STEP: verifying the pod e2e-test-httpd-pod was created May 26 22:08:58.008: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pod e2e-test-httpd-pod --namespace=kubectl-1414 -o json' May 26 22:08:58.111: INFO: stderr: "" May 26 22:08:58.111: INFO: stdout: "{\n \"apiVersion\": \"v1\",\n \"kind\": \"Pod\",\n \"metadata\": {\n \"creationTimestamp\": \"2020-05-26T22:08:52Z\",\n \"labels\": {\n \"run\": \"e2e-test-httpd-pod\"\n },\n \"name\": \"e2e-test-httpd-pod\",\n \"namespace\": \"kubectl-1414\",\n \"resourceVersion\": \"19393038\",\n \"selfLink\": \"/api/v1/namespaces/kubectl-1414/pods/e2e-test-httpd-pod\",\n \"uid\": 
\"9d9d408b-538c-48a3-8396-7dd2848c5c66\"\n },\n \"spec\": {\n \"containers\": [\n {\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imagePullPolicy\": \"IfNotPresent\",\n \"name\": \"e2e-test-httpd-pod\",\n \"resources\": {},\n \"terminationMessagePath\": \"/dev/termination-log\",\n \"terminationMessagePolicy\": \"File\",\n \"volumeMounts\": [\n {\n \"mountPath\": \"/var/run/secrets/kubernetes.io/serviceaccount\",\n \"name\": \"default-token-l52dm\",\n \"readOnly\": true\n }\n ]\n }\n ],\n \"dnsPolicy\": \"ClusterFirst\",\n \"enableServiceLinks\": true,\n \"nodeName\": \"jerma-worker\",\n \"priority\": 0,\n \"restartPolicy\": \"Always\",\n \"schedulerName\": \"default-scheduler\",\n \"securityContext\": {},\n \"serviceAccount\": \"default\",\n \"serviceAccountName\": \"default\",\n \"terminationGracePeriodSeconds\": 30,\n \"tolerations\": [\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/not-ready\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n },\n {\n \"effect\": \"NoExecute\",\n \"key\": \"node.kubernetes.io/unreachable\",\n \"operator\": \"Exists\",\n \"tolerationSeconds\": 300\n }\n ],\n \"volumes\": [\n {\n \"name\": \"default-token-l52dm\",\n \"secret\": {\n \"defaultMode\": 420,\n \"secretName\": \"default-token-l52dm\"\n }\n }\n ]\n },\n \"status\": {\n \"conditions\": [\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-26T22:08:53Z\",\n \"status\": \"True\",\n \"type\": \"Initialized\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-26T22:08:56Z\",\n \"status\": \"True\",\n \"type\": \"Ready\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-26T22:08:56Z\",\n \"status\": \"True\",\n \"type\": \"ContainersReady\"\n },\n {\n \"lastProbeTime\": null,\n \"lastTransitionTime\": \"2020-05-26T22:08:52Z\",\n \"status\": \"True\",\n \"type\": \"PodScheduled\"\n }\n ],\n \"containerStatuses\": [\n {\n \"containerID\": \"containerd://f591be1df9527f81c901bcb04504b89c04d2e7ec9ba4e8ca4b8dd0e6333dec6e\",\n \"image\": \"docker.io/library/httpd:2.4.38-alpine\",\n \"imageID\": \"docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060\",\n \"lastState\": {},\n \"name\": \"e2e-test-httpd-pod\",\n \"ready\": true,\n \"restartCount\": 0,\n \"started\": true,\n \"state\": {\n \"running\": {\n \"startedAt\": \"2020-05-26T22:08:55Z\"\n }\n }\n }\n ],\n \"hostIP\": \"172.17.0.10\",\n \"phase\": \"Running\",\n \"podIP\": \"10.244.1.179\",\n \"podIPs\": [\n {\n \"ip\": \"10.244.1.179\"\n }\n ],\n \"qosClass\": \"BestEffort\",\n \"startTime\": \"2020-05-26T22:08:53Z\"\n }\n}\n" STEP: replace the image in the pod May 26 22:08:58.111: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config replace -f - --namespace=kubectl-1414' May 26 22:08:59.076: INFO: stderr: "" May 26 22:08:59.076: INFO: stdout: "pod/e2e-test-httpd-pod replaced\n" STEP: verifying the pod e2e-test-httpd-pod has the right image docker.io/library/busybox:1.29 [AfterEach] Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1795 May 26 22:08:59.115: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete pods e2e-test-httpd-pod --namespace=kubectl-1414' May 26 22:09:09.268: INFO: stderr: "" May 26 22:09:09.268: INFO: stdout: "pod \"e2e-test-httpd-pod\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 
22:09:09.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1414" for this suite. • [SLOW TEST:16.583 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl replace /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1786 should update a single-container pod's image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl replace should update a single-container pod's image [Conformance]","total":278,"completed":223,"skipped":3395,"failed":0} SSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:09:09.280: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: CRs in the same group but different versions (one multiversion CRD) show up in OpenAPI documentation May 26 22:09:09.327: INFO: >>> kubeConfig: /root/.kube/config STEP: CRs in the same group but different versions (two CRDs) show up in OpenAPI documentation May 26 22:09:19.686: INFO: >>> kubeConfig: /root/.kube/config May 26 22:09:21.596: INFO: >>> kubeConfig: /root/.kube/config [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:09:32.091: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-8565" for this suite. 
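The CustomResourcePublishOpenAPI test above registers custom resources under one group at two versions and verifies that every served version shows up in the cluster's published OpenAPI document. A minimal sketch of the one-multiversion-CRD case follows, using an illustrative group (example.com) and kind (Widget) in place of the randomized names the test generates:

    kubectl apply -f - <<'EOF'
    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: widgets.example.com      # hypothetical; the e2e test randomizes this
    spec:
      group: example.com
      scope: Namespaced
      names:
        plural: widgets
        singular: widget
        kind: Widget
      versions:
      - name: v1
        served: true
        storage: true                # exactly one version is the storage version
        schema:
          openAPIV3Schema: {type: object}
      - name: v2
        served: true
        storage: false
        schema:
          openAPIV3Schema: {type: object}
    EOF
    # Both served versions should then appear in the published spec, as
    # definitions named roughly com.example.v1.Widget and com.example.v2.Widget:
    kubectl get --raw /openapi/v2 | grep -o 'com\.example\.v[12]\.Widget' | sort -u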
• [SLOW TEST:22.818 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for multiple CRDs of same group but different versions [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for multiple CRDs of same group but different versions [Conformance]","total":278,"completed":224,"skipped":3415,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] DNS should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:09:32.098: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename dns STEP: Waiting for a default service account to be provisioned in namespace [It] should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a test externalName service STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2551.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2551.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2551.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2551.svc.cluster.local; sleep 1; done STEP: creating a pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 26 22:09:38.284: INFO: DNS probes using dns-test-7973df9c-ce75-437f-9888-fa8c5c5d8b7d succeeded STEP: deleting the pod STEP: changing the externalName to bar.example.com STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2551.svc.cluster.local CNAME > /results/wheezy_udp@dns-test-service-3.dns-2551.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2551.svc.cluster.local CNAME > /results/jessie_udp@dns-test-service-3.dns-2551.svc.cluster.local; sleep 1; done STEP: creating a second pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 26 22:09:44.436: INFO: File wheezy_udp@dns-test-service-3.dns-2551.svc.cluster.local from pod dns-2551/dns-test-dfe1d7b4-f932-4b8c-aa28-59de072de6b0 contains 'foo.example.com. ' instead of 'bar.example.com.' May 26 22:09:44.440: INFO: File jessie_udp@dns-test-service-3.dns-2551.svc.cluster.local from pod dns-2551/dns-test-dfe1d7b4-f932-4b8c-aa28-59de072de6b0 contains 'foo.example.com. ' instead of 'bar.example.com.' 
May 26 22:09:44.440: INFO: Lookups using dns-2551/dns-test-dfe1d7b4-f932-4b8c-aa28-59de072de6b0 failed for: [wheezy_udp@dns-test-service-3.dns-2551.svc.cluster.local jessie_udp@dns-test-service-3.dns-2551.svc.cluster.local] May 26 22:09:49.445: INFO: File wheezy_udp@dns-test-service-3.dns-2551.svc.cluster.local from pod dns-2551/dns-test-dfe1d7b4-f932-4b8c-aa28-59de072de6b0 contains 'foo.example.com. ' instead of 'bar.example.com.' May 26 22:09:49.450: INFO: File jessie_udp@dns-test-service-3.dns-2551.svc.cluster.local from pod dns-2551/dns-test-dfe1d7b4-f932-4b8c-aa28-59de072de6b0 contains 'foo.example.com. ' instead of 'bar.example.com.' May 26 22:09:49.450: INFO: Lookups using dns-2551/dns-test-dfe1d7b4-f932-4b8c-aa28-59de072de6b0 failed for: [wheezy_udp@dns-test-service-3.dns-2551.svc.cluster.local jessie_udp@dns-test-service-3.dns-2551.svc.cluster.local] May 26 22:09:54.445: INFO: File wheezy_udp@dns-test-service-3.dns-2551.svc.cluster.local from pod dns-2551/dns-test-dfe1d7b4-f932-4b8c-aa28-59de072de6b0 contains 'foo.example.com. ' instead of 'bar.example.com.' May 26 22:09:54.449: INFO: File jessie_udp@dns-test-service-3.dns-2551.svc.cluster.local from pod dns-2551/dns-test-dfe1d7b4-f932-4b8c-aa28-59de072de6b0 contains 'foo.example.com. ' instead of 'bar.example.com.' May 26 22:09:54.449: INFO: Lookups using dns-2551/dns-test-dfe1d7b4-f932-4b8c-aa28-59de072de6b0 failed for: [wheezy_udp@dns-test-service-3.dns-2551.svc.cluster.local jessie_udp@dns-test-service-3.dns-2551.svc.cluster.local] May 26 22:09:59.445: INFO: File wheezy_udp@dns-test-service-3.dns-2551.svc.cluster.local from pod dns-2551/dns-test-dfe1d7b4-f932-4b8c-aa28-59de072de6b0 contains 'foo.example.com. ' instead of 'bar.example.com.' May 26 22:09:59.449: INFO: File jessie_udp@dns-test-service-3.dns-2551.svc.cluster.local from pod dns-2551/dns-test-dfe1d7b4-f932-4b8c-aa28-59de072de6b0 contains 'foo.example.com. ' instead of 'bar.example.com.' May 26 22:09:59.449: INFO: Lookups using dns-2551/dns-test-dfe1d7b4-f932-4b8c-aa28-59de072de6b0 failed for: [wheezy_udp@dns-test-service-3.dns-2551.svc.cluster.local jessie_udp@dns-test-service-3.dns-2551.svc.cluster.local] May 26 22:10:04.445: INFO: File wheezy_udp@dns-test-service-3.dns-2551.svc.cluster.local from pod dns-2551/dns-test-dfe1d7b4-f932-4b8c-aa28-59de072de6b0 contains 'foo.example.com. ' instead of 'bar.example.com.' May 26 22:10:04.449: INFO: File jessie_udp@dns-test-service-3.dns-2551.svc.cluster.local from pod dns-2551/dns-test-dfe1d7b4-f932-4b8c-aa28-59de072de6b0 contains 'foo.example.com. ' instead of 'bar.example.com.' 
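The repeated lookup failures around this point are the test behaving as designed: it has just re-pointed the ExternalName service from foo.example.com to bar.example.com, and the wheezy and jessie prober pods keep re-running dig until the in-cluster CNAME catches up (record caching in the cluster DNS plausibly accounts for the roughly 25-second lag visible in the timestamps). The service under test is equivalent to this sketch, reusing the name and namespace from the log:

    kubectl apply -f - <<'EOF'
    apiVersion: v1
    kind: Service
    metadata:
      name: dns-test-service-3
      namespace: dns-2551
    spec:
      type: ExternalName             # served as a CNAME by cluster DNS; no ClusterIP
      externalName: bar.example.com
    EOF
    # The same probe loop the test injects into its prober pods:
    for i in `seq 1 30`; do
      dig +short dns-test-service-3.dns-2551.svc.cluster.local CNAME
      sleep 1
    done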
May 26 22:10:04.449: INFO: Lookups using dns-2551/dns-test-dfe1d7b4-f932-4b8c-aa28-59de072de6b0 failed for: [wheezy_udp@dns-test-service-3.dns-2551.svc.cluster.local jessie_udp@dns-test-service-3.dns-2551.svc.cluster.local] May 26 22:10:09.447: INFO: DNS probes using dns-test-dfe1d7b4-f932-4b8c-aa28-59de072de6b0 succeeded STEP: deleting the pod STEP: changing the service to type=ClusterIP STEP: Running these commands on wheezy: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2551.svc.cluster.local A > /results/wheezy_udp@dns-test-service-3.dns-2551.svc.cluster.local; sleep 1; done STEP: Running these commands on jessie: for i in `seq 1 30`; do dig +short dns-test-service-3.dns-2551.svc.cluster.local A > /results/jessie_udp@dns-test-service-3.dns-2551.svc.cluster.local; sleep 1; done STEP: creating a third pod to probe DNS STEP: submitting the pod to kubernetes STEP: retrieving the pod STEP: looking for the results for each expected name from probers May 26 22:10:16.073: INFO: DNS probes using dns-test-42231d2e-25aa-456b-84ba-5bb7bc75b56e succeeded STEP: deleting the pod STEP: deleting the test externalName service [AfterEach] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:10:16.166: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "dns-2551" for this suite. • [SLOW TEST:44.093 seconds] [sig-network] DNS /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should provide DNS for ExternalName services [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] DNS should provide DNS for ExternalName services [Conformance]","total":278,"completed":225,"skipped":3433,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:10:16.193: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename watch STEP: Waiting for a default service account to be provisioned in namespace [It] should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a watch on configmaps with label A STEP: creating a watch on configmaps with label B STEP: creating a watch on configmaps with label A or B STEP: creating a configmap with label A and ensuring the correct watchers observe the notification May 26 22:10:16.302: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4723 /api/v1/namespaces/watch-4723/configmaps/e2e-watch-test-configmap-a 854fd3da-e46e-46b6-b8c3-e337124d2078 19393426 0 2020-05-26 22:10:16 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 26 22:10:16.302: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4723 /api/v1/namespaces/watch-4723/configmaps/e2e-watch-test-configmap-a 
854fd3da-e46e-46b6-b8c3-e337124d2078 19393426 0 2020-05-26 22:10:16 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: modifying configmap A and ensuring the correct watchers observe the notification May 26 22:10:26.310: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4723 /api/v1/namespaces/watch-4723/configmaps/e2e-watch-test-configmap-a 854fd3da-e46e-46b6-b8c3-e337124d2078 19393499 0 2020-05-26 22:10:16 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} May 26 22:10:26.311: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4723 /api/v1/namespaces/watch-4723/configmaps/e2e-watch-test-configmap-a 854fd3da-e46e-46b6-b8c3-e337124d2078 19393499 0 2020-05-26 22:10:16 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 1,},BinaryData:map[string][]byte{},} STEP: modifying configmap A again and ensuring the correct watchers observe the notification May 26 22:10:36.319: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4723 /api/v1/namespaces/watch-4723/configmaps/e2e-watch-test-configmap-a 854fd3da-e46e-46b6-b8c3-e337124d2078 19393529 0 2020-05-26 22:10:16 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 26 22:10:36.319: INFO: Got : MODIFIED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4723 /api/v1/namespaces/watch-4723/configmaps/e2e-watch-test-configmap-a 854fd3da-e46e-46b6-b8c3-e337124d2078 19393529 0 2020-05-26 22:10:16 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: deleting configmap A and ensuring the correct watchers observe the notification May 26 22:10:46.327: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4723 /api/v1/namespaces/watch-4723/configmaps/e2e-watch-test-configmap-a 854fd3da-e46e-46b6-b8c3-e337124d2078 19393559 0 2020-05-26 22:10:16 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} May 26 22:10:46.327: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-a watch-4723 /api/v1/namespaces/watch-4723/configmaps/e2e-watch-test-configmap-a 854fd3da-e46e-46b6-b8c3-e337124d2078 19393559 0 2020-05-26 22:10:16 +0000 UTC map[watch-this-configmap:multiple-watchers-A] map[] [] [] []},Data:map[string]string{mutation: 2,},BinaryData:map[string][]byte{},} STEP: creating a configmap with label B and ensuring the correct watchers observe the notification May 26 22:10:56.335: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4723 /api/v1/namespaces/watch-4723/configmaps/e2e-watch-test-configmap-b 2d07cce9-84f1-4440-8fbe-edd55113c787 19393589 0 2020-05-26 22:10:56 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 26 22:10:56.335: INFO: Got : ADDED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4723 /api/v1/namespaces/watch-4723/configmaps/e2e-watch-test-configmap-b 2d07cce9-84f1-4440-8fbe-edd55113c787 19393589 0 2020-05-26 22:10:56 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] 
[]},Data:map[string]string{},BinaryData:map[string][]byte{},} STEP: deleting configmap B and ensuring the correct watchers observe the notification May 26 22:11:06.344: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4723 /api/v1/namespaces/watch-4723/configmaps/e2e-watch-test-configmap-b 2d07cce9-84f1-4440-8fbe-edd55113c787 19393620 0 2020-05-26 22:10:56 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} May 26 22:11:06.345: INFO: Got : DELETED &ConfigMap{ObjectMeta:{e2e-watch-test-configmap-b watch-4723 /api/v1/namespaces/watch-4723/configmaps/e2e-watch-test-configmap-b 2d07cce9-84f1-4440-8fbe-edd55113c787 19393620 0 2020-05-26 22:10:56 +0000 UTC map[watch-this-configmap:multiple-watchers-B] map[] [] [] []},Data:map[string]string{},BinaryData:map[string][]byte{},} [AfterEach] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:11:16.345: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "watch-4723" for this suite. • [SLOW TEST:60.163 seconds] [sig-api-machinery] Watchers /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should observe add, update, and delete watch notifications on configmaps [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] Watchers should observe add, update, and delete watch notifications on configmaps [Conformance]","total":278,"completed":226,"skipped":3487,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:11:16.356: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 22:11:16.464: INFO: Pod name rollover-pod: Found 0 pods out of 1 May 26 22:11:21.467: INFO: Pod name rollover-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 26 22:11:21.467: INFO: Waiting for pods owned by replica set "test-rollover-controller" to become ready May 26 22:11:23.471: INFO: Creating deployment "test-rollover-deployment" May 26 22:11:23.500: INFO: Make sure deployment "test-rollover-deployment" performs scaling operations May 26 22:11:25.507: INFO: Check revision of new replica set for deployment "test-rollover-deployment" May 26 22:11:25.513: INFO: Ensure that both replica sets have 1 created replica May 26 22:11:25.518: INFO: Rollover old replica sets for deployment "test-rollover-deployment" with new image update May 26 22:11:25.523: INFO: Updating deployment test-rollover-deployment May 26 22:11:25.524: INFO: Wait deployment "test-rollover-deployment" to be observed by the deployment controller May 26 22:11:27.830: 
INFO: Wait for revision update of deployment "test-rollover-deployment" to 2 May 26 22:11:27.838: INFO: Make sure deployment "test-rollover-deployment" is complete May 26 22:11:27.844: INFO: all replica sets need to contain the pod-template-hash label May 26 22:11:27.844: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:1, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127885, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 22:11:29.852: INFO: all replica sets need to contain the pod-template-hash label May 26 22:11:29.852: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127889, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 22:11:31.853: INFO: all replica sets need to contain the pod-template-hash label May 26 22:11:31.853: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127889, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 22:11:33.851: INFO: all replica sets need to contain the pod-template-hash label May 26 22:11:33.851: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, 
UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127889, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 22:11:35.852: INFO: all replica sets need to contain the pod-template-hash label May 26 22:11:35.852: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127889, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 22:11:37.854: INFO: all replica sets need to contain the pod-template-hash label May 26 22:11:37.854: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:2, Replicas:2, UpdatedReplicas:1, ReadyReplicas:2, AvailableReplicas:1, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasAvailable", Message:"Deployment has minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127889, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127883, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"test-rollover-deployment-574d6dfbff\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 22:11:39.852: INFO: May 26 22:11:39.852: INFO: Ensure that both old replica sets have no replicas [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 26 22:11:39.860: INFO: Deployment "test-rollover-deployment": &Deployment{ObjectMeta:{test-rollover-deployment deployment-4151 /apis/apps/v1/namespaces/deployment-4151/deployments/test-rollover-deployment 74ccf324-1b30-43d5-82e6-9d102f981483 19393802 2 2020-05-26 22:11:23 +0000 UTC map[name:rollover-pod] map[deployment.kubernetes.io/revision:2] [] [] 
[]},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000351098 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:0,MaxSurge:1,},},MinReadySeconds:10,RevisionHistoryLimit:*10,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:2,Replicas:1,UpdatedReplicas:1,AvailableReplicas:1,UnavailableReplicas:0,Conditions:[]DeploymentCondition{DeploymentCondition{Type:Available,Status:True,Reason:MinimumReplicasAvailable,Message:Deployment has minimum availability.,LastUpdateTime:2020-05-26 22:11:23 +0000 UTC,LastTransitionTime:2020-05-26 22:11:23 +0000 UTC,},DeploymentCondition{Type:Progressing,Status:True,Reason:NewReplicaSetAvailable,Message:ReplicaSet "test-rollover-deployment-574d6dfbff" has successfully progressed.,LastUpdateTime:2020-05-26 22:11:39 +0000 UTC,LastTransitionTime:2020-05-26 22:11:23 +0000 UTC,},},ReadyReplicas:1,CollisionCount:nil,},} May 26 22:11:39.863: INFO: New ReplicaSet "test-rollover-deployment-574d6dfbff" of Deployment "test-rollover-deployment": &ReplicaSet{ObjectMeta:{test-rollover-deployment-574d6dfbff deployment-4151 /apis/apps/v1/namespaces/deployment-4151/replicasets/test-rollover-deployment-574d6dfbff bbbf28b3-a081-4792-91a7-acbdcccc12e7 19393791 2 2020-05-26 22:11:25 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:2] [{apps/v1 Deployment test-rollover-deployment 74ccf324-1b30-43d5-82e6-9d102f981483 0xc000351917 0xc000351918}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: 574d6dfbff,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0003519d8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:2,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 26 
22:11:39.863: INFO: All old ReplicaSets of Deployment "test-rollover-deployment": May 26 22:11:39.863: INFO: &ReplicaSet{ObjectMeta:{test-rollover-controller deployment-4151 /apis/apps/v1/namespaces/deployment-4151/replicasets/test-rollover-controller 01179ddb-1a8b-4224-8936-5d0889af4e24 19393800 2 2020-05-26 22:11:16 +0000 UTC map[name:rollover-pod pod:httpd] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2] [{apps/v1 Deployment test-rollover-deployment 74ccf324-1b30-43d5-82e6-9d102f981483 0xc000351787 0xc000351788}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc000351838 ClusterFirst map[] false false false PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 26 22:11:39.863: INFO: &ReplicaSet{ObjectMeta:{test-rollover-deployment-f6c94f66c deployment-4151 /apis/apps/v1/namespaces/deployment-4151/replicasets/test-rollover-deployment-f6c94f66c a743f4d4-13a1-484c-b97c-20a56e8ac1b4 19393739 2 2020-05-26 22:11:23 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-rollover-deployment 74ccf324-1b30-43d5-82e6-9d102f981483 0xc000351a40 0xc000351a41}] [] []},Spec:ReplicaSetSpec{Replicas:*0,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: rollover-pod,pod-template-hash: f6c94f66c,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:rollover-pod pod-template-hash:f6c94f66c] map[] [] [] []} {[] [] [{redis-slave gcr.io/google_samples/gb-redisslave:nonexistent [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc000351b88 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:10,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:2,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 26 22:11:39.867: INFO: Pod "test-rollover-deployment-574d6dfbff-k62cd" is available: &Pod{ObjectMeta:{test-rollover-deployment-574d6dfbff-k62cd test-rollover-deployment-574d6dfbff- deployment-4151 /api/v1/namespaces/deployment-4151/pods/test-rollover-deployment-574d6dfbff-k62cd 839c2fa7-904d-4e9f-a1c1-ea5f8400fa24 19393759 0 2020-05-26 22:11:25 +0000 UTC map[name:rollover-pod pod-template-hash:574d6dfbff] map[] [{apps/v1 
ReplicaSet test-rollover-deployment-574d6dfbff bbbf28b3-a081-4792-91a7-acbdcccc12e7 0xc002cf8897 0xc002cf8898}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-6jplc,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-6jplc,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-6jplc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 22:11:25 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 22:11:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 22:11:29 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 22:11:25 +0000 
UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.184,StartTime:2020-05-26 22:11:25 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:agnhost,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 22:11:28 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,ImageID:gcr.io/kubernetes-e2e-test-images/agnhost@sha256:daf5332100521b1256d0e3c56d697a238eaec3af48897ed9167cbadd426773b5,ContainerID:containerd://8748fcd98f97e8770c25732e403875871a95a025badd8585c8d68b28314c6119,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.184,},},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:11:39.867: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-4151" for this suite. • [SLOW TEST:23.520 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should support rollover [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should support rollover [Conformance]","total":278,"completed":227,"skipped":3501,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:11:39.877: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0777 on node default medium May 26 22:11:40.264: INFO: Waiting up to 5m0s for pod "pod-d8f3c030-a0fd-4063-9aea-2c634a322a1d" in namespace "emptydir-4252" to be "success or failure" May 26 22:11:40.274: INFO: Pod "pod-d8f3c030-a0fd-4063-9aea-2c634a322a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 9.362485ms May 26 22:11:42.278: INFO: Pod "pod-d8f3c030-a0fd-4063-9aea-2c634a322a1d": Phase="Pending", Reason="", readiness=false. Elapsed: 2.013734625s May 26 22:11:44.282: INFO: Pod "pod-d8f3c030-a0fd-4063-9aea-2c634a322a1d": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.01823207s STEP: Saw pod success May 26 22:11:44.282: INFO: Pod "pod-d8f3c030-a0fd-4063-9aea-2c634a322a1d" satisfied condition "success or failure" May 26 22:11:44.286: INFO: Trying to get logs from node jerma-worker2 pod pod-d8f3c030-a0fd-4063-9aea-2c634a322a1d container test-container: STEP: delete the pod May 26 22:11:44.343: INFO: Waiting for pod pod-d8f3c030-a0fd-4063-9aea-2c634a322a1d to disappear May 26 22:11:44.358: INFO: Pod pod-d8f3c030-a0fd-4063-9aea-2c634a322a1d no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:11:44.358: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-4252" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0777,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":228,"skipped":3510,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:11:44.366: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ConfigMap STEP: Ensuring resource quota status captures configMap creation STEP: Deleting a ConfigMap STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:12:00.562: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-2623" for this suite. • [SLOW TEST:16.206 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a configMap. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a configMap. 
[Conformance]","total":278,"completed":229,"skipped":3543,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:12:00.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename custom-resource-definition STEP: Waiting for a default service account to be provisioned in namespace [It] should include custom resource definition resources in discovery documents [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: fetching the /apis discovery document STEP: finding the apiextensions.k8s.io API group in the /apis discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis discovery document STEP: fetching the /apis/apiextensions.k8s.io discovery document STEP: finding the apiextensions.k8s.io/v1 API group/version in the /apis/apiextensions.k8s.io discovery document STEP: fetching the /apis/apiextensions.k8s.io/v1 discovery document STEP: finding customresourcedefinitions resources in the /apis/apiextensions.k8s.io/v1 discovery document [AfterEach] [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:12:00.635: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "custom-resource-definition-7312" for this suite. 
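The discovery walk in the test above can be reproduced by hand with kubectl get --raw, which fetches the same three documents the test checks, from the group list down to the per-version resource list (the jq filters are illustrative and assume jq is available):

    # /apis lists every API group, and must include apiextensions.k8s.io
    kubectl get --raw /apis | jq '.groups[] | select(.name == "apiextensions.k8s.io")'
    # The group document lists the preferred and available versions
    kubectl get --raw /apis/apiextensions.k8s.io
    # The version document lists served resources, including customresourcedefinitions
    kubectl get --raw /apis/apiextensions.k8s.io/v1 | jq '.resources[].name'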
•{"msg":"PASSED [sig-api-machinery] CustomResourceDefinition resources [Privileged:ClusterAdmin] should include custom resource definition resources in discovery documents [Conformance]","total":278,"completed":230,"skipped":3561,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:12:00.693: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 26 22:12:00.729: INFO: >>> kubeConfig: /root/.kube/config STEP: rename a version STEP: check the new version name is served STEP: check the old version name is removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:12:15.230: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-5616" for this suite. • [SLOW TEST:14.544 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 updates the published spec when one version gets renamed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] updates the published spec when one version gets renamed [Conformance]","total":278,"completed":231,"skipped":3575,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:12:15.238: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 22:12:15.640: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 22:12:17.651: INFO: deployment status: 
v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127935, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127935, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127935, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127935, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 22:12:20.686: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Listing all of the created validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules STEP: Deleting the collection of validation webhooks STEP: Creating a configMap that does not comply to the validation webhook rules [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:12:21.159: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-9358" for this suite. STEP: Destroying namespace "webhook-9358-markers" for this suite. 
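The listing test above creates a set of ValidatingWebhookConfiguration objects, shows that a ConfigMap violating their rules is rejected, deletes the configurations as one collection, and then shows the same ConfigMap is admitted. A rough kubectl equivalent of the list and collection-delete steps (the label selector is illustrative; the log does not show the one the test actually uses):

    # List every validating webhook configuration in the cluster
    kubectl get validatingwebhookconfigurations
    # Delete a labelled set in a single call, as the test does
    kubectl delete validatingwebhookconfigurations -l e2e-list-test=example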
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.003 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 listing validating webhooks should work [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] listing validating webhooks should work [Conformance]","total":278,"completed":232,"skipped":3611,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:12:21.241: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should support --unix-socket=/path [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Starting the proxy May 26 22:12:21.321: INFO: Asynchronously running '/usr/local/bin/kubectl kubectl --kubeconfig=/root/.kube/config proxy --unix-socket=/tmp/kubectl-proxy-unix589753344/test' STEP: retrieving proxy /api/ output [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:12:21.406: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-9667" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Proxy server should support --unix-socket=/path [Conformance]","total":278,"completed":233,"skipped":3635,"failed":0} SSSSSSSSSSS ------------------------------ [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:12:21.427: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [It] should provide host IP as an env var [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward api env vars May 26 22:12:21.490: INFO: Waiting up to 5m0s for pod "downward-api-bed9e38f-cd11-43b4-a8aa-8da18fb5b59b" in namespace "downward-api-2481" to be "success or failure" May 26 22:12:21.518: INFO: Pod "downward-api-bed9e38f-cd11-43b4-a8aa-8da18fb5b59b": Phase="Pending", Reason="", readiness=false. 
Elapsed: 28.385244ms May 26 22:12:23.522: INFO: Pod "downward-api-bed9e38f-cd11-43b4-a8aa-8da18fb5b59b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.032461808s May 26 22:12:25.526: INFO: Pod "downward-api-bed9e38f-cd11-43b4-a8aa-8da18fb5b59b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.036682474s STEP: Saw pod success May 26 22:12:25.526: INFO: Pod "downward-api-bed9e38f-cd11-43b4-a8aa-8da18fb5b59b" satisfied condition "success or failure" May 26 22:12:25.530: INFO: Trying to get logs from node jerma-worker pod downward-api-bed9e38f-cd11-43b4-a8aa-8da18fb5b59b container dapi-container: STEP: delete the pod May 26 22:12:25.567: INFO: Waiting for pod downward-api-bed9e38f-cd11-43b4-a8aa-8da18fb5b59b to disappear May 26 22:12:25.571: INFO: Pod downward-api-bed9e38f-cd11-43b4-a8aa-8da18fb5b59b no longer exists [AfterEach] [sig-node] Downward API /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:12:25.571: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-2481" for this suite. •{"msg":"PASSED [sig-node] Downward API should provide host IP as an env var [NodeConformance] [Conformance]","total":278,"completed":234,"skipped":3646,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:12:25.579: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename downward-api STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/downwardapi_volume.go:40 [It] should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 26 22:12:25.734: INFO: Waiting up to 5m0s for pod "downwardapi-volume-7c2b3795-9e96-4e71-a0f4-8ea6abf4d153" in namespace "downward-api-4520" to be "success or failure" May 26 22:12:25.752: INFO: Pod "downwardapi-volume-7c2b3795-9e96-4e71-a0f4-8ea6abf4d153": Phase="Pending", Reason="", readiness=false. Elapsed: 18.013378ms May 26 22:12:27.756: INFO: Pod "downwardapi-volume-7c2b3795-9e96-4e71-a0f4-8ea6abf4d153": Phase="Pending", Reason="", readiness=false. Elapsed: 2.022629249s May 26 22:12:29.761: INFO: Pod "downwardapi-volume-7c2b3795-9e96-4e71-a0f4-8ea6abf4d153": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.026799743s STEP: Saw pod success May 26 22:12:29.761: INFO: Pod "downwardapi-volume-7c2b3795-9e96-4e71-a0f4-8ea6abf4d153" satisfied condition "success or failure" May 26 22:12:29.764: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-7c2b3795-9e96-4e71-a0f4-8ea6abf4d153 container client-container: STEP: delete the pod May 26 22:12:29.808: INFO: Waiting for pod downwardapi-volume-7c2b3795-9e96-4e71-a0f4-8ea6abf4d153 to disappear May 26 22:12:29.817: INFO: Pod downwardapi-volume-7c2b3795-9e96-4e71-a0f4-8ea6abf4d153 no longer exists [AfterEach] [sig-storage] Downward API volume /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:12:29.817: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "downward-api-4520" for this suite. •{"msg":"PASSED [sig-storage] Downward API volume should set DefaultMode on files [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":235,"skipped":3684,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Deployment deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:12:29.824: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename deployment STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:69 [It] deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 22:12:29.958: INFO: Pod name cleanup-pod: Found 0 pods out of 1 May 26 22:12:34.962: INFO: Pod name cleanup-pod: Found 1 pods out of 1 STEP: ensuring each pod is running May 26 22:12:34.962: INFO: Creating deployment test-cleanup-deployment STEP: Waiting for deployment test-cleanup-deployment history to be cleaned up [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/deployment.go:63 May 26 22:12:34.983: INFO: Deployment "test-cleanup-deployment": &Deployment{ObjectMeta:{test-cleanup-deployment deployment-706 /apis/apps/v1/namespaces/deployment-706/deployments/test-cleanup-deployment 9e1c5553-3d5f-40d3-aba1-e72877ed81a9 19394214 1 2020-05-26 22:12:34 +0000 UTC map[name:cleanup-pod] map[] [] [] []},Spec:DeploymentSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc0047dfea8 ClusterFirst map[] false false false 
&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},Strategy:DeploymentStrategy{Type:RollingUpdate,RollingUpdate:&RollingUpdateDeployment{MaxUnavailable:25%,MaxSurge:25%,},},MinReadySeconds:0,RevisionHistoryLimit:*0,Paused:false,ProgressDeadlineSeconds:*600,},Status:DeploymentStatus{ObservedGeneration:0,Replicas:0,UpdatedReplicas:0,AvailableReplicas:0,UnavailableReplicas:0,Conditions:[]DeploymentCondition{},ReadyReplicas:0,CollisionCount:nil,},} May 26 22:12:35.026: INFO: New ReplicaSet "test-cleanup-deployment-55ffc6b7b6" of Deployment "test-cleanup-deployment": &ReplicaSet{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6 deployment-706 /apis/apps/v1/namespaces/deployment-706/replicasets/test-cleanup-deployment-55ffc6b7b6 43d9a9eb-e715-4191-8812-249c4b4b9139 19394217 1 2020-05-26 22:12:34 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[deployment.kubernetes.io/desired-replicas:1 deployment.kubernetes.io/max-replicas:2 deployment.kubernetes.io/revision:1] [{apps/v1 Deployment test-cleanup-deployment 9e1c5553-3d5f-40d3-aba1-e72877ed81a9 0xc003817f27 0xc003817f28}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod-template-hash: 55ffc6b7b6,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [] [] []} {[] [] [{agnhost gcr.io/kubernetes-e2e-test-images/agnhost:2.8 [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,} false false false}] [] Always 0xc003817f98 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler [] [] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:0,FullyLabeledReplicas:0,ObservedGeneration:0,ReadyReplicas:0,AvailableReplicas:0,Conditions:[]ReplicaSetCondition{},},} May 26 22:12:35.026: INFO: All old ReplicaSets of Deployment "test-cleanup-deployment": May 26 22:12:35.026: INFO: &ReplicaSet{ObjectMeta:{test-cleanup-controller deployment-706 /apis/apps/v1/namespaces/deployment-706/replicasets/test-cleanup-controller aa4c5632-9f6f-41a4-a173-a7752f2e94b4 19394216 1 2020-05-26 22:12:29 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 Deployment test-cleanup-deployment 9e1c5553-3d5f-40d3-aba1-e72877ed81a9 0xc003817e57 0xc003817e58}] [] []},Spec:ReplicaSetSpec{Replicas:*1,Selector:&v1.LabelSelector{MatchLabels:map[string]string{name: cleanup-pod,pod: httpd,},MatchExpressions:[]LabelSelectorRequirement{},},Template:{{ 0 0001-01-01 00:00:00 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [] [] []} {[] [] [{httpd docker.io/library/httpd:2.4.38-alpine [] [] [] [] [] {map[] map[]} [] [] nil nil nil nil /dev/termination-log File IfNotPresent nil false false false}] [] Always 0xc003817eb8 ClusterFirst map[] false false false &PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,} [] nil default-scheduler []
[] nil [] map[] []}},MinReadySeconds:0,},Status:ReplicaSetStatus{Replicas:1,FullyLabeledReplicas:1,ObservedGeneration:1,ReadyReplicas:1,AvailableReplicas:1,Conditions:[]ReplicaSetCondition{},},} May 26 22:12:35.079: INFO: Pod "test-cleanup-controller-fxjkj" is available: &Pod{ObjectMeta:{test-cleanup-controller-fxjkj test-cleanup-controller- deployment-706 /api/v1/namespaces/deployment-706/pods/test-cleanup-controller-fxjkj 329e3ea5-498a-4c93-90c1-b314e65c0288 19394203 0 2020-05-26 22:12:29 +0000 UTC map[name:cleanup-pod pod:httpd] map[] [{apps/v1 ReplicaSet test-cleanup-controller aa4c5632-9f6f-41a4-a173-a7752f2e94b4 0xc004e503d7 0xc004e503d8}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bmwvp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bmwvp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:httpd,Image:docker.io/library/httpd:2.4.38-alpine,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bmwvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Running,Conditions:[]PodCondition{PodCondition{Type:Initialized,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 22:12:30 +0000 UTC,Reason:,Message:,},PodCondition{Type:Ready,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 22:12:32 +0000 
UTC,Reason:,Message:,},PodCondition{Type:ContainersReady,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 22:12:32 +0000 UTC,Reason:,Message:,},PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 22:12:29 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:172.17.0.10,PodIP:10.244.1.186,StartTime:2020-05-26 22:12:30 +0000 UTC,ContainerStatuses:[]ContainerStatus{ContainerStatus{Name:httpd,State:ContainerState{Waiting:nil,Running:&ContainerStateRunning{StartedAt:2020-05-26 22:12:32 +0000 UTC,},Terminated:nil,},LastTerminationState:ContainerState{Waiting:nil,Running:nil,Terminated:nil,},Ready:true,RestartCount:0,Image:docker.io/library/httpd:2.4.38-alpine,ImageID:docker.io/library/httpd@sha256:eb8ccf084cf3e80eece1add239effefd171eb39adbc154d33c14260d905d4060,ContainerID:containerd://fca955c457f18e249e3a6b3bd82ae6d59cc6d97a8289b695809e9cc8502d57e4,Started:*true,},},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{PodIP{IP:10.244.1.186,},},EphemeralContainerStatuses:[]ContainerStatus{},},} May 26 22:12:35.080: INFO: Pod "test-cleanup-deployment-55ffc6b7b6-bvtr4" is not available: &Pod{ObjectMeta:{test-cleanup-deployment-55ffc6b7b6-bvtr4 test-cleanup-deployment-55ffc6b7b6- deployment-706 /api/v1/namespaces/deployment-706/pods/test-cleanup-deployment-55ffc6b7b6-bvtr4 c2721ff8-07e0-42b6-ac37-5a4204ace400 19394223 0 2020-05-26 22:12:35 +0000 UTC map[name:cleanup-pod pod-template-hash:55ffc6b7b6] map[] [{apps/v1 ReplicaSet test-cleanup-deployment-55ffc6b7b6 43d9a9eb-e715-4191-8812-249c4b4b9139 0xc004e50567 0xc004e50568}] [] []},Spec:PodSpec{Volumes:[]Volume{Volume{Name:default-token-bmwvp,VolumeSource:VolumeSource{HostPath:nil,EmptyDir:nil,GCEPersistentDisk:nil,AWSElasticBlockStore:nil,GitRepo:nil,Secret:&SecretVolumeSource{SecretName:default-token-bmwvp,Items:[]KeyToPath{},DefaultMode:*420,Optional:nil,},NFS:nil,ISCSI:nil,Glusterfs:nil,PersistentVolumeClaim:nil,RBD:nil,FlexVolume:nil,Cinder:nil,CephFS:nil,Flocker:nil,DownwardAPI:nil,FC:nil,AzureFile:nil,ConfigMap:nil,VsphereVolume:nil,Quobyte:nil,AzureDisk:nil,PhotonPersistentDisk:nil,PortworxVolume:nil,ScaleIO:nil,Projected:nil,StorageOS:nil,CSI:nil,},},},Containers:[]Container{Container{Name:agnhost,Image:gcr.io/kubernetes-e2e-test-images/agnhost:2.8,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:default-token-bmwvp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,},},RestartPolicy:Always,TerminationGracePeriodSeconds:*0,ActiveDeadlineSeconds:nil,DNSPolicy:ClusterFirst,NodeSelector:map[string]string{},ServiceAccountName:default,DeprecatedServiceAccount:default,NodeName:jerma-worker,HostNetwork:false,HostPID:false,HostIPC:false,SecurityContext:&PodSecurityContext{SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,
SupplementalGroups:[],FSGroup:nil,RunAsGroup:nil,Sysctls:[]Sysctl{},WindowsOptions:nil,},ImagePullSecrets:[]LocalObjectReference{},Hostname:,Subdomain:,Affinity:nil,SchedulerName:default-scheduler,InitContainers:[]Container{},AutomountServiceAccountToken:nil,Tolerations:[]Toleration{Toleration{Key:node.kubernetes.io/not-ready,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},Toleration{Key:node.kubernetes.io/unreachable,Operator:Exists,Value:,Effect:NoExecute,TolerationSeconds:*300,},},HostAliases:[]HostAlias{},PriorityClassName:,Priority:*0,DNSConfig:nil,ShareProcessNamespace:nil,ReadinessGates:[]PodReadinessGate{},RuntimeClassName:nil,EnableServiceLinks:*true,PreemptionPolicy:nil,Overhead:ResourceList{},TopologySpreadConstraints:[]TopologySpreadConstraint{},EphemeralContainers:[]EphemeralContainer{},},Status:PodStatus{Phase:Pending,Conditions:[]PodCondition{PodCondition{Type:PodScheduled,Status:True,LastProbeTime:0001-01-01 00:00:00 +0000 UTC,LastTransitionTime:2020-05-26 22:12:35 +0000 UTC,Reason:,Message:,},},Message:,Reason:,HostIP:,PodIP:,StartTime:,ContainerStatuses:[]ContainerStatus{},QOSClass:BestEffort,InitContainerStatuses:[]ContainerStatus{},NominatedNodeName:,PodIPs:[]PodIP{},EphemeralContainerStatuses:[]ContainerStatus{},},} [AfterEach] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:12:35.080: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "deployment-706" for this suite. • [SLOW TEST:5.334 seconds] [sig-apps] Deployment /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 deployment should delete old replica sets [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Deployment deployment should delete old replica sets [Conformance]","total":278,"completed":236,"skipped":3739,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:12:35.158: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name configmap-test-volume-map-0402ca5f-ba04-4ff3-b823-995ef5e8d733 STEP: Creating a pod to test consume configMaps May 26 22:12:35.290: INFO: Waiting up to 5m0s for pod "pod-configmaps-190d2b0e-45dd-48fc-838d-fabffb3ad6c0" in namespace "configmap-326" to be "success or failure" May 26 22:12:35.294: INFO: Pod "pod-configmaps-190d2b0e-45dd-48fc-838d-fabffb3ad6c0": Phase="Pending", Reason="", readiness=false. Elapsed: 3.959311ms May 26 22:12:37.510: INFO: Pod "pod-configmaps-190d2b0e-45dd-48fc-838d-fabffb3ad6c0": Phase="Pending", Reason="", readiness=false. 
Elapsed: 2.220255822s May 26 22:12:39.515: INFO: Pod "pod-configmaps-190d2b0e-45dd-48fc-838d-fabffb3ad6c0": Phase="Pending", Reason="", readiness=false. Elapsed: 4.224663455s May 26 22:12:41.519: INFO: Pod "pod-configmaps-190d2b0e-45dd-48fc-838d-fabffb3ad6c0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.228635426s STEP: Saw pod success May 26 22:12:41.519: INFO: Pod "pod-configmaps-190d2b0e-45dd-48fc-838d-fabffb3ad6c0" satisfied condition "success or failure" May 26 22:12:41.523: INFO: Trying to get logs from node jerma-worker2 pod pod-configmaps-190d2b0e-45dd-48fc-838d-fabffb3ad6c0 container configmap-volume-test: STEP: delete the pod May 26 22:12:41.654: INFO: Waiting for pod pod-configmaps-190d2b0e-45dd-48fc-838d-fabffb3ad6c0 to disappear May 26 22:12:41.668: INFO: Pod pod-configmaps-190d2b0e-45dd-48fc-838d-fabffb3ad6c0 no longer exists [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:12:41.668: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-326" for this suite. • [SLOW TEST:6.523 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap should be consumable from pods in volume with mappings and Item mode set [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":237,"skipped":3748,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:12:41.681: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0666 on node default medium May 26 22:12:41.774: INFO: Waiting up to 5m0s for pod "pod-4287ca08-c21e-4ff1-acd4-5f8b9a42b22b" in namespace "emptydir-737" to be "success or failure" May 26 22:12:41.782: INFO: Pod "pod-4287ca08-c21e-4ff1-acd4-5f8b9a42b22b": Phase="Pending", Reason="", readiness=false. Elapsed: 8.252762ms May 26 22:12:43.862: INFO: Pod "pod-4287ca08-c21e-4ff1-acd4-5f8b9a42b22b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.087670584s May 26 22:12:45.866: INFO: Pod "pod-4287ca08-c21e-4ff1-acd4-5f8b9a42b22b": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.092144736s STEP: Saw pod success May 26 22:12:45.866: INFO: Pod "pod-4287ca08-c21e-4ff1-acd4-5f8b9a42b22b" satisfied condition "success or failure" May 26 22:12:45.869: INFO: Trying to get logs from node jerma-worker pod pod-4287ca08-c21e-4ff1-acd4-5f8b9a42b22b container test-container: STEP: delete the pod May 26 22:12:45.944: INFO: Waiting for pod pod-4287ca08-c21e-4ff1-acd4-5f8b9a42b22b to disappear May 26 22:12:45.949: INFO: Pod pod-4287ca08-c21e-4ff1-acd4-5f8b9a42b22b no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:12:45.949: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-737" for this suite. •{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0666,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":238,"skipped":3763,"failed":0} SSSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:12:45.959: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. [It] should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: check poststart hook STEP: delete the pod with lifecycle hook May 26 22:12:54.162: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 26 22:12:54.183: INFO: Pod pod-with-poststart-http-hook still exists May 26 22:12:56.183: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 26 22:12:56.187: INFO: Pod pod-with-poststart-http-hook still exists May 26 22:12:58.183: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 26 22:12:58.186: INFO: Pod pod-with-poststart-http-hook still exists May 26 22:13:00.183: INFO: Waiting for pod pod-with-poststart-http-hook to disappear May 26 22:13:00.186: INFO: Pod pod-with-poststart-http-hook no longer exists [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:13:00.186: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-9513" for this suite. 
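For readers reconstructing the fixture: the pod this test creates is essentially one container with a postStart httpGet lifecycle hook pointed at the handler pod deployed in the BeforeEach above. A minimal sketch, with the image, path, port, and host chosen for illustration (the suite wires in its own handler pod's IP):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-with-poststart-http-hook   # name as it appears in the log above
spec:
  containers:
  - name: pod-with-poststart-http-hook
    image: k8s.gcr.io/pause:3.1        # assumption: any long-running image works
    lifecycle:
      postStart:
        httpGet:
          path: /echo?msg=poststart    # illustrative path
          port: 8080                   # illustrative port
          host: 10.244.1.2             # illustrative handler-pod IP, not from this log
EOF

The kubelet fires the hook immediately after the container starts, which is why the test only needs to poll the handler to confirm the request arrived before deleting the pod.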
• [SLOW TEST:14.235 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute poststart http hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute poststart http hook properly [NodeConformance] [Conformance]","total":278,"completed":239,"skipped":3769,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:13:00.195: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1525 [It] should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 26 22:13:00.310: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-rc --image=docker.io/library/httpd:2.4.38-alpine --generator=run/v1 --namespace=kubectl-1638' May 26 22:13:03.264: INFO: stderr: "kubectl run --generator=run/v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 26 22:13:03.264: INFO: stdout: "replicationcontroller/e2e-test-httpd-rc created\n" STEP: verifying the rc e2e-test-httpd-rc was created STEP: verifying the pod controlled by rc e2e-test-httpd-rc was created STEP: confirm that you can get logs from an rc May 26 22:13:03.302: INFO: Waiting up to 5m0s for 1 pods to be running and ready: [e2e-test-httpd-rc-wl9xt] May 26 22:13:03.302: INFO: Waiting up to 5m0s for pod "e2e-test-httpd-rc-wl9xt" in namespace "kubectl-1638" to be "running and ready" May 26 22:13:03.394: INFO: Pod "e2e-test-httpd-rc-wl9xt": Phase="Pending", Reason="", readiness=false. Elapsed: 91.548366ms May 26 22:13:05.397: INFO: Pod "e2e-test-httpd-rc-wl9xt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.094688344s May 26 22:13:07.402: INFO: Pod "e2e-test-httpd-rc-wl9xt": Phase="Running", Reason="", readiness=true. Elapsed: 4.099260765s May 26 22:13:07.402: INFO: Pod "e2e-test-httpd-rc-wl9xt" satisfied condition "running and ready" May 26 22:13:07.402: INFO: Wanted all 1 pods to be running and ready. Result: true. 
Pods: [e2e-test-httpd-rc-wl9xt] May 26 22:13:07.402: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config logs rc/e2e-test-httpd-rc --namespace=kubectl-1638' May 26 22:13:07.502: INFO: stderr: "" May 26 22:13:07.502: INFO: stdout: "AH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.190. Set the 'ServerName' directive globally to suppress this message\nAH00558: httpd: Could not reliably determine the server's fully qualified domain name, using 10.244.1.190. Set the 'ServerName' directive globally to suppress this message\n[Tue May 26 22:13:06.169090 2020] [mpm_event:notice] [pid 1:tid 140648867105640] AH00489: Apache/2.4.38 (Unix) configured -- resuming normal operations\n[Tue May 26 22:13:06.169242 2020] [core:notice] [pid 1:tid 140648867105640] AH00094: Command line: 'httpd -D FOREGROUND'\n" [AfterEach] Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1530 May 26 22:13:07.502: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete rc e2e-test-httpd-rc --namespace=kubectl-1638' May 26 22:13:07.602: INFO: stderr: "" May 26 22:13:07.602: INFO: stdout: "replicationcontroller \"e2e-test-httpd-rc\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:13:07.602: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1638" for this suite. • [SLOW TEST:7.464 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Kubectl run rc /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1521 should create an rc from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Kubectl run rc should create an rc from an image [Conformance]","total":278,"completed":240,"skipped":3807,"failed":0} SSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:13:07.659: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 22:13:08.163: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 22:13:10.171: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", 
Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127988, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127988, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127988, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726127988, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 22:13:13.258: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 22:13:13.262: INFO: >>> kubeConfig: /root/.kube/config STEP: Registering the custom resource webhook via the AdmissionRegistration API STEP: Creating a custom resource that should be denied by the webhook STEP: Creating a custom resource whose deletion would be denied by the webhook STEP: Updating the custom resource with disallowed data should be denied STEP: Deleting the custom resource should be denied STEP: Remove the offending key and value from the custom resource data STEP: Deleting the updated custom resource should be successful [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:13:14.490: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-1788" for this suite. STEP: Destroying namespace "webhook-1788-markers" for this suite. 
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:6.939 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny custom resource creation, update and deletion [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny custom resource creation, update and deletion [Conformance]","total":278,"completed":241,"skipped":3825,"failed":0} SSSSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:13:14.598: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: set up a multi version CRD May 26 22:13:14.680: INFO: >>> kubeConfig: /root/.kube/config STEP: mark a version not served STEP: check the unserved version gets removed STEP: check the other version is not changed [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:13:29.800: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-923" for this suite.
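Marking a version "not served" is a one-field change on the CRD: flip served to false on that version entry, and the apiserver drops its definitions from the published /openapi/v2 document while leaving the still-served version intact. Roughly, with the CRD name and version index invented for illustration:

# assume versions[1] is the version being retired
kubectl patch customresourcedefinition multi-ver-crds.stable.example.com \
  --type=json \
  -p '[{"op": "replace", "path": "/spec/versions/1/served", "value": false}]'
# the unserved version's schema should now be absent from the published spec
kubectl get --raw /openapi/v2 | grep -c 'multi-ver-crd'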
• [SLOW TEST:15.207 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 removes definition from spec when one version gets changed to not be served [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] removes definition from spec when one version gets changed to not be served [Conformance]","total":278,"completed":242,"skipped":3830,"failed":0} SSS ------------------------------ [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:13:29.805: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-publish-openapi STEP: Waiting for a default service account to be provisioned in namespace [It] works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 22:13:29.875: INFO: >>> kubeConfig: /root/.kube/config STEP: client-side validation (kubectl create and apply) allows request with any unknown properties May 26 22:13:32.807: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-872 create -f -' May 26 22:13:37.582: INFO: stderr: "" May 26 22:13:37.582: INFO: stdout: "e2e-test-crd-publish-openapi-2358-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 26 22:13:37.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-872 delete e2e-test-crd-publish-openapi-2358-crds test-cr' May 26 22:13:37.704: INFO: stderr: "" May 26 22:13:37.704: INFO: stdout: "e2e-test-crd-publish-openapi-2358-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" May 26 22:13:37.704: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-872 apply -f -' May 26 22:13:40.710: INFO: stderr: "" May 26 22:13:40.710: INFO: stdout: "e2e-test-crd-publish-openapi-2358-crd.crd-publish-openapi-test-unknown-in-nested.example.com/test-cr created\n" May 26 22:13:40.710: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config --namespace=crd-publish-openapi-872 delete e2e-test-crd-publish-openapi-2358-crds test-cr' May 26 22:13:40.816: INFO: stderr: "" May 26 22:13:40.816: INFO: stdout: "e2e-test-crd-publish-openapi-2358-crd.crd-publish-openapi-test-unknown-in-nested.example.com \"test-cr\" deleted\n" STEP: kubectl explain works to explain CR May 26 22:13:40.816: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config explain e2e-test-crd-publish-openapi-2358-crds' May 26 22:13:42.987: INFO: stderr: "" May 26 22:13:42.987: INFO: stdout: "KIND: E2e-test-crd-publish-openapi-2358-crd\nVERSION: crd-publish-openapi-test-unknown-in-nested.example.com/v1\n\nDESCRIPTION:\n preserve-unknown-properties in nested field for 
Testing\n\nFIELDS:\n apiVersion\t\n APIVersion defines the versioned schema of this representation of an\n object. Servers should convert recognized schemas to the latest internal\n value, and may reject unrecognized values. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources\n\n kind\t\n Kind is a string value representing the REST resource this object\n represents. Servers may infer this from the endpoint the client submits\n requests to. Cannot be updated. In CamelCase. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds\n\n metadata\t\n Standard object's metadata. More info:\n https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata\n\n spec\t\n Specification of Waldo\n\n status\t\n Status of Waldo\n\n" [AfterEach] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:13:45.870: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-publish-openapi-872" for this suite. • [SLOW TEST:16.073 seconds] [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 works for CRD preserving unknown fields in an embedded object [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourcePublishOpenAPI [Privileged:ClusterAdmin] works for CRD preserving unknown fields in an embedded object [Conformance]","total":278,"completed":243,"skipped":3833,"failed":0} S ------------------------------ [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:13:45.878: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should set mode on item file [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 26 22:13:46.002: INFO: Waiting up to 5m0s for pod "downwardapi-volume-f3c74749-1704-4c63-b8db-0639b0e8984a" in namespace "projected-701" to be "success or failure" May 26 22:13:46.009: INFO: Pod "downwardapi-volume-f3c74749-1704-4c63-b8db-0639b0e8984a": Phase="Pending", Reason="", readiness=false. Elapsed: 7.515458ms May 26 22:13:48.244: INFO: Pod "downwardapi-volume-f3c74749-1704-4c63-b8db-0639b0e8984a": Phase="Pending", Reason="", readiness=false. Elapsed: 2.242161677s May 26 22:13:50.248: INFO: Pod "downwardapi-volume-f3c74749-1704-4c63-b8db-0639b0e8984a": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.246299667s STEP: Saw pod success May 26 22:13:50.248: INFO: Pod "downwardapi-volume-f3c74749-1704-4c63-b8db-0639b0e8984a" satisfied condition "success or failure" May 26 22:13:50.251: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-f3c74749-1704-4c63-b8db-0639b0e8984a container client-container: STEP: delete the pod May 26 22:13:50.293: INFO: Waiting for pod downwardapi-volume-f3c74749-1704-4c63-b8db-0639b0e8984a to disappear May 26 22:13:50.308: INFO: Pod downwardapi-volume-f3c74749-1704-4c63-b8db-0639b0e8984a no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:13:50.308: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-701" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should set mode on item file [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":244,"skipped":3834,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:13:50.315: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 22:13:50.418: INFO: Create a RollingUpdate DaemonSet May 26 22:13:50.421: INFO: Check that daemon pods launch on every node of the cluster May 26 22:13:50.425: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 22:13:50.456: INFO: Number of nodes with available pods: 0 May 26 22:13:50.456: INFO: Node jerma-worker is running more than one daemon pod May 26 22:13:51.462: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 22:13:51.466: INFO: Number of nodes with available pods: 0 May 26 22:13:51.466: INFO: Node jerma-worker is running more than one daemon pod May 26 22:13:52.465: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 22:13:52.470: INFO: Number of nodes with available pods: 0 May 26 22:13:52.470: INFO: Node jerma-worker is running more than one daemon pod May 26 22:13:53.462: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 22:13:53.466: INFO: Number of nodes with available pods: 0 May 26 22:13:53.466: INFO: Node jerma-worker is running more than one daemon pod May 26 22:13:54.462: INFO: DaemonSet pods can't tolerate node jerma-control-plane with 
taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 22:13:54.466: INFO: Number of nodes with available pods: 1 May 26 22:13:54.466: INFO: Node jerma-worker2 is running more than one daemon pod May 26 22:13:55.462: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 22:13:55.485: INFO: Number of nodes with available pods: 2 May 26 22:13:55.485: INFO: Number of running nodes: 2, number of available pods: 2 May 26 22:13:55.485: INFO: Update the DaemonSet to trigger a rollout May 26 22:13:55.492: INFO: Updating DaemonSet daemon-set May 26 22:14:09.533: INFO: Roll back the DaemonSet before rollout is complete May 26 22:14:09.539: INFO: Updating DaemonSet daemon-set May 26 22:14:09.539: INFO: Make sure DaemonSet rollback is complete May 26 22:14:09.543: INFO: Wrong image for pod: daemon-set-6kb4q. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 26 22:14:09.543: INFO: Pod daemon-set-6kb4q is not available May 26 22:14:09.606: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 22:14:10.611: INFO: Wrong image for pod: daemon-set-6kb4q. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 26 22:14:10.611: INFO: Pod daemon-set-6kb4q is not available May 26 22:14:10.616: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 22:14:12.013: INFO: Wrong image for pod: daemon-set-6kb4q. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. May 26 22:14:12.013: INFO: Pod daemon-set-6kb4q is not available May 26 22:14:12.019: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 22:14:12.611: INFO: Wrong image for pod: daemon-set-6kb4q. Expected: docker.io/library/httpd:2.4.38-alpine, got: foo:non-existent. 
May 26 22:14:12.611: INFO: Pod daemon-set-6kb4q is not available May 26 22:14:12.642: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node May 26 22:14:13.610: INFO: Pod daemon-set-c8wcj is not available May 26 22:14:13.614: INFO: DaemonSet pods can't tolerate node jerma-control-plane with taints [{Key:node-role.kubernetes.io/master Value: Effect:NoSchedule TimeAdded:}], skip checking this node [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-7422, will wait for the garbage collector to delete the pods May 26 22:14:13.680: INFO: Deleting DaemonSet.extensions daemon-set took: 6.81877ms May 26 22:14:13.980: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.259196ms May 26 22:14:19.484: INFO: Number of nodes with available pods: 0 May 26 22:14:19.484: INFO: Number of running nodes: 0, number of available pods: 0 May 26 22:14:19.487: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-7422/daemonsets","resourceVersion":"19394901"},"items":null} May 26 22:14:19.489: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-7422/pods","resourceVersion":"19394901"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:14:19.498: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-7422" for this suite. 
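The sequence the test drives through the API (stall a RollingUpdate with an unpullable image, then roll back before it completes) has a direct kubectl equivalent against the names in the log; the container name "app" is an assumption:

kubectl -n daemonsets-7422 set image daemonset/daemon-set app=foo:non-existent
kubectl -n daemonsets-7422 rollout undo daemonset/daemon-set
kubectl -n daemonsets-7422 rollout status daemonset/daemon-set

The "without unnecessary restarts" assertion is visible above: only the stalled pod daemon-set-6kb4q is replaced (by daemon-set-c8wcj); pods that never left the original image are not restarted by the rollback.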
• [SLOW TEST:29.190 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should rollback without unnecessary restarts [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should rollback without unnecessary restarts [Conformance]","total":278,"completed":245,"skipped":3858,"failed":0} SSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] ReplicationController should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:14:19.506: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename replication-controller STEP: Waiting for a default service account to be provisioned in namespace [It] should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Given a ReplicationController is created STEP: When the matched label of one of its pods change May 26 22:14:19.611: INFO: Pod name pod-release: Found 0 pods out of 1 May 26 22:14:24.743: INFO: Pod name pod-release: Found 1 pods out of 1 STEP: Then the pod is released [AfterEach] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:14:25.773: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "replication-controller-3940" for this suite. • [SLOW TEST:6.723 seconds] [sig-apps] ReplicationController /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should release no longer matching pods [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] ReplicationController should release no longer matching pods [Conformance]","total":278,"completed":246,"skipped":3881,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:14:26.230: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename proxy STEP: Waiting for a default service account to be provisioned in namespace [It] should proxy logs on node using proxy subresource [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 22:14:26.410: INFO: (0) /api/v1/nodes/jerma-worker/proxy/logs/:
containers/ pods/ (200; 7.235585ms) May 26 22:14:26.414: INFO: (1) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 3.954967ms) May 26 22:14:26.421: INFO: (2) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 7.079416ms) May 26 22:14:26.426: INFO: (3) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 4.542636ms) May 26 22:14:26.432: INFO: (4) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 5.790377ms) May 26 22:14:26.504: INFO: (5) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 72.304276ms) May 26 22:14:26.547: INFO: (6) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 42.94939ms) May 26 22:14:26.552: INFO: (7) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 5.030493ms) May 26 22:14:26.558: INFO: (8) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 5.625278ms) May 26 22:14:26.576: INFO: (9) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 18.378043ms) May 26 22:14:26.594: INFO: (10) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 18.002118ms) May 26 22:14:26.667: INFO: (11) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 72.307862ms) May 26 22:14:26.679: INFO: (12) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 12.079231ms) May 26 22:14:26.685: INFO: (13) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 6.158856ms) May 26 22:14:26.714: INFO: (14) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 28.920021ms) May 26 22:14:26.719: INFO: (15) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 5.517423ms) May 26 22:14:26.740: INFO: (16) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 20.122519ms) May 26 22:14:26.761: INFO: (17) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 21.816708ms) May 26 22:14:26.792: INFO: (18) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/ (200; 30.614981ms) May 26 22:14:26.798: INFO: (19) /api/v1/nodes/jerma-worker/proxy/logs/: containers/ pods/
(200; 6.047983ms) [AfterEach] version v1 /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:14:26.798: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "proxy-487" for this suite. •{"msg":"PASSED [sig-network] Proxy version v1 should proxy logs on node using proxy subresource [Conformance]","total":278,"completed":247,"skipped":3902,"failed":0} SSSSSSSSS ------------------------------ [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:14:26.828: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename secrets STEP: Waiting for a default service account to be provisioned in namespace [It] should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating secret with name secret-test-5b1153f6-b7f7-4206-a760-647fc3e11208 STEP: Creating a pod to test consume secrets May 26 22:14:26.994: INFO: Waiting up to 5m0s for pod "pod-secrets-0f1eeda1-0e30-46d4-baa8-90ee744e6d5f" in namespace "secrets-644" to be "success or failure" May 26 22:14:27.051: INFO: Pod "pod-secrets-0f1eeda1-0e30-46d4-baa8-90ee744e6d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 56.726843ms May 26 22:14:29.055: INFO: Pod "pod-secrets-0f1eeda1-0e30-46d4-baa8-90ee744e6d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 2.060507986s May 26 22:14:31.375: INFO: Pod "pod-secrets-0f1eeda1-0e30-46d4-baa8-90ee744e6d5f": Phase="Pending", Reason="", readiness=false. Elapsed: 4.380401435s May 26 22:14:33.378: INFO: Pod "pod-secrets-0f1eeda1-0e30-46d4-baa8-90ee744e6d5f": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.383605455s STEP: Saw pod success May 26 22:14:33.378: INFO: Pod "pod-secrets-0f1eeda1-0e30-46d4-baa8-90ee744e6d5f" satisfied condition "success or failure" May 26 22:14:33.380: INFO: Trying to get logs from node jerma-worker2 pod pod-secrets-0f1eeda1-0e30-46d4-baa8-90ee744e6d5f container secret-volume-test: STEP: delete the pod May 26 22:14:34.302: INFO: Waiting for pod pod-secrets-0f1eeda1-0e30-46d4-baa8-90ee744e6d5f to disappear May 26 22:14:34.341: INFO: Pod pod-secrets-0f1eeda1-0e30-46d4-baa8-90ee744e6d5f no longer exists [AfterEach] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:14:34.341: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "secrets-644" for this suite. 
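The pod in this test mounts the same secret through two distinct volumes and reads it back from both paths. A minimal sketch, with busybox standing in for the suite's mounttest image and the mount paths chosen for illustration (the secret name and namespace are taken from the log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: pod-secrets-two-volumes                # hypothetical name
  namespace: secrets-644                       # from the log above
spec:
  restartPolicy: Never
  volumes:
  - name: secret-volume-1
    secret:
      secretName: secret-test-5b1153f6-b7f7-4206-a760-647fc3e11208   # from the log
  - name: secret-volume-2
    secret:
      secretName: secret-test-5b1153f6-b7f7-4206-a760-647fc3e11208   # same secret, second volume
  containers:
  - name: secret-volume-test
    image: busybox                             # assumption: any shell image works
    command: ["sh", "-c", "cat /etc/secret-volume-1/* /etc/secret-volume-2/*"]
    volumeMounts:
    - name: secret-volume-1
      mountPath: /etc/secret-volume-1
      readOnly: true
    - name: secret-volume-2
      mountPath: /etc/secret-volume-2
      readOnly: true
EOF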
• [SLOW TEST:7.520 seconds] [sig-storage] Secrets /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/secrets_volume.go:34 should be consumable in multiple volumes in a pod [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Secrets should be consumable in multiple volumes in a pod [NodeConformance] [Conformance]","total":278,"completed":248,"skipped":3911,"failed":0} S ------------------------------ [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:14:34.348: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename daemonsets STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:133 [It] should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 22:14:35.492: INFO: Creating daemon "daemon-set" with a node selector STEP: Initially, daemon pods should not be running on any nodes. May 26 22:14:35.549: INFO: Number of nodes with available pods: 0 May 26 22:14:35.549: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Change node label to blue, check that daemon pod is launched. May 26 22:14:35.670: INFO: Number of nodes with available pods: 0 May 26 22:14:35.670: INFO: Node jerma-worker is running more than one daemon pod May 26 22:14:36.730: INFO: Number of nodes with available pods: 0 May 26 22:14:36.730: INFO: Node jerma-worker is running more than one daemon pod May 26 22:14:37.674: INFO: Number of nodes with available pods: 0 May 26 22:14:37.674: INFO: Node jerma-worker is running more than one daemon pod May 26 22:14:38.773: INFO: Number of nodes with available pods: 0 May 26 22:14:38.773: INFO: Node jerma-worker is running more than one daemon pod May 26 22:14:39.675: INFO: Number of nodes with available pods: 0 May 26 22:14:39.675: INFO: Node jerma-worker is running more than one daemon pod May 26 22:14:40.749: INFO: Number of nodes with available pods: 1 May 26 22:14:40.749: INFO: Number of running nodes: 1, number of available pods: 1 STEP: Update the node label to green, and wait for daemons to be unscheduled May 26 22:14:40.983: INFO: Number of nodes with available pods: 1 May 26 22:14:40.983: INFO: Number of running nodes: 0, number of available pods: 1 May 26 22:14:41.988: INFO: Number of nodes with available pods: 0 May 26 22:14:41.988: INFO: Number of running nodes: 0, number of available pods: 0 STEP: Update DaemonSet node selector to green, and change its update strategy to RollingUpdate May 26 22:14:42.013: INFO: Number of nodes with available pods: 0 May 26 22:14:42.013: INFO: Node jerma-worker is running more than one daemon pod May 26 22:14:43.151: INFO: Number of nodes with available pods: 0 May 26 22:14:43.152: INFO: Node jerma-worker is running more than one daemon pod May 26 22:14:44.018: INFO: Number of nodes with available pods: 0 May 26 22:14:44.018: INFO: Node jerma-worker is running more than one 
daemon pod May 26 22:14:45.017: INFO: Number of nodes with available pods: 0 May 26 22:14:45.017: INFO: Node jerma-worker is running more than one daemon pod May 26 22:14:46.017: INFO: Number of nodes with available pods: 0 May 26 22:14:46.017: INFO: Node jerma-worker is running more than one daemon pod May 26 22:14:47.018: INFO: Number of nodes with available pods: 0 May 26 22:14:47.018: INFO: Node jerma-worker is running more than one daemon pod May 26 22:14:48.018: INFO: Number of nodes with available pods: 0 May 26 22:14:48.018: INFO: Node jerma-worker is running more than one daemon pod May 26 22:14:49.018: INFO: Number of nodes with available pods: 0 May 26 22:14:49.018: INFO: Node jerma-worker is running more than one daemon pod May 26 22:14:50.775: INFO: Number of nodes with available pods: 0 May 26 22:14:50.775: INFO: Node jerma-worker is running more than one daemon pod May 26 22:14:51.017: INFO: Number of nodes with available pods: 0 May 26 22:14:51.017: INFO: Node jerma-worker is running more than one daemon pod May 26 22:14:52.017: INFO: Number of nodes with available pods: 0 May 26 22:14:52.017: INFO: Node jerma-worker is running more than one daemon pod May 26 22:14:53.268: INFO: Number of nodes with available pods: 0 May 26 22:14:53.268: INFO: Node jerma-worker is running more than one daemon pod May 26 22:14:54.018: INFO: Number of nodes with available pods: 1 May 26 22:14:54.018: INFO: Number of running nodes: 1, number of available pods: 1 [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/daemon_set.go:99 STEP: Deleting DaemonSet "daemon-set" STEP: deleting DaemonSet.extensions daemon-set in namespace daemonsets-2686, will wait for the garbage collector to delete the pods May 26 22:14:54.083: INFO: Deleting DaemonSet.extensions daemon-set took: 6.664028ms May 26 22:14:54.383: INFO: Terminating DaemonSet.extensions daemon-set pods took: 300.196692ms May 26 22:15:09.287: INFO: Number of nodes with available pods: 0 May 26 22:15:09.287: INFO: Number of running nodes: 0, number of available pods: 0 May 26 22:15:09.290: INFO: daemonset: {"kind":"DaemonSetList","apiVersion":"apps/v1","metadata":{"selfLink":"/apis/apps/v1/namespaces/daemonsets-2686/daemonsets","resourceVersion":"19395188"},"items":null} May 26 22:15:09.292: INFO: pods: {"kind":"PodList","apiVersion":"v1","metadata":{"selfLink":"/api/v1/namespaces/daemonsets-2686/pods","resourceVersion":"19395188"},"items":null} [AfterEach] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:15:09.335: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "daemonsets-2686" for this suite. 
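The complex-daemon sequence above is driven entirely by labels: the DaemonSet carries a nodeSelector, so its pod appears when a node gains the matching label, disappears when the label changes, and returns once the selector itself is updated. A sketch of the same flow follows; the label key color, the names, and the httpd image are illustrative assumptions (the e2e test uses generated label keys):

$ cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemon-set
spec:
  selector:
    matchLabels:
      app: daemon-set
  template:
    metadata:
      labels:
        app: daemon-set
    spec:
      nodeSelector:
        color: blue              # no node carries this label yet, so no daemon pods run
      containers:
      - name: app
        image: httpd:2.4.38-alpine
EOF
$ kubectl label node jerma-worker color=blue               # daemon pod is scheduled onto jerma-worker
$ kubectl label node jerma-worker color=green --overwrite  # selector no longer matches; the pod is evicted
# Re-target the DaemonSet at green nodes and switch its update strategy, as the test does:
$ kubectl patch daemonset daemon-set -p '{"spec":{"template":{"spec":{"nodeSelector":{"color":"green"}}},"updateStrategy":{"type":"RollingUpdate"}}}'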
• [SLOW TEST:35.010 seconds] [sig-apps] Daemon set [Serial] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 should run and stop complex daemon [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] Daemon set [Serial] should run and stop complex daemon [Conformance]","total":278,"completed":249,"skipped":3912,"failed":0} SSSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:15:09.359: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on tmpfs May 26 22:15:09.512: INFO: Waiting up to 5m0s for pod "pod-b36b1f36-05b7-4cc9-a0ac-96fd316fec6c" in namespace "emptydir-7028" to be "success or failure" May 26 22:15:09.515: INFO: Pod "pod-b36b1f36-05b7-4cc9-a0ac-96fd316fec6c": Phase="Pending", Reason="", readiness=false. Elapsed: 3.432912ms May 26 22:15:11.520: INFO: Pod "pod-b36b1f36-05b7-4cc9-a0ac-96fd316fec6c": Phase="Pending", Reason="", readiness=false. Elapsed: 2.008112873s May 26 22:15:13.525: INFO: Pod "pod-b36b1f36-05b7-4cc9-a0ac-96fd316fec6c": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.012589198s STEP: Saw pod success May 26 22:15:13.525: INFO: Pod "pod-b36b1f36-05b7-4cc9-a0ac-96fd316fec6c" satisfied condition "success or failure" May 26 22:15:13.528: INFO: Trying to get logs from node jerma-worker pod pod-b36b1f36-05b7-4cc9-a0ac-96fd316fec6c container test-container: STEP: delete the pod May 26 22:15:13.566: INFO: Waiting for pod pod-b36b1f36-05b7-4cc9-a0ac-96fd316fec6c to disappear May 26 22:15:13.594: INFO: Pod pod-b36b1f36-05b7-4cc9-a0ac-96fd316fec6c no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:15:13.594: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-7028" for this suite. 
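The (non-root,0644,tmpfs) emptyDir case combines three knobs: a memory-backed volume (medium: Memory), a file written with mode 0644, and a non-root security context. A self-contained sketch, with illustrative names and a busybox stand-in for the e2e mount-test image:

$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo
spec:
  restartPolicy: Never
  securityContext:
    runAsUser: 1001              # the non-root variant of the test
  containers:
  - name: test-container
    image: busybox:1.31
    command: ["sh", "-c", "echo hi > /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f && grep ' /test-volume ' /proc/mounts"]
    volumeMounts:
    - name: test-volume
      mountPath: /test-volume
  volumes:
  - name: test-volume
    emptyDir:
      medium: Memory             # tmpfs-backed emptyDir
EOF
$ kubectl logs emptydir-tmpfs-demo   # the mount line should report tmpfs and the file mode -rw-r--r--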
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,tmpfs) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":250,"skipped":3916,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:15:13.602: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1489 [It] should create an rc or deployment from an image [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: running the image docker.io/library/httpd:2.4.38-alpine May 26 22:15:13.661: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config run e2e-test-httpd-deployment --image=docker.io/library/httpd:2.4.38-alpine --namespace=kubectl-8322' May 26 22:15:13.766: INFO: stderr: "kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.\n" May 26 22:15:13.766: INFO: stdout: "deployment.apps/e2e-test-httpd-deployment created\n" STEP: verifying the pod controlled by e2e-test-httpd-deployment gets created [AfterEach] Kubectl run default /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:1495 May 26 22:15:15.777: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete deployment e2e-test-httpd-deployment --namespace=kubectl-8322' May 26 22:15:15.959: INFO: stderr: "" May 26 22:15:15.959: INFO: stdout: "deployment.apps \"e2e-test-httpd-deployment\" deleted\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:15:15.959: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-8322" for this suite. 
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl run default should create an rc or deployment from an image [Conformance]","total":278,"completed":251,"skipped":3928,"failed":0} SSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:15:16.092: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-9942 [It] should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a new StatefulSet May 26 22:15:16.402: INFO: Found 0 stateful pods, waiting for 3 May 26 22:15:26.438: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 26 22:15:26.438: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 26 22:15:26.438: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Pending - Ready=false May 26 22:15:36.407: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 26 22:15:36.407: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 26 22:15:36.407: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Updating stateful set template: update image from docker.io/library/httpd:2.4.38-alpine to docker.io/library/httpd:2.4.39-alpine May 26 22:15:36.435: INFO: Updating stateful set ss2 STEP: Creating a new revision STEP: Not applying an update when the partition is greater than the number of replicas STEP: Performing a canary update May 26 22:15:46.503: INFO: Updating stateful set ss2 May 26 22:15:46.534: INFO: Waiting for Pod statefulset-9942/ss2-2 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 STEP: Restoring Pods to the correct revision when they are deleted May 26 22:15:56.918: INFO: Found 2 stateful pods, waiting for 3 May 26 22:16:06.924: INFO: Waiting for pod ss2-0 to enter Running - Ready=true, currently Running - Ready=true May 26 22:16:06.924: INFO: Waiting for pod ss2-1 to enter Running - Ready=true, currently Running - Ready=true May 26 22:16:06.924: INFO: Waiting for pod ss2-2 to enter Running - Ready=true, currently Running - Ready=true STEP: Performing a phased rolling update May 26 22:16:06.949: INFO: Updating stateful set ss2 May 26 22:16:06.991: INFO: Waiting for Pod statefulset-9942/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 26 22:16:16.999: INFO: Waiting for Pod statefulset-9942/ss2-1 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 May 26 22:16:27.017: 
INFO: Updating stateful set ss2 May 26 22:16:27.034: INFO: Waiting for StatefulSet statefulset-9942/ss2 to complete update May 26 22:16:27.034: INFO: Waiting for Pod statefulset-9942/ss2-0 to have revision ss2-84f9d6bf57 update revision ss2-65c7964b94 [AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90 May 26 22:16:37.042: INFO: Deleting all statefulset in ns statefulset-9942 May 26 22:16:37.046: INFO: Scaling statefulset ss2 to 0 May 26 22:16:47.068: INFO: Waiting for statefulset status.replicas updated to 0 May 26 22:16:47.071: INFO: Deleting statefulset ss2 [AfterEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:16:47.084: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "statefulset-9942" for this suite. • [SLOW TEST:91.017 seconds] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23 [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 should perform canary updates and phased rolling updates of template modifications [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] should perform canary updates and phased rolling updates of template modifications [Conformance]","total":278,"completed":252,"skipped":3949,"failed":0} SSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:16:47.110: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should be able to update and delete ResourceQuota. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a ResourceQuota STEP: Getting a ResourceQuota STEP: Updating a ResourceQuota STEP: Verifying a ResourceQuota was modified STEP: Deleting a ResourceQuota STEP: Verifying the deleted ResourceQuota [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:16:47.232: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5072" for this suite. •{"msg":"PASSED [sig-api-machinery] ResourceQuota should be able to update and delete ResourceQuota. [Conformance]","total":278,"completed":253,"skipped":3964,"failed":0} SSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:16:47.239: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a Service STEP: Ensuring resource quota status captures service creation STEP: Deleting a Service STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:16:58.524: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-9769" for this suite. • [SLOW TEST:11.294 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a service. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a service. [Conformance]","total":278,"completed":254,"skipped":3973,"failed":0} SSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:16:58.534: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should provide container's cpu request [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test downward API volume plugin May 26 22:16:58.648: INFO: Waiting up to 5m0s for pod "downwardapi-volume-1f5f087b-9f1e-4543-9710-e5a78a83e818" in namespace "projected-6744" to be "success or failure" May 26 22:16:58.652: INFO: Pod "downwardapi-volume-1f5f087b-9f1e-4543-9710-e5a78a83e818": Phase="Pending", Reason="", readiness=false. Elapsed: 3.376311ms May 26 22:17:00.656: INFO: Pod "downwardapi-volume-1f5f087b-9f1e-4543-9710-e5a78a83e818": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007717053s May 26 22:17:02.660: INFO: Pod "downwardapi-volume-1f5f087b-9f1e-4543-9710-e5a78a83e818": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.011620814s STEP: Saw pod success May 26 22:17:02.660: INFO: Pod "downwardapi-volume-1f5f087b-9f1e-4543-9710-e5a78a83e818" satisfied condition "success or failure" May 26 22:17:02.663: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-1f5f087b-9f1e-4543-9710-e5a78a83e818 container client-container: STEP: delete the pod May 26 22:17:02.782: INFO: Waiting for pod downwardapi-volume-1f5f087b-9f1e-4543-9710-e5a78a83e818 to disappear May 26 22:17:02.794: INFO: Pod downwardapi-volume-1f5f087b-9f1e-4543-9710-e5a78a83e818 no longer exists [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:17:02.794: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-6744" for this suite. •{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's cpu request [NodeConformance] [Conformance]","total":278,"completed":255,"skipped":3985,"failed":0} SSSSSSSSSSS ------------------------------ [sig-network] Services should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:17:02.801: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating service nodeport-test with type=NodePort in namespace services-3543 STEP: creating replication controller nodeport-test in namespace services-3543 I0526 22:17:02.998890 6 runners.go:189] Created replication controller with name: nodeport-test, namespace: services-3543, replica count: 2 I0526 22:17:06.049486 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 22:17:09.049689 6 runners.go:189] nodeport-test Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 26 22:17:09.049: INFO: Creating new exec pod May 26 22:17:14.090: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3543 execpoddl5tj -- /bin/sh -x -c nc -zv -t -w 2 nodeport-test 80' May 26 22:17:14.381: INFO: stderr: "I0526 22:17:14.242185 3073 log.go:172] (0xc000515130) (0xc000928000) Create stream\nI0526 22:17:14.242248 3073 log.go:172] (0xc000515130) (0xc000928000) Stream added, broadcasting: 1\nI0526 22:17:14.245567 3073 log.go:172] (0xc000515130) Reply frame received for 1\nI0526 22:17:14.245624 3073 log.go:172] (0xc000515130) (0xc000314000) Create stream\nI0526 22:17:14.245653 3073 log.go:172] (0xc000515130) (0xc000314000) Stream added, broadcasting: 3\nI0526 22:17:14.246801 3073 log.go:172] (0xc000515130) Reply frame received for 3\nI0526 22:17:14.246856 3073 log.go:172] (0xc000515130) (0xc0009280a0) Create stream\nI0526 22:17:14.246875 3073 log.go:172] (0xc000515130) (0xc0009280a0) Stream added, 
broadcasting: 5\nI0526 22:17:14.247746 3073 log.go:172] (0xc000515130) Reply frame received for 5\nI0526 22:17:14.346386 3073 log.go:172] (0xc000515130) Data frame received for 5\nI0526 22:17:14.346407 3073 log.go:172] (0xc0009280a0) (5) Data frame handling\nI0526 22:17:14.346414 3073 log.go:172] (0xc0009280a0) (5) Data frame sent\n+ nc -zv -t -w 2 nodeport-test 80\nI0526 22:17:14.371442 3073 log.go:172] (0xc000515130) Data frame received for 3\nI0526 22:17:14.371487 3073 log.go:172] (0xc000314000) (3) Data frame handling\nI0526 22:17:14.371642 3073 log.go:172] (0xc000515130) Data frame received for 5\nI0526 22:17:14.371673 3073 log.go:172] (0xc0009280a0) (5) Data frame handling\nConnection to nodeport-test 80 port [tcp/http] succeeded!\nI0526 22:17:14.371780 3073 log.go:172] (0xc0009280a0) (5) Data frame sent\nI0526 22:17:14.371824 3073 log.go:172] (0xc000515130) Data frame received for 5\nI0526 22:17:14.371842 3073 log.go:172] (0xc0009280a0) (5) Data frame handling\nI0526 22:17:14.374047 3073 log.go:172] (0xc000515130) Data frame received for 1\nI0526 22:17:14.374079 3073 log.go:172] (0xc000928000) (1) Data frame handling\nI0526 22:17:14.374094 3073 log.go:172] (0xc000928000) (1) Data frame sent\nI0526 22:17:14.374110 3073 log.go:172] (0xc000515130) (0xc000928000) Stream removed, broadcasting: 1\nI0526 22:17:14.374189 3073 log.go:172] (0xc000515130) Go away received\nI0526 22:17:14.374488 3073 log.go:172] (0xc000515130) (0xc000928000) Stream removed, broadcasting: 1\nI0526 22:17:14.374510 3073 log.go:172] (0xc000515130) (0xc000314000) Stream removed, broadcasting: 3\nI0526 22:17:14.374520 3073 log.go:172] (0xc000515130) (0xc0009280a0) Stream removed, broadcasting: 5\n" May 26 22:17:14.381: INFO: stdout: "" May 26 22:17:14.382: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3543 execpoddl5tj -- /bin/sh -x -c nc -zv -t -w 2 10.109.16.85 80' May 26 22:17:14.582: INFO: stderr: "I0526 22:17:14.514479 3096 log.go:172] (0xc0008c2790) (0xc000605f40) Create stream\nI0526 22:17:14.514548 3096 log.go:172] (0xc0008c2790) (0xc000605f40) Stream added, broadcasting: 1\nI0526 22:17:14.517492 3096 log.go:172] (0xc0008c2790) Reply frame received for 1\nI0526 22:17:14.517545 3096 log.go:172] (0xc0008c2790) (0xc0005cc780) Create stream\nI0526 22:17:14.517570 3096 log.go:172] (0xc0008c2790) (0xc0005cc780) Stream added, broadcasting: 3\nI0526 22:17:14.518449 3096 log.go:172] (0xc0008c2790) Reply frame received for 3\nI0526 22:17:14.518482 3096 log.go:172] (0xc0008c2790) (0xc000281540) Create stream\nI0526 22:17:14.518505 3096 log.go:172] (0xc0008c2790) (0xc000281540) Stream added, broadcasting: 5\nI0526 22:17:14.519457 3096 log.go:172] (0xc0008c2790) Reply frame received for 5\nI0526 22:17:14.574934 3096 log.go:172] (0xc0008c2790) Data frame received for 3\nI0526 22:17:14.574999 3096 log.go:172] (0xc0005cc780) (3) Data frame handling\nI0526 22:17:14.575034 3096 log.go:172] (0xc0008c2790) Data frame received for 5\nI0526 22:17:14.575053 3096 log.go:172] (0xc000281540) (5) Data frame handling\nI0526 22:17:14.575076 3096 log.go:172] (0xc000281540) (5) Data frame sent\nI0526 22:17:14.575087 3096 log.go:172] (0xc0008c2790) Data frame received for 5\nI0526 22:17:14.575096 3096 log.go:172] (0xc000281540) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.16.85 80\nConnection to 10.109.16.85 80 port [tcp/http] succeeded!\nI0526 22:17:14.576650 3096 log.go:172] (0xc0008c2790) Data frame received for 1\nI0526 22:17:14.576690 3096 log.go:172] (0xc000605f40) (1) Data frame 
handling\nI0526 22:17:14.576723 3096 log.go:172] (0xc000605f40) (1) Data frame sent\nI0526 22:17:14.576742 3096 log.go:172] (0xc0008c2790) (0xc000605f40) Stream removed, broadcasting: 1\nI0526 22:17:14.576762 3096 log.go:172] (0xc0008c2790) Go away received\nI0526 22:17:14.577375 3096 log.go:172] (0xc0008c2790) (0xc000605f40) Stream removed, broadcasting: 1\nI0526 22:17:14.577398 3096 log.go:172] (0xc0008c2790) (0xc0005cc780) Stream removed, broadcasting: 3\nI0526 22:17:14.577410 3096 log.go:172] (0xc0008c2790) (0xc000281540) Stream removed, broadcasting: 5\n" May 26 22:17:14.582: INFO: stdout: "" May 26 22:17:14.582: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3543 execpoddl5tj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 30640' May 26 22:17:14.810: INFO: stderr: "I0526 22:17:14.718517 3118 log.go:172] (0xc0009be580) (0xc00095e000) Create stream\nI0526 22:17:14.718612 3118 log.go:172] (0xc0009be580) (0xc00095e000) Stream added, broadcasting: 1\nI0526 22:17:14.722025 3118 log.go:172] (0xc0009be580) Reply frame received for 1\nI0526 22:17:14.722062 3118 log.go:172] (0xc0009be580) (0xc00095e0a0) Create stream\nI0526 22:17:14.722074 3118 log.go:172] (0xc0009be580) (0xc00095e0a0) Stream added, broadcasting: 3\nI0526 22:17:14.723121 3118 log.go:172] (0xc0009be580) Reply frame received for 3\nI0526 22:17:14.723180 3118 log.go:172] (0xc0009be580) (0xc0005edb80) Create stream\nI0526 22:17:14.723212 3118 log.go:172] (0xc0009be580) (0xc0005edb80) Stream added, broadcasting: 5\nI0526 22:17:14.724375 3118 log.go:172] (0xc0009be580) Reply frame received for 5\nI0526 22:17:14.802296 3118 log.go:172] (0xc0009be580) Data frame received for 5\nI0526 22:17:14.802321 3118 log.go:172] (0xc0005edb80) (5) Data frame handling\nI0526 22:17:14.802336 3118 log.go:172] (0xc0005edb80) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 30640\nI0526 22:17:14.802430 3118 log.go:172] (0xc0009be580) Data frame received for 5\nI0526 22:17:14.802440 3118 log.go:172] (0xc0005edb80) (5) Data frame handling\nI0526 22:17:14.802447 3118 log.go:172] (0xc0005edb80) (5) Data frame sent\nConnection to 172.17.0.10 30640 port [tcp/30640] succeeded!\nI0526 22:17:14.802934 3118 log.go:172] (0xc0009be580) Data frame received for 3\nI0526 22:17:14.802953 3118 log.go:172] (0xc00095e0a0) (3) Data frame handling\nI0526 22:17:14.802991 3118 log.go:172] (0xc0009be580) Data frame received for 5\nI0526 22:17:14.803037 3118 log.go:172] (0xc0005edb80) (5) Data frame handling\nI0526 22:17:14.804735 3118 log.go:172] (0xc0009be580) Data frame received for 1\nI0526 22:17:14.804759 3118 log.go:172] (0xc00095e000) (1) Data frame handling\nI0526 22:17:14.804776 3118 log.go:172] (0xc00095e000) (1) Data frame sent\nI0526 22:17:14.804792 3118 log.go:172] (0xc0009be580) (0xc00095e000) Stream removed, broadcasting: 1\nI0526 22:17:14.804814 3118 log.go:172] (0xc0009be580) Go away received\nI0526 22:17:14.805447 3118 log.go:172] (0xc0009be580) (0xc00095e000) Stream removed, broadcasting: 1\nI0526 22:17:14.805473 3118 log.go:172] (0xc0009be580) (0xc00095e0a0) Stream removed, broadcasting: 3\nI0526 22:17:14.805485 3118 log.go:172] (0xc0009be580) (0xc0005edb80) Stream removed, broadcasting: 5\n" May 26 22:17:14.810: INFO: stdout: "" May 26 22:17:14.811: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-3543 execpoddl5tj -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 30640' May 26 22:17:15.043: INFO: stderr: "I0526 22:17:14.945882 3138 log.go:172] (0xc000104dc0) 
(0xc0006be000) Create stream\nI0526 22:17:14.945933 3138 log.go:172] (0xc000104dc0) (0xc0006be000) Stream added, broadcasting: 1\nI0526 22:17:14.955829 3138 log.go:172] (0xc000104dc0) Reply frame received for 1\nI0526 22:17:14.955878 3138 log.go:172] (0xc000104dc0) (0xc000665a40) Create stream\nI0526 22:17:14.955897 3138 log.go:172] (0xc000104dc0) (0xc000665a40) Stream added, broadcasting: 3\nI0526 22:17:14.957020 3138 log.go:172] (0xc000104dc0) Reply frame received for 3\nI0526 22:17:14.957049 3138 log.go:172] (0xc000104dc0) (0xc000665c20) Create stream\nI0526 22:17:14.957060 3138 log.go:172] (0xc000104dc0) (0xc000665c20) Stream added, broadcasting: 5\nI0526 22:17:14.959166 3138 log.go:172] (0xc000104dc0) Reply frame received for 5\nI0526 22:17:15.035555 3138 log.go:172] (0xc000104dc0) Data frame received for 3\nI0526 22:17:15.035610 3138 log.go:172] (0xc000665a40) (3) Data frame handling\nI0526 22:17:15.035645 3138 log.go:172] (0xc000104dc0) Data frame received for 5\nI0526 22:17:15.035664 3138 log.go:172] (0xc000665c20) (5) Data frame handling\nI0526 22:17:15.035678 3138 log.go:172] (0xc000665c20) (5) Data frame sent\nI0526 22:17:15.035693 3138 log.go:172] (0xc000104dc0) Data frame received for 5\nI0526 22:17:15.035706 3138 log.go:172] (0xc000665c20) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 30640\nConnection to 172.17.0.8 30640 port [tcp/30640] succeeded!\nI0526 22:17:15.036891 3138 log.go:172] (0xc000104dc0) Data frame received for 1\nI0526 22:17:15.036908 3138 log.go:172] (0xc0006be000) (1) Data frame handling\nI0526 22:17:15.036917 3138 log.go:172] (0xc0006be000) (1) Data frame sent\nI0526 22:17:15.036945 3138 log.go:172] (0xc000104dc0) (0xc0006be000) Stream removed, broadcasting: 1\nI0526 22:17:15.036973 3138 log.go:172] (0xc000104dc0) Go away received\nI0526 22:17:15.037618 3138 log.go:172] (0xc000104dc0) (0xc0006be000) Stream removed, broadcasting: 1\nI0526 22:17:15.037651 3138 log.go:172] (0xc000104dc0) (0xc000665a40) Stream removed, broadcasting: 3\nI0526 22:17:15.037664 3138 log.go:172] (0xc000104dc0) (0xc000665c20) Stream removed, broadcasting: 5\n" May 26 22:17:15.043: INFO: stdout: "" [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:17:15.043: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-3543" for this suite. 
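The NodePort check above probes the service on three distinct paths: by DNS name on the service port, by ClusterIP, and by each node's address on the allocated node port. The probe commands below are taken from this run's log; the first line is a hypothetical shortcut (the test actually creates its service through the API), and the IPs and port 30640 are specific to this run:

$ kubectl -n services-3543 expose rc nodeport-test --port=80 --type=NodePort
$ kubectl -n services-3543 get svc nodeport-test -o jsonpath='{.spec.clusterIP} {.spec.ports[0].nodePort}'
# From a pod inside the cluster, all three paths must succeed:
$ kubectl -n services-3543 exec execpoddl5tj -- nc -zv -t -w 2 nodeport-test 80      # service DNS name
$ kubectl -n services-3543 exec execpoddl5tj -- nc -zv -t -w 2 10.109.16.85 80       # ClusterIP
$ kubectl -n services-3543 exec execpoddl5tj -- nc -zv -t -w 2 172.17.0.10 30640     # node address + NodePort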
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.250 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to create a functioning NodePort service [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to create a functioning NodePort service [Conformance]","total":278,"completed":256,"skipped":3996,"failed":0} SSSSSSS ------------------------------ [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:17:15.052: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename subpath STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:37 STEP: Setting up data [It] should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating pod pod-subpath-test-configmap-wcpt STEP: Creating a pod to test atomic-volume-subpath May 26 22:17:15.182: INFO: Waiting up to 5m0s for pod "pod-subpath-test-configmap-wcpt" in namespace "subpath-4656" to be "success or failure" May 26 22:17:15.185: INFO: Pod "pod-subpath-test-configmap-wcpt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.995479ms May 26 22:17:17.190: INFO: Pod "pod-subpath-test-configmap-wcpt": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007564206s May 26 22:17:19.195: INFO: Pod "pod-subpath-test-configmap-wcpt": Phase="Running", Reason="", readiness=true. Elapsed: 4.012209366s May 26 22:17:21.199: INFO: Pod "pod-subpath-test-configmap-wcpt": Phase="Running", Reason="", readiness=true. Elapsed: 6.016856398s May 26 22:17:23.202: INFO: Pod "pod-subpath-test-configmap-wcpt": Phase="Running", Reason="", readiness=true. Elapsed: 8.019500303s May 26 22:17:25.206: INFO: Pod "pod-subpath-test-configmap-wcpt": Phase="Running", Reason="", readiness=true. Elapsed: 10.024090402s May 26 22:17:27.211: INFO: Pod "pod-subpath-test-configmap-wcpt": Phase="Running", Reason="", readiness=true. Elapsed: 12.028894812s May 26 22:17:29.215: INFO: Pod "pod-subpath-test-configmap-wcpt": Phase="Running", Reason="", readiness=true. Elapsed: 14.032820289s May 26 22:17:31.219: INFO: Pod "pod-subpath-test-configmap-wcpt": Phase="Running", Reason="", readiness=true. Elapsed: 16.036586117s May 26 22:17:33.228: INFO: Pod "pod-subpath-test-configmap-wcpt": Phase="Running", Reason="", readiness=true. Elapsed: 18.045463953s May 26 22:17:35.232: INFO: Pod "pod-subpath-test-configmap-wcpt": Phase="Running", Reason="", readiness=true. Elapsed: 20.049766178s May 26 22:17:37.236: INFO: Pod "pod-subpath-test-configmap-wcpt": Phase="Running", Reason="", readiness=true. Elapsed: 22.053595877s May 26 22:17:39.240: INFO: Pod "pod-subpath-test-configmap-wcpt": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 24.057630042s STEP: Saw pod success May 26 22:17:39.240: INFO: Pod "pod-subpath-test-configmap-wcpt" satisfied condition "success or failure" May 26 22:17:39.243: INFO: Trying to get logs from node jerma-worker2 pod pod-subpath-test-configmap-wcpt container test-container-subpath-configmap-wcpt: STEP: delete the pod May 26 22:17:39.284: INFO: Waiting for pod pod-subpath-test-configmap-wcpt to disappear May 26 22:17:39.287: INFO: Pod pod-subpath-test-configmap-wcpt no longer exists STEP: Deleting pod pod-subpath-test-configmap-wcpt May 26 22:17:39.287: INFO: Deleting pod "pod-subpath-test-configmap-wcpt" in namespace "subpath-4656" [AfterEach] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:17:39.293: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "subpath-4656" for this suite. • [SLOW TEST:24.248 seconds] [sig-storage] Subpath /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/utils/framework.go:23 Atomic writer volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/storage/subpath.go:33 should support subpaths with configmap pod [LinuxOnly] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Subpath Atomic writer volumes should support subpaths with configmap pod [LinuxOnly] [Conformance]","total":278,"completed":257,"skipped":4003,"failed":0} SSSSSSSSSS ------------------------------ [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:17:39.299: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename services STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:139 [It] should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a service externalname-service with the type=ExternalName in namespace services-7900 STEP: changing the ExternalName service to type=NodePort STEP: creating replication controller externalname-service in namespace services-7900 I0526 22:17:39.543089 6 runners.go:189] Created replication controller with name: externalname-service, namespace: services-7900, replica count: 2 I0526 22:17:42.593550 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 0 running, 2 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady I0526 22:17:45.593830 6 runners.go:189] externalname-service Pods: 2 out of 2 created, 2 running, 0 pending, 0 waiting, 0 inactive, 0 terminating, 0 unknown, 0 runningButNotReady May 26 22:17:45.593: INFO: Creating new exec pod May 26 22:17:50.663: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7900 execpod5dpqp -- /bin/sh -x -c nc -zv -t -w 2 externalname-service 80' May 26 22:17:50.898: INFO: stderr: "I0526 22:17:50.797373 3160 log.go:172] 
(0xc0009eab00) (0xc0007fa000) Create stream\nI0526 22:17:50.797451 3160 log.go:172] (0xc0009eab00) (0xc0007fa000) Stream added, broadcasting: 1\nI0526 22:17:50.800419 3160 log.go:172] (0xc0009eab00) Reply frame received for 1\nI0526 22:17:50.800465 3160 log.go:172] (0xc0009eab00) (0xc00084e000) Create stream\nI0526 22:17:50.800481 3160 log.go:172] (0xc0009eab00) (0xc00084e000) Stream added, broadcasting: 3\nI0526 22:17:50.801873 3160 log.go:172] (0xc0009eab00) Reply frame received for 3\nI0526 22:17:50.801921 3160 log.go:172] (0xc0009eab00) (0xc00085a000) Create stream\nI0526 22:17:50.801939 3160 log.go:172] (0xc0009eab00) (0xc00085a000) Stream added, broadcasting: 5\nI0526 22:17:50.803019 3160 log.go:172] (0xc0009eab00) Reply frame received for 5\nI0526 22:17:50.891116 3160 log.go:172] (0xc0009eab00) Data frame received for 5\nI0526 22:17:50.891157 3160 log.go:172] (0xc00085a000) (5) Data frame handling\nI0526 22:17:50.891166 3160 log.go:172] (0xc00085a000) (5) Data frame sent\nI0526 22:17:50.891171 3160 log.go:172] (0xc0009eab00) Data frame received for 5\nI0526 22:17:50.891176 3160 log.go:172] (0xc00085a000) (5) Data frame handling\n+ nc -zv -t -w 2 externalname-service 80\nConnection to externalname-service 80 port [tcp/http] succeeded!\nI0526 22:17:50.891195 3160 log.go:172] (0xc0009eab00) Data frame received for 3\nI0526 22:17:50.891201 3160 log.go:172] (0xc00084e000) (3) Data frame handling\nI0526 22:17:50.892587 3160 log.go:172] (0xc0009eab00) Data frame received for 1\nI0526 22:17:50.892608 3160 log.go:172] (0xc0007fa000) (1) Data frame handling\nI0526 22:17:50.892622 3160 log.go:172] (0xc0007fa000) (1) Data frame sent\nI0526 22:17:50.892664 3160 log.go:172] (0xc0009eab00) (0xc0007fa000) Stream removed, broadcasting: 1\nI0526 22:17:50.892681 3160 log.go:172] (0xc0009eab00) Go away received\nI0526 22:17:50.893028 3160 log.go:172] (0xc0009eab00) (0xc0007fa000) Stream removed, broadcasting: 1\nI0526 22:17:50.893058 3160 log.go:172] (0xc0009eab00) (0xc00084e000) Stream removed, broadcasting: 3\nI0526 22:17:50.893067 3160 log.go:172] (0xc0009eab00) (0xc00085a000) Stream removed, broadcasting: 5\n" May 26 22:17:50.898: INFO: stdout: "" May 26 22:17:50.899: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7900 execpod5dpqp -- /bin/sh -x -c nc -zv -t -w 2 10.109.187.132 80' May 26 22:17:51.110: INFO: stderr: "I0526 22:17:51.025622 3179 log.go:172] (0xc000118dc0) (0xc000ace000) Create stream\nI0526 22:17:51.025664 3179 log.go:172] (0xc000118dc0) (0xc000ace000) Stream added, broadcasting: 1\nI0526 22:17:51.027363 3179 log.go:172] (0xc000118dc0) Reply frame received for 1\nI0526 22:17:51.027396 3179 log.go:172] (0xc000118dc0) (0xc000691a40) Create stream\nI0526 22:17:51.027407 3179 log.go:172] (0xc000118dc0) (0xc000691a40) Stream added, broadcasting: 3\nI0526 22:17:51.028105 3179 log.go:172] (0xc000118dc0) Reply frame received for 3\nI0526 22:17:51.028128 3179 log.go:172] (0xc000118dc0) (0xc000691c20) Create stream\nI0526 22:17:51.028135 3179 log.go:172] (0xc000118dc0) (0xc000691c20) Stream added, broadcasting: 5\nI0526 22:17:51.028752 3179 log.go:172] (0xc000118dc0) Reply frame received for 5\nI0526 22:17:51.101968 3179 log.go:172] (0xc000118dc0) Data frame received for 5\nI0526 22:17:51.102006 3179 log.go:172] (0xc000691c20) (5) Data frame handling\n+ nc -zv -t -w 2 10.109.187.132 80\nConnection to 10.109.187.132 80 port [tcp/http] succeeded!\nI0526 22:17:51.102030 3179 log.go:172] (0xc000118dc0) Data frame received for 3\nI0526 
22:17:51.102062 3179 log.go:172] (0xc000691a40) (3) Data frame handling\nI0526 22:17:51.102121 3179 log.go:172] (0xc000691c20) (5) Data frame sent\nI0526 22:17:51.102161 3179 log.go:172] (0xc000118dc0) Data frame received for 5\nI0526 22:17:51.102173 3179 log.go:172] (0xc000691c20) (5) Data frame handling\nI0526 22:17:51.103753 3179 log.go:172] (0xc000118dc0) Data frame received for 1\nI0526 22:17:51.103769 3179 log.go:172] (0xc000ace000) (1) Data frame handling\nI0526 22:17:51.103780 3179 log.go:172] (0xc000ace000) (1) Data frame sent\nI0526 22:17:51.103791 3179 log.go:172] (0xc000118dc0) (0xc000ace000) Stream removed, broadcasting: 1\nI0526 22:17:51.103835 3179 log.go:172] (0xc000118dc0) Go away received\nI0526 22:17:51.104083 3179 log.go:172] (0xc000118dc0) (0xc000ace000) Stream removed, broadcasting: 1\nI0526 22:17:51.104097 3179 log.go:172] (0xc000118dc0) (0xc000691a40) Stream removed, broadcasting: 3\nI0526 22:17:51.104107 3179 log.go:172] (0xc000118dc0) (0xc000691c20) Stream removed, broadcasting: 5\n" May 26 22:17:51.110: INFO: stdout: "" May 26 22:17:51.110: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=services-7900 execpod5dpqp -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.10 32302' May 26 22:17:51.322: INFO: stderr: "I0526 22:17:51.235763 3202 log.go:172] (0xc000104a50) (0xc0005d3b80) Create stream\nI0526 22:17:51.235821 3202 log.go:172] (0xc000104a50) (0xc0005d3b80) Stream added, broadcasting: 1\nI0526 22:17:51.238596 3202 log.go:172] (0xc000104a50) Reply frame received for 1\nI0526 22:17:51.238662 3202 log.go:172] (0xc000104a50) (0xc0005d3d60) Create stream\nI0526 22:17:51.238693 3202 log.go:172] (0xc000104a50) (0xc0005d3d60) Stream added, broadcasting: 3\nI0526 22:17:51.239917 3202 log.go:172] (0xc000104a50) Reply frame received for 3\nI0526 22:17:51.239965 3202 log.go:172] (0xc000104a50) (0xc0005d3e00) Create stream\nI0526 22:17:51.239994 3202 log.go:172] (0xc000104a50) (0xc0005d3e00) Stream added, broadcasting: 5\nI0526 22:17:51.241065 3202 log.go:172] (0xc000104a50) Reply frame received for 5\nI0526 22:17:51.314607 3202 log.go:172] (0xc000104a50) Data frame received for 5\nI0526 22:17:51.314651 3202 log.go:172] (0xc0005d3e00) (5) Data frame handling\nI0526 22:17:51.314682 3202 log.go:172] (0xc0005d3e00) (5) Data frame sent\n+ nc -zv -t -w 2 172.17.0.10 32302\nConnection to 172.17.0.10 32302 port [tcp/32302] succeeded!\nI0526 22:17:51.314809 3202 log.go:172] (0xc000104a50) Data frame received for 5\nI0526 22:17:51.314848 3202 log.go:172] (0xc0005d3e00) (5) Data frame handling\nI0526 22:17:51.315029 3202 log.go:172] (0xc000104a50) Data frame received for 3\nI0526 22:17:51.315042 3202 log.go:172] (0xc0005d3d60) (3) Data frame handling\nI0526 22:17:51.316359 3202 log.go:172] (0xc000104a50) Data frame received for 1\nI0526 22:17:51.316383 3202 log.go:172] (0xc0005d3b80) (1) Data frame handling\nI0526 22:17:51.316399 3202 log.go:172] (0xc0005d3b80) (1) Data frame sent\nI0526 22:17:51.316413 3202 log.go:172] (0xc000104a50) (0xc0005d3b80) Stream removed, broadcasting: 1\nI0526 22:17:51.316428 3202 log.go:172] (0xc000104a50) Go away received\nI0526 22:17:51.316731 3202 log.go:172] (0xc000104a50) (0xc0005d3b80) Stream removed, broadcasting: 1\nI0526 22:17:51.316746 3202 log.go:172] (0xc000104a50) (0xc0005d3d60) Stream removed, broadcasting: 3\nI0526 22:17:51.316753 3202 log.go:172] (0xc000104a50) (0xc0005d3e00) Stream removed, broadcasting: 5\n" May 26 22:17:51.322: INFO: stdout: "" May 26 22:17:51.322: INFO: Running '/usr/local/bin/kubectl 
--kubeconfig=/root/.kube/config exec --namespace=services-7900 execpod5dpqp -- /bin/sh -x -c nc -zv -t -w 2 172.17.0.8 32302' May 26 22:17:51.547: INFO: stderr: "I0526 22:17:51.455491 3223 log.go:172] (0xc0007e8a50) (0xc0007c8140) Create stream\nI0526 22:17:51.455567 3223 log.go:172] (0xc0007e8a50) (0xc0007c8140) Stream added, broadcasting: 1\nI0526 22:17:51.459591 3223 log.go:172] (0xc0007e8a50) Reply frame received for 1\nI0526 22:17:51.459645 3223 log.go:172] (0xc0007e8a50) (0xc0002d46e0) Create stream\nI0526 22:17:51.459664 3223 log.go:172] (0xc0007e8a50) (0xc0002d46e0) Stream added, broadcasting: 3\nI0526 22:17:51.460741 3223 log.go:172] (0xc0007e8a50) Reply frame received for 3\nI0526 22:17:51.460781 3223 log.go:172] (0xc0007e8a50) (0xc0007c81e0) Create stream\nI0526 22:17:51.460793 3223 log.go:172] (0xc0007e8a50) (0xc0007c81e0) Stream added, broadcasting: 5\nI0526 22:17:51.462096 3223 log.go:172] (0xc0007e8a50) Reply frame received for 5\nI0526 22:17:51.538827 3223 log.go:172] (0xc0007e8a50) Data frame received for 3\nI0526 22:17:51.538869 3223 log.go:172] (0xc0002d46e0) (3) Data frame handling\nI0526 22:17:51.538896 3223 log.go:172] (0xc0007e8a50) Data frame received for 5\nI0526 22:17:51.538907 3223 log.go:172] (0xc0007c81e0) (5) Data frame handling\nI0526 22:17:51.538918 3223 log.go:172] (0xc0007c81e0) (5) Data frame sent\nI0526 22:17:51.538928 3223 log.go:172] (0xc0007e8a50) Data frame received for 5\nI0526 22:17:51.538937 3223 log.go:172] (0xc0007c81e0) (5) Data frame handling\n+ nc -zv -t -w 2 172.17.0.8 32302\nConnection to 172.17.0.8 32302 port [tcp/32302] succeeded!\nI0526 22:17:51.540175 3223 log.go:172] (0xc0007e8a50) Data frame received for 1\nI0526 22:17:51.540206 3223 log.go:172] (0xc0007c8140) (1) Data frame handling\nI0526 22:17:51.540235 3223 log.go:172] (0xc0007c8140) (1) Data frame sent\nI0526 22:17:51.540269 3223 log.go:172] (0xc0007e8a50) (0xc0007c8140) Stream removed, broadcasting: 1\nI0526 22:17:51.540293 3223 log.go:172] (0xc0007e8a50) Go away received\nI0526 22:17:51.540758 3223 log.go:172] (0xc0007e8a50) (0xc0007c8140) Stream removed, broadcasting: 1\nI0526 22:17:51.540777 3223 log.go:172] (0xc0007e8a50) (0xc0002d46e0) Stream removed, broadcasting: 3\nI0526 22:17:51.540787 3223 log.go:172] (0xc0007e8a50) (0xc0007c81e0) Stream removed, broadcasting: 5\n" May 26 22:17:51.547: INFO: stdout: "" May 26 22:17:51.547: INFO: Cleaning up the ExternalName to NodePort test service [AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:17:51.630: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "services-7900" for this suite. 
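The ExternalName-to-NodePort conversion above is an in-place edit of the Service spec: the externalName field is dropped, a type, selector, and ports are set, and the probes then work exactly as in the plain NodePort test. A sketch with illustrative names (the selector and port here are assumptions; the e2e test backs the service with a replication controller it creates):

$ kubectl create service externalname externalname-service --external-name=example.com
$ kubectl patch service externalname-service -p '{"spec":{"type":"NodePort","externalName":null,"selector":{"name":"externalname-service"},"ports":[{"port":80,"targetPort":80}]}}'
$ kubectl get service externalname-service   # TYPE should now read NodePort, with a node port allocated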
[AfterEach] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/service.go:143 • [SLOW TEST:12.346 seconds] [sig-network] Services /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/network/framework.go:23 should be able to change the type from ExternalName to NodePort [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-network] Services should be able to change the type from ExternalName to NodePort [Conformance]","total":278,"completed":258,"skipped":4013,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:17:51.646: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap with name cm-test-opt-del-8b06ebcd-4603-417e-8940-f63f7dddfed7 STEP: Creating configMap with name cm-test-opt-upd-2e5c9b40-ea19-495a-9b13-9403d08774a1 STEP: Creating the pod STEP: Deleting configmap cm-test-opt-del-8b06ebcd-4603-417e-8940-f63f7dddfed7 STEP: Updating configmap cm-test-opt-upd-2e5c9b40-ea19-495a-9b13-9403d08774a1 STEP: Creating configMap with name cm-test-opt-create-b761160a-164a-4da1-99ac-66cacb7b9b59 STEP: waiting to observe update in volume [AfterEach] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:18:01.957: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-4518" for this suite. 
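The optional-updates test wires several configMap volumes into one pod with optional: true on each source, then deletes one map, updates another, and creates a third, expecting the kubelet to converge the mounted files without restarting the pod. The decisive manifest detail is the optional flag; a two-volume sketch with illustrative names follows (the delete case behaves symmetrically):

$ kubectl create configmap cm-upd --from-literal=data-1=value-1
$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: cm-optional-demo
spec:
  containers:
  - name: view
    image: busybox:1.31
    command: ["sh", "-c", "while true; do cat /etc/cm-upd/data-1 /etc/cm-create/data-1 2>/dev/null; echo; sleep 5; done"]
    volumeMounts:
    - name: upd
      mountPath: /etc/cm-upd
    - name: create
      mountPath: /etc/cm-create
  volumes:
  - name: upd
    configMap:
      name: cm-upd
      optional: true             # pod keeps running even if the map vanishes later
  - name: create
    configMap:
      name: cm-create            # does not exist yet; created below
      optional: true
EOF
$ kubectl create configmap cm-upd --from-literal=data-1=value-2 --dry-run -o yaml | kubectl replace -f -   # update shows up in /etc/cm-upd
$ kubectl create configmap cm-create --from-literal=data-1=value-1                                         # new map appears under /etc/cm-create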
• [SLOW TEST:10.319 seconds] [sig-storage] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/configmap_volume.go:33 optional updates should be reflected in volume [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] ConfigMap optional updates should be reflected in volume [NodeConformance] [Conformance]","total":278,"completed":259,"skipped":4068,"failed":0} SSS ------------------------------ [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:18:01.965: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check is all data is printed [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 22:18:02.019: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config version' May 26 22:18:02.172: INFO: stderr: "" May 26 22:18:02.172: INFO: stdout: "Client Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.4\", GitCommit:\"8d8aa39598534325ad77120c120a22b3a990b5ea\", GitTreeState:\"clean\", BuildDate:\"2020-05-06T19:23:43Z\", GoVersion:\"go1.13.10\", Compiler:\"gc\", Platform:\"linux/amd64\"}\nServer Version: version.Info{Major:\"1\", Minor:\"17\", GitVersion:\"v1.17.2\", GitCommit:\"59603c6e503c87169aea6106f57b9f242f64df89\", GitTreeState:\"clean\", BuildDate:\"2020-02-07T01:05:17Z\", GoVersion:\"go1.13.5\", Compiler:\"gc\", Platform:\"linux/amd64\"}\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:18:02.172: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-3406" for this suite. •{"msg":"PASSED [sig-cli] Kubectl client Kubectl version should check is all data is printed [Conformance]","total":278,"completed":260,"skipped":4071,"failed":0} SSSSS ------------------------------ [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:18:02.182: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename container-lifecycle-hook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:64 STEP: create the container to handle the HTTPGet hook request. 
[It] should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: create the pod with lifecycle hook STEP: delete the pod with lifecycle hook May 26 22:18:12.371: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 22:18:12.429: INFO: Pod pod-with-prestop-exec-hook still exists May 26 22:18:14.429: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 22:18:14.432: INFO: Pod pod-with-prestop-exec-hook still exists May 26 22:18:16.429: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 22:18:16.433: INFO: Pod pod-with-prestop-exec-hook still exists May 26 22:18:18.429: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 22:18:18.433: INFO: Pod pod-with-prestop-exec-hook still exists May 26 22:18:20.429: INFO: Waiting for pod pod-with-prestop-exec-hook to disappear May 26 22:18:20.433: INFO: Pod pod-with-prestop-exec-hook no longer exists STEP: check prestop hook [AfterEach] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:18:20.456: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "container-lifecycle-hook-3712" for this suite. • [SLOW TEST:18.282 seconds] [k8s.io] Container Lifecycle Hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716 when create a pod with lifecycle hook /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/lifecycle_hook.go:42 should execute prestop exec hook properly [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [k8s.io] Container Lifecycle Hook when create a pod with lifecycle hook should execute prestop exec hook properly [NodeConformance] [Conformance]","total":278,"completed":261,"skipped":4076,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:18:20.465: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:125 STEP: Setting up server cert STEP: Create role binding to let cr conversion webhook read extension-apiserver-authentication STEP: Deploying the custom resource conversion webhook pod STEP: Wait for the deployment to be ready May 26 22:18:20.834: INFO: deployment "sample-crd-conversion-webhook-deployment" doesn't have the required revision set May 26 22:18:22.843: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, 
Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128300, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128300, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128300, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128300, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 22:18:24.896: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128300, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128300, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128300, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128300, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-crd-conversion-webhook-deployment-78dcf5dd84\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 22:18:27.880: INFO: Waiting for amount of service:e2e-test-crd-conversion-webhook endpoints to be 1 [It] should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 22:18:27.884: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating a v1 custom resource STEP: Create a v2 custom resource STEP: List CRs in v1 STEP: List CRs in v2 [AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:18:29.129: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-webhook-848" for this suite. 
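For orientation, the conversion path being exercised is declared on the CRD itself via spec.conversion, pointing at the service deployed above (e2e-test-crd-conversion-webhook). A minimal sketch using the apiextensions v1 API; the CRD name, schemas, and webhook path are illustrative assumptions, and caBundle is omitted:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: apiextensions.k8s.io/v1
  kind: CustomResourceDefinition
  metadata:
    name: examples.stable.example.com            # hypothetical CRD
  spec:
    group: stable.example.com
    scope: Namespaced
    names: {plural: examples, singular: example, kind: Example, listKind: ExampleList}
    versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
    - name: v2
      served: true
      storage: false
      schema:
        openAPIV3Schema: {type: object, x-kubernetes-preserve-unknown-fields: true}
    conversion:
      strategy: Webhook                          # route v1<->v2 objects through the webhook
      webhook:
        conversionReviewVersions: ["v1"]
        clientConfig:
          service:
            namespace: crd-webhook-848           # the test's namespace
            name: e2e-test-crd-conversion-webhook
            path: /crdconvert                    # hypothetical path
  EOF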
[AfterEach] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_conversion_webhook.go:136 • [SLOW TEST:8.819 seconds] [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to convert a non homogeneous list of CRs [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceConversionWebhook [Privileged:ClusterAdmin] should be able to convert a non homogeneous list of CRs [Conformance]","total":278,"completed":262,"skipped":4115,"failed":0} S ------------------------------ [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:18:29.285: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow substituting values in a container's args [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test substitution in container's args May 26 22:18:29.365: INFO: Waiting up to 5m0s for pod "var-expansion-5a78809d-f275-4b7c-9f0b-3efb6a0e479b" in namespace "var-expansion-8656" to be "success or failure" May 26 22:18:29.379: INFO: Pod "var-expansion-5a78809d-f275-4b7c-9f0b-3efb6a0e479b": Phase="Pending", Reason="", readiness=false. Elapsed: 14.271832ms May 26 22:18:31.385: INFO: Pod "var-expansion-5a78809d-f275-4b7c-9f0b-3efb6a0e479b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.020040065s May 26 22:18:33.389: INFO: Pod "var-expansion-5a78809d-f275-4b7c-9f0b-3efb6a0e479b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.02426954s STEP: Saw pod success May 26 22:18:33.389: INFO: Pod "var-expansion-5a78809d-f275-4b7c-9f0b-3efb6a0e479b" satisfied condition "success or failure" May 26 22:18:33.392: INFO: Trying to get logs from node jerma-worker2 pod var-expansion-5a78809d-f275-4b7c-9f0b-3efb6a0e479b container dapi-container: STEP: delete the pod May 26 22:18:33.417: INFO: Waiting for pod var-expansion-5a78809d-f275-4b7c-9f0b-3efb6a0e479b to disappear May 26 22:18:33.421: INFO: Pod var-expansion-5a78809d-f275-4b7c-9f0b-3efb6a0e479b no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:18:33.421: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-8656" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow substituting values in a container's args [NodeConformance] [Conformance]","total":278,"completed":263,"skipped":4116,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. 
[Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:18:33.428: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename resourcequota STEP: Waiting for a default service account to be provisioned in namespace [It] should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Counting existing ResourceQuota STEP: Creating a ResourceQuota STEP: Ensuring resource quota status is calculated STEP: Creating a ReplicationController STEP: Ensuring resource quota status captures replication controller creation STEP: Deleting a ReplicationController STEP: Ensuring resource quota status released usage [AfterEach] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:18:44.631: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "resourcequota-5999" for this suite. • [SLOW TEST:11.210 seconds] [sig-api-machinery] ResourceQuota /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should create a ResourceQuota and capture the life of a replication controller. [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] ResourceQuota should create a ResourceQuota and capture the life of a replication controller. [Conformance]","total":278,"completed":264,"skipped":4147,"failed":0} SSSSSSSSSSSSSS ------------------------------ [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:18:44.638: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename var-expansion STEP: Waiting for a default service account to be provisioned in namespace [It] should allow composing env vars into new env vars [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test env composition May 26 22:18:44.760: INFO: Waiting up to 5m0s for pod "var-expansion-5937422e-437a-47ba-96f4-7055ff3fde33" in namespace "var-expansion-4994" to be "success or failure" May 26 22:18:44.860: INFO: Pod "var-expansion-5937422e-437a-47ba-96f4-7055ff3fde33": Phase="Pending", Reason="", readiness=false. Elapsed: 100.520428ms May 26 22:18:46.874: INFO: Pod "var-expansion-5937422e-437a-47ba-96f4-7055ff3fde33": Phase="Pending", Reason="", readiness=false. Elapsed: 2.114544854s May 26 22:18:48.879: INFO: Pod "var-expansion-5937422e-437a-47ba-96f4-7055ff3fde33": Phase="Succeeded", Reason="", readiness=false. 
Elapsed: 4.119241346s STEP: Saw pod success May 26 22:18:48.879: INFO: Pod "var-expansion-5937422e-437a-47ba-96f4-7055ff3fde33" satisfied condition "success or failure" May 26 22:18:48.883: INFO: Trying to get logs from node jerma-worker pod var-expansion-5937422e-437a-47ba-96f4-7055ff3fde33 container dapi-container: STEP: delete the pod May 26 22:18:49.097: INFO: Waiting for pod var-expansion-5937422e-437a-47ba-96f4-7055ff3fde33 to disappear May 26 22:18:49.158: INFO: Pod var-expansion-5937422e-437a-47ba-96f4-7055ff3fde33 no longer exists [AfterEach] [k8s.io] Variable Expansion /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:18:49.158: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "var-expansion-4994" for this suite. •{"msg":"PASSED [k8s.io] Variable Expansion should allow composing env vars into new env vars [NodeConformance] [Conformance]","total":278,"completed":265,"skipped":4161,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:18:49.240: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 22:18:50.130: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 22:18:52.140: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128330, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128330, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128330, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128330, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} May 26 22:18:54.145: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128330, 
loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128330, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128330, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128330, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 22:18:57.180: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering a validating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Registering a mutating webhook on ValidatingWebhookConfiguration and MutatingWebhookConfiguration objects, via the AdmissionRegistration API STEP: Creating a dummy validating-webhook-configuration object STEP: Deleting the validating-webhook-configuration, which should be possible to remove STEP: Creating a dummy mutating-webhook-configuration object STEP: Deleting the mutating-webhook-configuration, which should be possible to remove [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:18:57.370: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-6322" for this suite. STEP: Destroying namespace "webhook-6322-markers" for this suite. 
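The dummy validating-webhook-configuration created and removed above has roughly this shape; the configuration name, webhook name, path, and rules here are illustrative assumptions (only the service name, e2e-test-webhook, comes from the log):

  cat <<'EOF' | kubectl apply -f -
  apiVersion: admissionregistration.k8s.io/v1
  kind: ValidatingWebhookConfiguration
  metadata:
    name: dummy-validating-webhook               # hypothetical
  webhooks:
  - name: allow-everything.example.com           # hypothetical; must be fully qualified
    admissionReviewVersions: ["v1"]
    sideEffects: None
    failurePolicy: Ignore                        # a dummy config should never block requests
    clientConfig:
      service:
        namespace: webhook-6322                  # the test's namespace
        name: e2e-test-webhook
        path: /always-allow                      # hypothetical
    rules:
    - apiGroups: [""]
      apiVersions: ["v1"]
      operations: ["CREATE"]
      resources: ["configmaps"]
  EOF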
[AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:8.321 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should not be able to mutate or prevent deletion of webhook configuration objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should not be able to mutate or prevent deletion of webhook configuration objects [Conformance]","total":278,"completed":266,"skipped":4218,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:18:57.562: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [BeforeEach] Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:324 [It] should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: creating a replication controller May 26 22:18:57.613: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config create -f - --namespace=kubectl-1961' May 26 22:18:58.852: INFO: stderr: "" May 26 22:18:58.852: INFO: stdout: "replicationcontroller/update-demo-nautilus created\n" STEP: waiting for all containers in name=update-demo pods to come up. May 26 22:18:58.852: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1961' May 26 22:18:59.007: INFO: stderr: "" May 26 22:18:59.007: INFO: stdout: "update-demo-nautilus-4vqcl update-demo-nautilus-ng8pt " May 26 22:18:59.007: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vqcl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . 
"state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1961' May 26 22:18:59.128: INFO: stderr: "" May 26 22:18:59.128: INFO: stdout: "" May 26 22:18:59.128: INFO: update-demo-nautilus-4vqcl is created but not running May 26 22:19:04.128: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1961' May 26 22:19:04.243: INFO: stderr: "" May 26 22:19:04.243: INFO: stdout: "update-demo-nautilus-4vqcl update-demo-nautilus-ng8pt " May 26 22:19:04.243: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vqcl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1961' May 26 22:19:04.358: INFO: stderr: "" May 26 22:19:04.358: INFO: stdout: "true" May 26 22:19:04.359: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vqcl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1961' May 26 22:19:04.453: INFO: stderr: "" May 26 22:19:04.453: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 26 22:19:04.453: INFO: validating pod update-demo-nautilus-4vqcl May 26 22:19:04.457: INFO: got data: { "image": "nautilus.jpg" } May 26 22:19:04.457: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 26 22:19:04.457: INFO: update-demo-nautilus-4vqcl is verified up and running May 26 22:19:04.458: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ng8pt -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1961' May 26 22:19:04.553: INFO: stderr: "" May 26 22:19:04.553: INFO: stdout: "true" May 26 22:19:04.554: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-ng8pt -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1961' May 26 22:19:04.643: INFO: stderr: "" May 26 22:19:04.644: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 26 22:19:04.644: INFO: validating pod update-demo-nautilus-ng8pt May 26 22:19:04.662: INFO: got data: { "image": "nautilus.jpg" } May 26 22:19:04.662: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 26 22:19:04.662: INFO: update-demo-nautilus-ng8pt is verified up and running STEP: scaling down the replication controller May 26 22:19:04.664: INFO: scanned /root for discovery docs: May 26 22:19:04.664: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=1 --timeout=5m --namespace=kubectl-1961' May 26 22:19:05.797: INFO: stderr: "" May 26 22:19:05.797: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. 
May 26 22:19:05.797: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1961' May 26 22:19:05.934: INFO: stderr: "" May 26 22:19:05.934: INFO: stdout: "update-demo-nautilus-4vqcl update-demo-nautilus-ng8pt " STEP: Replicas for name=update-demo: expected=1 actual=2 May 26 22:19:10.934: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1961' May 26 22:19:11.040: INFO: stderr: "" May 26 22:19:11.040: INFO: stdout: "update-demo-nautilus-4vqcl " May 26 22:19:11.041: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vqcl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1961' May 26 22:19:11.137: INFO: stderr: "" May 26 22:19:11.137: INFO: stdout: "true" May 26 22:19:11.137: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vqcl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1961' May 26 22:19:11.240: INFO: stderr: "" May 26 22:19:11.240: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 26 22:19:11.240: INFO: validating pod update-demo-nautilus-4vqcl May 26 22:19:11.244: INFO: got data: { "image": "nautilus.jpg" } May 26 22:19:11.244: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 26 22:19:11.244: INFO: update-demo-nautilus-4vqcl is verified up and running STEP: scaling up the replication controller May 26 22:19:11.245: INFO: scanned /root for discovery docs: May 26 22:19:11.245: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config scale rc update-demo-nautilus --replicas=2 --timeout=5m --namespace=kubectl-1961' May 26 22:19:12.358: INFO: stderr: "" May 26 22:19:12.358: INFO: stdout: "replicationcontroller/update-demo-nautilus scaled\n" STEP: waiting for all containers in name=update-demo pods to come up. May 26 22:19:12.358: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1961' May 26 22:19:12.464: INFO: stderr: "" May 26 22:19:12.464: INFO: stdout: "update-demo-nautilus-4vqcl update-demo-nautilus-jkr6w " May 26 22:19:12.464: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vqcl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1961' May 26 22:19:12.558: INFO: stderr: "" May 26 22:19:12.558: INFO: stdout: "true" May 26 22:19:12.558: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vqcl -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1961' May 26 22:19:12.661: INFO: stderr: "" May 26 22:19:12.661: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 26 22:19:12.661: INFO: validating pod update-demo-nautilus-4vqcl May 26 22:19:12.665: INFO: got data: { "image": "nautilus.jpg" } May 26 22:19:12.665: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 26 22:19:12.665: INFO: update-demo-nautilus-4vqcl is verified up and running May 26 22:19:12.665: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jkr6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1961' May 26 22:19:12.759: INFO: stderr: "" May 26 22:19:12.759: INFO: stdout: "" May 26 22:19:12.759: INFO: update-demo-nautilus-jkr6w is created but not running May 26 22:19:17.760: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -o template --template={{range.items}}{{.metadata.name}} {{end}} -l name=update-demo --namespace=kubectl-1961' May 26 22:19:17.862: INFO: stderr: "" May 26 22:19:17.862: INFO: stdout: "update-demo-nautilus-4vqcl update-demo-nautilus-jkr6w " May 26 22:19:17.862: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vqcl -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1961' May 26 22:19:17.968: INFO: stderr: "" May 26 22:19:17.968: INFO: stdout: "true" May 26 22:19:17.968: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-4vqcl -o template --template={{if (exists . "spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1961' May 26 22:19:18.077: INFO: stderr: "" May 26 22:19:18.077: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 26 22:19:18.077: INFO: validating pod update-demo-nautilus-4vqcl May 26 22:19:18.088: INFO: got data: { "image": "nautilus.jpg" } May 26 22:19:18.088: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 26 22:19:18.088: INFO: update-demo-nautilus-4vqcl is verified up and running May 26 22:19:18.088: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jkr6w -o template --template={{if (exists . "status" "containerStatuses")}}{{range .status.containerStatuses}}{{if (and (eq .name "update-demo") (exists . "state" "running"))}}true{{end}}{{end}}{{end}} --namespace=kubectl-1961' May 26 22:19:18.177: INFO: stderr: "" May 26 22:19:18.177: INFO: stdout: "true" May 26 22:19:18.177: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods update-demo-nautilus-jkr6w -o template --template={{if (exists . 
"spec" "containers")}}{{range .spec.containers}}{{if eq .name "update-demo"}}{{.image}}{{end}}{{end}}{{end}} --namespace=kubectl-1961' May 26 22:19:18.270: INFO: stderr: "" May 26 22:19:18.270: INFO: stdout: "gcr.io/kubernetes-e2e-test-images/nautilus:1.0" May 26 22:19:18.270: INFO: validating pod update-demo-nautilus-jkr6w May 26 22:19:18.273: INFO: got data: { "image": "nautilus.jpg" } May 26 22:19:18.273: INFO: Unmarshalled json jpg/img => {nautilus.jpg} , expecting nautilus.jpg . May 26 22:19:18.273: INFO: update-demo-nautilus-jkr6w is verified up and running STEP: using delete to clean up resources May 26 22:19:18.273: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config delete --grace-period=0 --force -f - --namespace=kubectl-1961' May 26 22:19:18.367: INFO: stderr: "warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.\n" May 26 22:19:18.367: INFO: stdout: "replicationcontroller \"update-demo-nautilus\" force deleted\n" May 26 22:19:18.367: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get rc,svc -l name=update-demo --no-headers --namespace=kubectl-1961' May 26 22:19:18.455: INFO: stderr: "No resources found in kubectl-1961 namespace.\n" May 26 22:19:18.455: INFO: stdout: "" May 26 22:19:18.455: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config get pods -l name=update-demo --namespace=kubectl-1961 -o go-template={{ range .items }}{{ if not .metadata.deletionTimestamp }}{{ .metadata.name }}{{ "\n" }}{{ end }}{{ end }}' May 26 22:19:18.566: INFO: stderr: "" May 26 22:19:18.566: INFO: stdout: "" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:19:18.566: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-1961" for this suite. 
• [SLOW TEST:21.011 seconds] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/framework.go:23 Update Demo /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:322 should scale a replication controller [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-cli] Kubectl client Update Demo should scale a replication controller [Conformance]","total":278,"completed":267,"skipped":4251,"failed":0} SSSSSSSSSSSSSS ------------------------------ [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:19:18.573: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename webhook STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:87 STEP: Setting up server cert STEP: Create role binding to let webhook read extension-apiserver-authentication STEP: Deploying the webhook pod STEP: Wait for the deployment to be ready May 26 22:19:19.402: INFO: deployment "sample-webhook-deployment" doesn't have the required revision set May 26 22:19:21.414: INFO: deployment status: v1.DeploymentStatus{ObservedGeneration:1, Replicas:1, UpdatedReplicas:1, ReadyReplicas:0, AvailableReplicas:0, UnavailableReplicas:1, Conditions:[]v1.DeploymentCondition{v1.DeploymentCondition{Type:"Available", Status:"False", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128359, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128359, loc:(*time.Location)(0x78ee0c0)}}, Reason:"MinimumReplicasUnavailable", Message:"Deployment does not have minimum availability."}, v1.DeploymentCondition{Type:"Progressing", Status:"True", LastUpdateTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128359, loc:(*time.Location)(0x78ee0c0)}}, LastTransitionTime:v1.Time{Time:time.Time{wall:0x0, ext:63726128359, loc:(*time.Location)(0x78ee0c0)}}, Reason:"ReplicaSetUpdated", Message:"ReplicaSet \"sample-webhook-deployment-5f65f8c764\" is progressing."}}, CollisionCount:(*int32)(nil)} STEP: Deploying the webhook service STEP: Verifying the service has paired with the endpoint May 26 22:19:24.452: INFO: Waiting for amount of service:e2e-test-webhook endpoints to be 1 [It] should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Registering the webhook via the AdmissionRegistration API STEP: create a pod that should be denied by the webhook STEP: create a pod that causes the webhook to hang STEP: create a configmap that should be denied by the webhook STEP: create a configmap that should be admitted by the webhook STEP: update (PUT) the admitted configmap to a non-compliant one should be rejected by the webhook STEP: update (PATCH) the admitted configmap to a non-compliant one should be rejected by the webhook 
STEP: create a namespace that bypasses the webhook STEP: create a configmap that violates the webhook policy but is in a whitelisted namespace [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:19:34.646: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "webhook-8990" for this suite. STEP: Destroying namespace "webhook-8990-markers" for this suite. [AfterEach] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/webhook.go:102 • [SLOW TEST:16.224 seconds] [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 should be able to deny pod and configmap creation [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] AdmissionWebhook [Privileged:ClusterAdmin] should be able to deny pod and configmap creation [Conformance]","total":278,"completed":268,"skipped":4265,"failed":0} SSSSSSSSSSSSSSSS ------------------------------ [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:19:34.797: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename projected STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40 [It] should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating the pod May 26 22:19:39.440: INFO: Successfully updated pod "labelsupdatea69cd58b-1f26-4956-ae89-92fa490d2474" [AfterEach] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:19:43.466: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "projected-7102" for this suite. 
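The labels-on-modification behavior verified above comes from a projected downwardAPI volume: the kubelet rewrites the mounted file when the pod's labels change. A minimal sketch with illustrative names and image:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: labelsupdate-demo                      # hypothetical
    labels: {stage: before}
  spec:
    containers:
    - name: client-container
      image: docker.io/library/busybox:1.29      # illustrative
      command: ["sh", "-c", "while true; do cat /etc/podinfo/labels; sleep 5; done"]
      volumeMounts:
      - {name: podinfo, mountPath: /etc/podinfo}
    volumes:
    - name: podinfo
      projected:
        sources:
        - downwardAPI:
            items:
            - path: labels
              fieldRef: {fieldPath: metadata.labels}
  EOF
  # then: kubectl label pod labelsupdate-demo stage=after --overwrite
  # and the mounted file is refreshed in place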
• [SLOW TEST:8.678 seconds] [sig-storage] Projected downwardAPI /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34 should update labels on modification [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-storage] Projected downwardAPI should update labels on modification [NodeConformance] [Conformance]","total":278,"completed":269,"skipped":4281,"failed":0} SSS ------------------------------ [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:19:43.475: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename emptydir STEP: Waiting for a default service account to be provisioned in namespace [It] should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating a pod to test emptydir 0644 on node default medium May 26 22:19:43.598: INFO: Waiting up to 5m0s for pod "pod-071c4520-61b3-4015-99d7-cb13d912bff0" in namespace "emptydir-1829" to be "success or failure" May 26 22:19:43.631: INFO: Pod "pod-071c4520-61b3-4015-99d7-cb13d912bff0": Phase="Pending", Reason="", readiness=false. Elapsed: 33.230228ms May 26 22:19:45.634: INFO: Pod "pod-071c4520-61b3-4015-99d7-cb13d912bff0": Phase="Pending", Reason="", readiness=false. Elapsed: 2.036660949s May 26 22:19:47.642: INFO: Pod "pod-071c4520-61b3-4015-99d7-cb13d912bff0": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.044133284s STEP: Saw pod success May 26 22:19:47.642: INFO: Pod "pod-071c4520-61b3-4015-99d7-cb13d912bff0" satisfied condition "success or failure" May 26 22:19:47.645: INFO: Trying to get logs from node jerma-worker pod pod-071c4520-61b3-4015-99d7-cb13d912bff0 container test-container: STEP: delete the pod May 26 22:19:47.751: INFO: Waiting for pod pod-071c4520-61b3-4015-99d7-cb13d912bff0 to disappear May 26 22:19:47.766: INFO: Pod pod-071c4520-61b3-4015-99d7-cb13d912bff0 no longer exists [AfterEach] [sig-storage] EmptyDir volumes /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:19:47.766: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "emptydir-1829" for this suite. 
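What the (non-root,0644,default) case boils down to: run as a non-root UID, write a 0644 file into an emptyDir on the node's default medium, and check the result. Roughly, with an illustrative image and command in place of the suite's own mounttest container:

  cat <<'EOF' | kubectl apply -f -
  apiVersion: v1
  kind: Pod
  metadata:
    name: emptydir-0644-demo                     # hypothetical
  spec:
    securityContext:
      runAsUser: 1000                            # the "non-root" part
    restartPolicy: Never
    containers:
    - name: test-container
      image: docker.io/library/busybox:1.29      # illustrative
      command: ["sh", "-c", "touch /test-volume/f && chmod 0644 /test-volume/f && ls -l /test-volume/f"]
      volumeMounts:
      - {name: test-volume, mountPath: /test-volume}
    volumes:
    - name: test-volume
      emptyDir: {}                               # "default" medium: node-local disk
  EOF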
•{"msg":"PASSED [sig-storage] EmptyDir volumes should support (non-root,0644,default) [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":270,"skipped":4284,"failed":0} SS ------------------------------ [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:19:47.775: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename crd-watch STEP: Waiting for a default service account to be provisioned in namespace [It] watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 May 26 22:19:47.875: INFO: >>> kubeConfig: /root/.kube/config STEP: Creating first CR May 26 22:19:48.478: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-26T22:19:48Z generation:1 name:name1 resourceVersion:19397088 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0226e923-73e5-4521-82e2-abcc6732eb2c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Creating second CR May 26 22:19:58.484: INFO: Got : ADDED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-26T22:19:58Z generation:1 name:name2 resourceVersion:19397138 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1bde6e97-80b5-4cfd-a34a-1998e2401b72] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying first CR May 26 22:20:08.503: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-26T22:19:48Z generation:2 name:name1 resourceVersion:19397171 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0226e923-73e5-4521-82e2-abcc6732eb2c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Modifying second CR May 26 22:20:18.509: INFO: Got : MODIFIED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-26T22:19:58Z generation:2 name:name2 resourceVersion:19397200 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1bde6e97-80b5-4cfd-a34a-1998e2401b72] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting first CR May 26 22:20:28.518: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-26T22:19:48Z generation:2 name:name1 resourceVersion:19397230 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name1 uid:0226e923-73e5-4521-82e2-abcc6732eb2c] num:map[num1:9223372036854775807 num2:1000000]]} STEP: Deleting second CR May 26 22:20:38.528: INFO: Got : DELETED &{map[apiVersion:mygroup.example.com/v1beta1 content:map[key:value] dummy:test kind:WishIHadChosenNoxu metadata:map[creationTimestamp:2020-05-26T22:19:58Z generation:2 name:name2 resourceVersion:19397260 selfLink:/apis/mygroup.example.com/v1beta1/noxus/name2 uid:1bde6e97-80b5-4cfd-a34a-1998e2401b72] 
num:map[num1:9223372036854775807 num2:1000000]]} [AfterEach] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:20:49.039: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "crd-watch-5363" for this suite. • [SLOW TEST:61.274 seconds] [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/framework.go:23 CustomResourceDefinition Watch /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apimachinery/crd_watch.go:41 watch on custom resource definition objects [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 ------------------------------ {"msg":"PASSED [sig-api-machinery] CustomResourceDefinition Watch [Privileged:ClusterAdmin] CustomResourceDefinition Watch watch on custom resource definition objects [Conformance]","total":278,"completed":271,"skipped":4286,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:20:49.049: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename kubectl STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/kubectl/kubectl.go:272 [It] should check if Kubernetes master services is included in cluster-info [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: validating cluster-info May 26 22:20:49.180: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config cluster-info' May 26 22:20:49.268: INFO: stderr: "" May 26 22:20:49.268: INFO: stdout: "\x1b[0;32mKubernetes master\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770\x1b[0m\n\x1b[0;32mKubeDNS\x1b[0m is running at \x1b[0;33mhttps://172.30.12.66:32770/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy\x1b[0m\n\nTo further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.\n" [AfterEach] [sig-cli] Kubectl client /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:20:49.268: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "kubectl-2682" for this suite. 
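The cluster-info validation is a one-liner, and the dump variant that its own output recommends is the usual next step when something looks off:

  kubectl cluster-info        # prints the master and KubeDNS endpoints, as captured above
  kubectl cluster-info dump   # the fuller diagnostic its output points to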
•{"msg":"PASSED [sig-cli] Kubectl client Kubectl cluster-info should check if Kubernetes master services is included in cluster-info [Conformance]","total":278,"completed":272,"skipped":4327,"failed":0} SSSSSSSSSSSSSSSSS ------------------------------ [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:20:49.290: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename configmap STEP: Waiting for a default service account to be provisioned in namespace [It] should fail to create ConfigMap with empty key [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Creating configMap that has name configmap-test-emptyKey-4a86cc35-49b7-4aa4-ab41-6e035f32e210 [AfterEach] [sig-node] ConfigMap /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152 May 26 22:20:49.336: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready STEP: Destroying namespace "configmap-8479" for this suite. •{"msg":"PASSED [sig-node] ConfigMap should fail to create ConfigMap with empty key [Conformance]","total":278,"completed":273,"skipped":4344,"failed":0} SSSSSSSSSSSSSSSSSSSSSSSSSS ------------------------------ [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151 STEP: Creating a kubernetes client May 26 22:20:49.370: INFO: >>> kubeConfig: /root/.kube/config STEP: Building a namespace api object, basename statefulset STEP: Waiting for a default service account to be provisioned in namespace [BeforeEach] [sig-apps] StatefulSet /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:64 [BeforeEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:79 STEP: Creating service test in namespace statefulset-2194 [It] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance] /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721 STEP: Initializing watcher for selector baz=blah,foo=bar STEP: Creating stateful set ss in namespace statefulset-2194 STEP: Waiting until all stateful set ss replicas will be running in namespace statefulset-2194 May 26 22:20:49.514: INFO: Found 0 stateful pods, waiting for 1 May 26 22:20:59.519: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true STEP: Confirming that stateful set scale up will halt with unhealthy stateful pod May 26 22:20:59.523: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2194 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true' May 26 22:20:59.790: INFO: stderr: "I0526 22:20:59.657894 3824 log.go:172] (0xc0006b3130) (0xc00069da40) Create 
stream\nI0526 22:20:59.657959 3824 log.go:172] (0xc0006b3130) (0xc00069da40) Stream added, broadcasting: 1\nI0526 22:20:59.660733 3824 log.go:172] (0xc0006b3130) Reply frame received for 1\nI0526 22:20:59.660769 3824 log.go:172] (0xc0006b3130) (0xc00069dc20) Create stream\nI0526 22:20:59.660781 3824 log.go:172] (0xc0006b3130) (0xc00069dc20) Stream added, broadcasting: 3\nI0526 22:20:59.662433 3824 log.go:172] (0xc0006b3130) Reply frame received for 3\nI0526 22:20:59.662486 3824 log.go:172] (0xc0006b3130) (0xc0008f4000) Create stream\nI0526 22:20:59.662503 3824 log.go:172] (0xc0006b3130) (0xc0008f4000) Stream added, broadcasting: 5\nI0526 22:20:59.663477 3824 log.go:172] (0xc0006b3130) Reply frame received for 5\nI0526 22:20:59.738693 3824 log.go:172] (0xc0006b3130) Data frame received for 5\nI0526 22:20:59.738730 3824 log.go:172] (0xc0008f4000) (5) Data frame handling\nI0526 22:20:59.738752 3824 log.go:172] (0xc0008f4000) (5) Data frame sent\n+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\nI0526 22:20:59.782374 3824 log.go:172] (0xc0006b3130) Data frame received for 3\nI0526 22:20:59.782396 3824 log.go:172] (0xc00069dc20) (3) Data frame handling\nI0526 22:20:59.782404 3824 log.go:172] (0xc00069dc20) (3) Data frame sent\nI0526 22:20:59.782412 3824 log.go:172] (0xc0006b3130) Data frame received for 3\nI0526 22:20:59.782419 3824 log.go:172] (0xc00069dc20) (3) Data frame handling\nI0526 22:20:59.782651 3824 log.go:172] (0xc0006b3130) Data frame received for 5\nI0526 22:20:59.782668 3824 log.go:172] (0xc0008f4000) (5) Data frame handling\nI0526 22:20:59.784762 3824 log.go:172] (0xc0006b3130) Data frame received for 1\nI0526 22:20:59.784778 3824 log.go:172] (0xc00069da40) (1) Data frame handling\nI0526 22:20:59.784789 3824 log.go:172] (0xc00069da40) (1) Data frame sent\nI0526 22:20:59.784801 3824 log.go:172] (0xc0006b3130) (0xc00069da40) Stream removed, broadcasting: 1\nI0526 22:20:59.785019 3824 log.go:172] (0xc0006b3130) (0xc00069da40) Stream removed, broadcasting: 1\nI0526 22:20:59.785037 3824 log.go:172] (0xc0006b3130) (0xc00069dc20) Stream removed, broadcasting: 3\nI0526 22:20:59.785048 3824 log.go:172] (0xc0006b3130) (0xc0008f4000) Stream removed, broadcasting: 5\n" May 26 22:20:59.790: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n" May 26 22:20:59.790: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html' May 26 22:20:59.832: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=true May 26 22:21:09.837: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false May 26 22:21:09.837: INFO: Waiting for statefulset status.replicas updated to 0 May 26 22:21:09.858: INFO: Verifying statefulset ss doesn't scale past 1 for another 9.999999683s May 26 22:21:10.863: INFO: Verifying statefulset ss doesn't scale past 1 for another 8.98876955s May 26 22:21:11.869: INFO: Verifying statefulset ss doesn't scale past 1 for another 7.983649913s May 26 22:21:12.874: INFO: Verifying statefulset ss doesn't scale past 1 for another 6.977810149s May 26 22:21:13.879: INFO: Verifying statefulset ss doesn't scale past 1 for another 5.972326165s May 26 22:21:14.884: INFO: Verifying statefulset ss doesn't scale past 1 for another 4.967358094s May 26 22:21:15.889: INFO: Verifying statefulset ss doesn't scale past 1 for another 3.962496812s May 26 22:21:16.895: INFO: Verifying statefulset ss doesn't scale past 1 for another 
2.957429307s May 26 22:21:17.899: INFO: Verifying statefulset ss doesn't scale past 1 for another 1.951965862s May 26 22:21:18.904: INFO: Verifying statefulset ss doesn't scale past 1 for another 947.364116ms STEP: Scaling up stateful set ss to 3 replicas and waiting until all of them will be running in namespace statefulset-2194 May 26 22:21:19.909: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2194 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true' May 26 22:21:20.149: INFO: stderr: "I0526 22:21:20.049603 3846 log.go:172] (0xc000104f20) (0xc000804000) Create stream\nI0526 22:21:20.049654 3846 log.go:172] (0xc000104f20) (0xc000804000) Stream added, broadcasting: 1\nI0526 22:21:20.052853 3846 log.go:172] (0xc000104f20) Reply frame received for 1\nI0526 22:21:20.052979 3846 log.go:172] (0xc000104f20) (0xc0006e1540) Create stream\nI0526 22:21:20.053007 3846 log.go:172] (0xc000104f20) (0xc0006e1540) Stream added, broadcasting: 3\nI0526 22:21:20.054324 3846 log.go:172] (0xc000104f20) Reply frame received for 3\nI0526 22:21:20.054392 3846 log.go:172] (0xc000104f20) (0xc0006281e0) Create stream\nI0526 22:21:20.054420 3846 log.go:172] (0xc000104f20) (0xc0006281e0) Stream added, broadcasting: 5\nI0526 22:21:20.055585 3846 log.go:172] (0xc000104f20) Reply frame received for 5\nI0526 22:21:20.139305 3846 log.go:172] (0xc000104f20) Data frame received for 3\nI0526 22:21:20.139342 3846 log.go:172] (0xc0006e1540) (3) Data frame handling\nI0526 22:21:20.139353 3846 log.go:172] (0xc0006e1540) (3) Data frame sent\nI0526 22:21:20.139360 3846 log.go:172] (0xc000104f20) Data frame received for 3\nI0526 22:21:20.139368 3846 log.go:172] (0xc0006e1540) (3) Data frame handling\nI0526 22:21:20.139393 3846 log.go:172] (0xc000104f20) Data frame received for 5\nI0526 22:21:20.139401 3846 log.go:172] (0xc0006281e0) (5) Data frame handling\nI0526 22:21:20.139409 3846 log.go:172] (0xc0006281e0) (5) Data frame sent\nI0526 22:21:20.139416 3846 log.go:172] (0xc000104f20) Data frame received for 5\nI0526 22:21:20.139423 3846 log.go:172] (0xc0006281e0) (5) Data frame handling\n+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\nI0526 22:21:20.141373 3846 log.go:172] (0xc000104f20) Data frame received for 1\nI0526 22:21:20.141400 3846 log.go:172] (0xc000804000) (1) Data frame handling\nI0526 22:21:20.141424 3846 log.go:172] (0xc000804000) (1) Data frame sent\nI0526 22:21:20.141456 3846 log.go:172] (0xc000104f20) (0xc000804000) Stream removed, broadcasting: 1\nI0526 22:21:20.141582 3846 log.go:172] (0xc000104f20) Go away received\nI0526 22:21:20.141863 3846 log.go:172] (0xc000104f20) (0xc000804000) Stream removed, broadcasting: 1\nI0526 22:21:20.141882 3846 log.go:172] (0xc000104f20) (0xc0006e1540) Stream removed, broadcasting: 3\nI0526 22:21:20.141890 3846 log.go:172] (0xc000104f20) (0xc0006281e0) Stream removed, broadcasting: 5\n" May 26 22:21:20.149: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n" May 26 22:21:20.149: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html' May 26 22:21:20.153: INFO: Found 1 stateful pods, waiting for 3 May 26 22:21:30.157: INFO: Waiting for pod ss-0 to enter Running - Ready=true, currently Running - Ready=true May 26 22:21:30.157: INFO: Waiting for pod ss-1 to enter Running - Ready=true, currently Running - Ready=true May 26 22:21:30.157: INFO: Waiting for pod ss-2 to enter Running - Ready=true, 
STEP: Verifying that stateful set ss was scaled up in order
STEP: Scale down will halt with unhealthy stateful pod
May 26 22:21:30.163: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2194 ss-0 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 26 22:21:30.407: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" [kubectl exec stream set-up/tear-down debug lines from log.go:172 trimmed]
May 26 22:21:30.407: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 26 22:21:30.407: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-0: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 26 22:21:30.407: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2194 ss-1 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 26 22:21:30.684: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" [kubectl exec stream set-up/tear-down debug lines from log.go:172 trimmed]
May 26 22:21:30.684: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 26 22:21:30.684: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-1: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 26 22:21:30.684: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2194 ss-2 -- /bin/sh -x -c mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true'
May 26 22:21:30.949: INFO: stderr: "+ mv -v /usr/local/apache2/htdocs/index.html /tmp/\n" [kubectl exec stream set-up/tear-down debug lines from log.go:172 trimmed]
May 26 22:21:30.949: INFO: stdout: "'/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'\n"
May 26 22:21:30.950: INFO: stdout of mv -v /usr/local/apache2/htdocs/index.html /tmp/ || true on ss-2: '/usr/local/apache2/htdocs/index.html' -> '/tmp/index.html'
May 26 22:21:30.950: INFO: Waiting for statefulset status.replicas updated to 0
May 26 22:21:30.952: INFO: Waiting for stateful set status.readyReplicas to become 0, currently 2
May 26 22:21:40.961: INFO: Waiting for pod ss-0 to enter Running - Ready=false, currently Running - Ready=false
May 26 22:21:40.961: INFO: Waiting for pod ss-1 to enter Running - Ready=false, currently Running - Ready=false
May 26 22:21:40.961: INFO: Waiting for pod ss-2 to enter Running - Ready=false, currently Running - Ready=false
May 26 22:21:40.985: INFO: Verifying statefulset ss doesn't scale past 3 for another 9.999999639s
May 26 22:21:41.990: INFO: Verifying statefulset ss doesn't scale past 3 for another 8.982010135s
May 26 22:21:42.995: INFO: Verifying statefulset ss doesn't scale past 3 for another 7.977147865s
May 26 22:21:44.000: INFO: Verifying statefulset ss doesn't scale past 3 for another 6.972267071s
May 26 22:21:45.006: INFO: Verifying statefulset ss doesn't scale past 3 for another 5.96742816s
May 26 22:21:46.010: INFO: Verifying statefulset ss doesn't scale past 3 for another 4.96150218s
May 26 22:21:47.015: INFO: Verifying statefulset ss doesn't scale past 3 for another 3.956912147s
May 26 22:21:48.026: INFO: Verifying statefulset ss doesn't scale past 3 for another 2.952155557s
May 26 22:21:49.032: INFO: Verifying statefulset ss doesn't scale past 3 for another 1.941209104s
May 26 22:21:50.037: INFO: Verifying statefulset ss doesn't scale past 3 for another 935.558885ms
STEP: Scaling down stateful set ss to 0 replicas and waiting until none of pods will run in namespace statefulset-2194
May 26 22:21:51.043: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2194 ss-0 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 26 22:21:51.267: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" [kubectl exec stream set-up/tear-down debug lines from log.go:172 trimmed]
May 26 22:21:51.267: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 26 22:21:51.267: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-0: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 26 22:21:51.267: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2194 ss-1 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 26 22:21:51.474: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" [kubectl exec stream set-up/tear-down debug lines from log.go:172 trimmed]
May 26 22:21:51.474: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 26 22:21:51.474: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-1: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 26 22:21:51.475: INFO: Running '/usr/local/bin/kubectl --kubeconfig=/root/.kube/config exec --namespace=statefulset-2194 ss-2 -- /bin/sh -x -c mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true'
May 26 22:21:51.663: INFO: stderr: "+ mv -v /tmp/index.html /usr/local/apache2/htdocs/\n" [kubectl exec stream set-up/tear-down debug lines from log.go:172 trimmed]
May 26 22:21:51.663: INFO: stdout: "'/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'\n"
May 26 22:21:51.663: INFO: stdout of mv -v /tmp/index.html /usr/local/apache2/htdocs/ || true on ss-2: '/tmp/index.html' -> '/usr/local/apache2/htdocs/index.html'
May 26 22:21:51.663: INFO: Scaling statefulset ss to 0
STEP: Verifying that stateful set ss was scaled down in reverse order
[AfterEach] [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/statefulset.go:90
May 26 22:22:11.678: INFO: Deleting all statefulset in ns statefulset-2194
May 26 22:22:11.680: INFO: Scaling statefulset ss to 0
May 26 22:22:11.688: INFO: Waiting for statefulset status.replicas updated to 0
May 26 22:22:11.690: INFO: Deleting statefulset ss
[AfterEach] [sig-apps] StatefulSet
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 22:22:11.707: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "statefulset-2194" for this suite.
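The halt-and-resume behavior just verified can be reproduced without the framework: while every pod is Ready=false the controller refuses to delete any of the three replicas, and once readiness is restored a scale-down removes pods in reverse ordinal order (ss-2, then ss-1, then ss-0). A sketch under the same assumptions as above:

# observe the controller holding at 3 replicas while readiness is broken
kubectl --kubeconfig=/root/.kube/config get statefulset ss --namespace=statefulset-2194 -o jsonpath='{.status.replicas}/{.status.readyReplicas}'
# after restoring index.html on all three pods, scale to zero and watch deletions run ss-2 -> ss-1 -> ss-0
kubectl --kubeconfig=/root/.kube/config scale statefulset ss --namespace=statefulset-2194 --replicas=0
kubectl --kubeconfig=/root/.kube/config get pods --namespace=statefulset-2194 -w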
• [SLOW TEST:82.342 seconds]
[sig-apps] StatefulSet
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/apps/framework.go:23
  [k8s.io] Basic StatefulSet functionality [StatefulSetBasic]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:716
    Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]
    /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-apps] StatefulSet [k8s.io] Basic StatefulSet functionality [StatefulSetBasic] Scaling should happen in predictable order and halt if any stateful pod is unhealthy [Slow] [Conformance]","total":278,"completed":274,"skipped":4370,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 26 22:22:11.713: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide container's memory request [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 26 22:22:11.779: INFO: Waiting up to 5m0s for pod "downwardapi-volume-3534ff41-c241-4a87-8e64-c00b8f0f1066" in namespace "projected-6778" to be "success or failure"
May 26 22:22:11.782: INFO: Pod "downwardapi-volume-3534ff41-c241-4a87-8e64-c00b8f0f1066": Phase="Pending", Reason="", readiness=false. Elapsed: 2.8351ms
May 26 22:22:13.786: INFO: Pod "downwardapi-volume-3534ff41-c241-4a87-8e64-c00b8f0f1066": Phase="Pending", Reason="", readiness=false. Elapsed: 2.007256876s
May 26 22:22:15.791: INFO: Pod "downwardapi-volume-3534ff41-c241-4a87-8e64-c00b8f0f1066": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.011703781s
STEP: Saw pod success
May 26 22:22:15.791: INFO: Pod "downwardapi-volume-3534ff41-c241-4a87-8e64-c00b8f0f1066" satisfied condition "success or failure"
May 26 22:22:15.794: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-3534ff41-c241-4a87-8e64-c00b8f0f1066 container client-container:
STEP: delete the pod
May 26 22:22:15.837: INFO: Waiting for pod downwardapi-volume-3534ff41-c241-4a87-8e64-c00b8f0f1066 to disappear
May 26 22:22:15.874: INFO: Pod downwardapi-volume-3534ff41-c241-4a87-8e64-c00b8f0f1066 no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 22:22:15.874: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-6778" for this suite.
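The log above shows only the pod's name, not its spec. The pattern under test is a projected downwardAPI volume that exposes the container's own memory request as a file; a minimal hand-rolled equivalent (the pod name, image, mount path, and 32Mi request here are illustrative, not the test's actual values) would be:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-memory-demo   # illustrative name; the e2e test generates its own
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/memory_request"]
    resources:
      requests:
        memory: 32Mi
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: memory_request
            resourceFieldRef:
              containerName: client-container
              resource: requests.memory
              divisor: 1Mi
EOF

The container prints 32 (the request divided by the 1Mi divisor) and exits; fetching its logs, as the test does with "Trying to get logs" above, yields the value to assert on.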
•{"msg":"PASSED [sig-storage] Projected downwardAPI should provide container's memory request [NodeConformance] [Conformance]","total":278,"completed":275,"skipped":4408,"failed":0}
SSSSSSSSSSSSSSSS
------------------------------
[sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 26 22:22:15.883: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename projected
STEP: Waiting for a default service account to be provisioned in namespace
[BeforeEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:40
[It] should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test downward API volume plugin
May 26 22:22:15.930: INFO: Waiting up to 5m0s for pod "downwardapi-volume-19e53be0-d0a5-4c33-8304-9dc22d44b30b" in namespace "projected-9226" to be "success or failure"
May 26 22:22:15.932: INFO: Pod "downwardapi-volume-19e53be0-d0a5-4c33-8304-9dc22d44b30b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.764739ms
May 26 22:22:18.025: INFO: Pod "downwardapi-volume-19e53be0-d0a5-4c33-8304-9dc22d44b30b": Phase="Pending", Reason="", readiness=false. Elapsed: 2.095365562s
May 26 22:22:20.029: INFO: Pod "downwardapi-volume-19e53be0-d0a5-4c33-8304-9dc22d44b30b": Phase="Running", Reason="", readiness=true. Elapsed: 4.099654055s
May 26 22:22:22.034: INFO: Pod "downwardapi-volume-19e53be0-d0a5-4c33-8304-9dc22d44b30b": Phase="Succeeded", Reason="", readiness=false. Elapsed: 6.104617846s
STEP: Saw pod success
May 26 22:22:22.034: INFO: Pod "downwardapi-volume-19e53be0-d0a5-4c33-8304-9dc22d44b30b" satisfied condition "success or failure"
May 26 22:22:22.038: INFO: Trying to get logs from node jerma-worker2 pod downwardapi-volume-19e53be0-d0a5-4c33-8304-9dc22d44b30b container client-container:
STEP: delete the pod
May 26 22:22:22.064: INFO: Waiting for pod downwardapi-volume-19e53be0-d0a5-4c33-8304-9dc22d44b30b to disappear
May 26 22:22:22.067: INFO: Pod downwardapi-volume-19e53be0-d0a5-4c33-8304-9dc22d44b30b no longer exists
[AfterEach] [sig-storage] Projected downwardAPI
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 22:22:22.067: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "projected-9226" for this suite.
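The "podname only" variant is the same plumbing with a fieldRef in place of the resourceFieldRef; a sketch under the same assumptions (all names illustrative):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-podname-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: client-container
    image: busybox:1.29
    command: ["sh", "-c", "cat /etc/podinfo/podname"]
    volumeMounts:
    - name: podinfo
      mountPath: /etc/podinfo
  volumes:
  - name: podinfo
    projected:
      sources:
      - downwardAPI:
          items:
          - path: podname
            fieldRef:
              fieldPath: metadata.name
EOF

Here the pod's log output is simply its own name, downwardapi-podname-demo.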
• [SLOW TEST:6.191 seconds]
[sig-storage] Projected downwardAPI
/src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/common/projected_downwardapi.go:34
  should provide podname only [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
------------------------------
{"msg":"PASSED [sig-storage] Projected downwardAPI should provide podname only [NodeConformance] [Conformance]","total":278,"completed":276,"skipped":4424,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 26 22:22:22.076: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename emptydir
STEP: Waiting for a default service account to be provisioned in namespace
[It] volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
STEP: Creating a pod to test emptydir volume type on tmpfs
May 26 22:22:22.204: INFO: Waiting up to 5m0s for pod "pod-8f5b66e2-1821-47b7-9539-bb2979c37b53" in namespace "emptydir-5987" to be "success or failure"
May 26 22:22:22.212: INFO: Pod "pod-8f5b66e2-1821-47b7-9539-bb2979c37b53": Phase="Pending", Reason="", readiness=false. Elapsed: 7.824962ms
May 26 22:22:24.367: INFO: Pod "pod-8f5b66e2-1821-47b7-9539-bb2979c37b53": Phase="Pending", Reason="", readiness=false. Elapsed: 2.162978884s
May 26 22:22:26.372: INFO: Pod "pod-8f5b66e2-1821-47b7-9539-bb2979c37b53": Phase="Succeeded", Reason="", readiness=false. Elapsed: 4.16770789s
STEP: Saw pod success
May 26 22:22:26.372: INFO: Pod "pod-8f5b66e2-1821-47b7-9539-bb2979c37b53" satisfied condition "success or failure"
May 26 22:22:26.375: INFO: Trying to get logs from node jerma-worker pod pod-8f5b66e2-1821-47b7-9539-bb2979c37b53 container test-container:
STEP: delete the pod
May 26 22:22:26.422: INFO: Waiting for pod pod-8f5b66e2-1821-47b7-9539-bb2979c37b53 to disappear
May 26 22:22:26.433: INFO: Pod pod-8f5b66e2-1821-47b7-9539-bb2979c37b53 no longer exists
[AfterEach] [sig-storage] EmptyDir volumes
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 22:22:26.433: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "emptydir-5987" for this suite.
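"Volume on tmpfs" here means an emptyDir with medium: Memory, which the kubelet backs with a tmpfs mount; the test then asserts on the mount's file mode. To inspect the same thing by hand (pod name, image, and mount path are illustrative; the upstream test's exact path and expected mode are not visible in this log):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-tmpfs-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29
    command: ["sh", "-c", "mount | grep /cache && ls -ld /cache"]
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory   # tmpfs-backed emptyDir
EOF

In the pod's logs, mount reports a tmpfs filesystem at /cache and ls -ld shows the directory mode that a test like this one would check.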
•{"msg":"PASSED [sig-storage] EmptyDir volumes volume on tmpfs should have the correct mode [LinuxOnly] [NodeConformance] [Conformance]","total":278,"completed":277,"skipped":4485,"failed":0}
SSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSSS
------------------------------
[k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[BeforeEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:151
STEP: Creating a kubernetes client
May 26 22:22:26.441: INFO: >>> kubeConfig: /root/.kube/config
STEP: Building a namespace api object, basename containers
STEP: Waiting for a default service account to be provisioned in namespace
[It] should use the image defaults if command and args are blank [NodeConformance] [Conformance]
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:721
[AfterEach] [k8s.io] Docker Containers
  /src/k8s.io/kubernetes/_output/local/go/src/k8s.io/kubernetes/test/e2e/framework/framework.go:152
May 26 22:22:30.534: INFO: Waiting up to 3m0s for all (but 0) nodes to be ready
STEP: Destroying namespace "containers-4903" for this suite.
•{"msg":"PASSED [k8s.io] Docker Containers should use the image defaults if command and args are blank [NodeConformance] [Conformance]","total":278,"completed":278,"skipped":4549,"failed":0}
SSSSSSSSSSSSSSS
May 26 22:22:30.543: INFO: Running AfterSuite actions on all nodes
May 26 22:22:30.543: INFO: Running AfterSuite actions on node 1
May 26 22:22:30.543: INFO: Skipping dumping logs from cluster
{"msg":"Test Suite completed","total":278,"completed":278,"skipped":4564,"failed":0}

Ran 278 of 4842 Specs in 4385.672 seconds
SUCCESS! -- 278 Passed | 0 Failed | 0 Pending | 4564 Skipped
PASS
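A closing note on that final spec, "should use the image defaults if command and args are blank": when a container spec sets neither command nor args, the kubelet runs the image's own ENTRYPOINT and CMD. A minimal illustration (pod name and image are placeholders, not what the test actually ran):

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: image-defaults-demo   # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: test-container
    image: busybox:1.29   # no command/args set, so the image's default CMD runs
EOF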